00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2269 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3532 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.110 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.111 The recommended git tool is: git 00:00:00.111 using credential 00000000-0000-0000-0000-000000000002 00:00:00.113 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.172 Fetching changes from the remote Git repository 00:00:00.174 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.224 Using shallow fetch with depth 1 00:00:00.224 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.224 > git --version # timeout=10 00:00:00.266 > git --version # 'git version 2.39.2' 00:00:00.266 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.292 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.292 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.991 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.001 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.011 Checking out Revision bc56972291bf21b4d2a602b495a165146a8d67a1 (FETCH_HEAD) 00:00:04.011 > git config core.sparsecheckout # timeout=10 00:00:04.020 > git read-tree -mu HEAD # timeout=10 00:00:04.034 > git checkout -f bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=5 00:00:04.052 Commit message: "jenkins/jjb-config: Remove extendedChoice from ipxe-test-images" 00:00:04.052 > git rev-list --no-walk bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=10 00:00:04.143 [Pipeline] Start of Pipeline 00:00:04.157 [Pipeline] library 00:00:04.158 Loading library shm_lib@master 00:00:04.158 Library shm_lib@master is cached. Copying from home. 00:00:04.176 [Pipeline] node 00:00:04.186 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.188 [Pipeline] { 00:00:04.199 [Pipeline] catchError 00:00:04.201 [Pipeline] { 00:00:04.216 [Pipeline] wrap 00:00:04.225 [Pipeline] { 00:00:04.234 [Pipeline] stage 00:00:04.236 [Pipeline] { (Prologue) 00:00:04.477 [Pipeline] sh 00:00:04.760 + logger -p user.info -t JENKINS-CI 00:00:04.781 [Pipeline] echo 00:00:04.783 Node: GP11 00:00:04.792 [Pipeline] sh 00:00:05.088 [Pipeline] setCustomBuildProperty 00:00:05.101 [Pipeline] echo 00:00:05.103 Cleanup processes 00:00:05.110 [Pipeline] sh 00:00:05.391 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.391 1368255 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.402 [Pipeline] sh 00:00:05.682 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.682 ++ awk '{print $1}' 00:00:05.682 ++ grep -v 'sudo pgrep' 00:00:05.682 + sudo kill -9 00:00:05.682 + true 00:00:05.694 [Pipeline] cleanWs 00:00:05.703 [WS-CLEANUP] Deleting project workspace... 00:00:05.703 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.709 [WS-CLEANUP] done 00:00:05.714 [Pipeline] setCustomBuildProperty 00:00:05.723 [Pipeline] sh 00:00:05.999 + sudo git config --global --replace-all safe.directory '*' 00:00:06.075 [Pipeline] httpRequest 00:00:06.431 [Pipeline] echo 00:00:06.432 Sorcerer 10.211.164.101 is alive 00:00:06.439 [Pipeline] retry 00:00:06.440 [Pipeline] { 00:00:06.451 [Pipeline] httpRequest 00:00:06.456 HttpMethod: GET 00:00:06.456 URL: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:06.457 Sending request to url: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:06.472 Response Code: HTTP/1.1 200 OK 00:00:06.472 Success: Status code 200 is in the accepted range: 200,404 00:00:06.473 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:08.593 [Pipeline] } 00:00:08.603 [Pipeline] // retry 00:00:08.607 [Pipeline] sh 00:00:08.880 + tar --no-same-owner -xf jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:08.899 [Pipeline] httpRequest 00:00:09.233 [Pipeline] echo 00:00:09.235 Sorcerer 10.211.164.101 is alive 00:00:09.246 [Pipeline] retry 00:00:09.248 [Pipeline] { 00:00:09.264 [Pipeline] httpRequest 00:00:09.268 HttpMethod: GET 00:00:09.268 URL: http://10.211.164.101/packages/spdk_bbce7a87401bc737804431cd08d24fede99b1400.tar.gz 00:00:09.269 Sending request to url: http://10.211.164.101/packages/spdk_bbce7a87401bc737804431cd08d24fede99b1400.tar.gz 00:00:09.292 Response Code: HTTP/1.1 200 OK 00:00:09.292 Success: Status code 200 is in the accepted range: 200,404 00:00:09.293 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_bbce7a87401bc737804431cd08d24fede99b1400.tar.gz 00:01:35.067 [Pipeline] } 00:01:35.084 [Pipeline] // retry 00:01:35.092 [Pipeline] sh 00:01:35.374 + tar --no-same-owner -xf spdk_bbce7a87401bc737804431cd08d24fede99b1400.tar.gz 00:01:37.911 [Pipeline] sh 00:01:38.191 + git -C spdk log --oneline -n5 00:01:38.191 bbce7a874 event: move struct spdk_lw_thread to internal header 00:01:38.191 5031f0f3b module/raid: Assign bdev_io buffers to raid_io 00:01:38.191 dc3ea9d27 bdevperf: Allocate an md buffer for verify op 00:01:38.191 0ce363beb spdk_log: introduce spdk_log_ext API 00:01:38.191 412fced1b bdev/compress: unmap support. 
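Both downloads above follow the same pattern: pull a tarball pinned to a specific commit from the internal package cache and unpack it into the job workspace. A minimal stand-alone sketch of that step follows; it assumes curl is available on the builder (the pipeline itself uses the Jenkins httpRequest step), and the cache host, commit hash, paths, and tar flags are copied from the log.

#!/usr/bin/env bash
# Sketch of the fetch-and-extract step above: download the SPDK tarball pinned
# to the commit recorded in this log and unpack it into the job workspace.
set -euo pipefail

CACHE_HOST="10.211.164.101"                               # package cache seen in the log
COMMIT="bbce7a87401bc737804431cd08d24fede99b1400"         # pinned SPDK revision
WORKSPACE="/var/jenkins/workspace/nvmf-tcp-phy-autotest"  # job workspace from the log

cd "$WORKSPACE"
curl -fSL -o "spdk_${COMMIT}.tar.gz" \
    "http://${CACHE_HOST}/packages/spdk_${COMMIT}.tar.gz"
tar --no-same-owner -xf "spdk_${COMMIT}.tar.gz"           # same flags the job uses
git -C spdk log --oneline -n5                             # confirm the unpacked revision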
00:01:38.207 [Pipeline] withCredentials 00:01:38.216 > git --version # timeout=10 00:01:38.227 > git --version # 'git version 2.39.2' 00:01:38.239 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:38.241 [Pipeline] { 00:01:38.250 [Pipeline] retry 00:01:38.252 [Pipeline] { 00:01:38.266 [Pipeline] sh 00:01:38.546 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:56.632 [Pipeline] } 00:01:56.649 [Pipeline] // retry 00:01:56.654 [Pipeline] } 00:01:56.671 [Pipeline] // withCredentials 00:01:56.681 [Pipeline] httpRequest 00:01:57.080 [Pipeline] echo 00:01:57.081 Sorcerer 10.211.164.101 is alive 00:01:57.090 [Pipeline] retry 00:01:57.092 [Pipeline] { 00:01:57.107 [Pipeline] httpRequest 00:01:57.111 HttpMethod: GET 00:01:57.112 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:57.112 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:57.114 Response Code: HTTP/1.1 200 OK 00:01:57.115 Success: Status code 200 is in the accepted range: 200,404 00:01:57.115 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:02:02.460 [Pipeline] } 00:02:02.479 [Pipeline] // retry 00:02:02.486 [Pipeline] sh 00:02:02.767 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:02:04.678 [Pipeline] sh 00:02:04.957 + git -C dpdk log --oneline -n5 00:02:04.957 caf0f5d395 version: 22.11.4 00:02:04.957 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:04.957 dc9c799c7d vhost: fix missing spinlock unlock 00:02:04.957 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:04.957 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:04.966 [Pipeline] } 00:02:04.979 [Pipeline] // stage 00:02:04.987 [Pipeline] stage 00:02:04.989 [Pipeline] { (Prepare) 00:02:05.007 [Pipeline] writeFile 00:02:05.022 [Pipeline] sh 00:02:05.301 + logger -p user.info -t JENKINS-CI 00:02:05.313 [Pipeline] sh 00:02:05.595 + logger -p user.info -t JENKINS-CI 00:02:05.606 [Pipeline] sh 00:02:05.885 + cat autorun-spdk.conf 00:02:05.885 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:05.885 SPDK_TEST_NVMF=1 00:02:05.885 SPDK_TEST_NVME_CLI=1 00:02:05.885 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:05.885 SPDK_TEST_NVMF_NICS=e810 00:02:05.885 SPDK_TEST_VFIOUSER=1 00:02:05.885 SPDK_RUN_UBSAN=1 00:02:05.885 NET_TYPE=phy 00:02:05.885 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:05.885 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:05.891 RUN_NIGHTLY=1 00:02:05.896 [Pipeline] readFile 00:02:05.919 [Pipeline] withEnv 00:02:05.922 [Pipeline] { 00:02:05.933 [Pipeline] sh 00:02:06.216 + set -ex 00:02:06.216 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:02:06.216 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:06.216 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:06.216 ++ SPDK_TEST_NVMF=1 00:02:06.216 ++ SPDK_TEST_NVME_CLI=1 00:02:06.216 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:06.216 ++ SPDK_TEST_NVMF_NICS=e810 00:02:06.216 ++ SPDK_TEST_VFIOUSER=1 00:02:06.216 ++ SPDK_RUN_UBSAN=1 00:02:06.216 ++ NET_TYPE=phy 00:02:06.216 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:06.216 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:06.216 ++ RUN_NIGHTLY=1 00:02:06.216 + case $SPDK_TEST_NVMF_NICS in 00:02:06.216 + DRIVERS=ice 00:02:06.216 + [[ tcp == \r\d\m\a ]] 00:02:06.216 + [[ -n ice ]] 00:02:06.216 + sudo rmmod mlx4_ib 
mlx5_ib irdma i40iw iw_cxgb4 00:02:06.216 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:02:06.216 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:02:06.216 rmmod: ERROR: Module irdma is not currently loaded 00:02:06.216 rmmod: ERROR: Module i40iw is not currently loaded 00:02:06.216 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:02:06.216 + true 00:02:06.216 + for D in $DRIVERS 00:02:06.216 + sudo modprobe ice 00:02:06.216 + exit 0 00:02:06.224 [Pipeline] } 00:02:06.235 [Pipeline] // withEnv 00:02:06.240 [Pipeline] } 00:02:06.252 [Pipeline] // stage 00:02:06.261 [Pipeline] catchError 00:02:06.262 [Pipeline] { 00:02:06.276 [Pipeline] timeout 00:02:06.276 Timeout set to expire in 1 hr 0 min 00:02:06.278 [Pipeline] { 00:02:06.291 [Pipeline] stage 00:02:06.293 [Pipeline] { (Tests) 00:02:06.306 [Pipeline] sh 00:02:06.587 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:06.587 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:06.587 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:06.587 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:02:06.587 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:06.587 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:06.587 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:02:06.587 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:06.587 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:06.587 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:06.587 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:02:06.587 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:06.587 + source /etc/os-release 00:02:06.587 ++ NAME='Fedora Linux' 00:02:06.587 ++ VERSION='39 (Cloud Edition)' 00:02:06.587 ++ ID=fedora 00:02:06.587 ++ VERSION_ID=39 00:02:06.587 ++ VERSION_CODENAME= 00:02:06.587 ++ PLATFORM_ID=platform:f39 00:02:06.587 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:06.587 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:06.587 ++ LOGO=fedora-logo-icon 00:02:06.587 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:06.587 ++ HOME_URL=https://fedoraproject.org/ 00:02:06.587 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:06.587 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:06.587 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:06.587 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:06.587 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:06.587 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:06.587 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:06.587 ++ SUPPORT_END=2024-11-12 00:02:06.587 ++ VARIANT='Cloud Edition' 00:02:06.587 ++ VARIANT_ID=cloud 00:02:06.587 + uname -a 00:02:06.587 Linux spdk-gp-11 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:06.587 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:07.519 Hugepages 00:02:07.519 node hugesize free / total 00:02:07.519 node0 1048576kB 0 / 0 00:02:07.519 node0 2048kB 0 / 0 00:02:07.519 node1 1048576kB 0 / 0 00:02:07.519 node1 2048kB 0 / 0 00:02:07.519 00:02:07.519 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:07.519 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:02:07.519 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:02:07.519 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:02:07.519 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:02:07.519 I/OAT 0000:00:04.4 
8086 0e24 0 ioatdma - - 00:02:07.777 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:02:07.777 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:02:07.777 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:02:07.777 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:02:07.777 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:02:07.777 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:02:07.777 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:02:07.777 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:02:07.777 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:02:07.778 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:02:07.778 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:02:07.778 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:02:07.778 + rm -f /tmp/spdk-ld-path 00:02:07.778 + source autorun-spdk.conf 00:02:07.778 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:07.778 ++ SPDK_TEST_NVMF=1 00:02:07.778 ++ SPDK_TEST_NVME_CLI=1 00:02:07.778 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:07.778 ++ SPDK_TEST_NVMF_NICS=e810 00:02:07.778 ++ SPDK_TEST_VFIOUSER=1 00:02:07.778 ++ SPDK_RUN_UBSAN=1 00:02:07.778 ++ NET_TYPE=phy 00:02:07.778 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:07.778 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:07.778 ++ RUN_NIGHTLY=1 00:02:07.778 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:07.778 + [[ -n '' ]] 00:02:07.778 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:07.778 + for M in /var/spdk/build-*-manifest.txt 00:02:07.778 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:07.778 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:07.778 + for M in /var/spdk/build-*-manifest.txt 00:02:07.778 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:07.778 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:07.778 + for M in /var/spdk/build-*-manifest.txt 00:02:07.778 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:07.778 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:07.778 ++ uname 00:02:07.778 + [[ Linux == \L\i\n\u\x ]] 00:02:07.778 + sudo dmesg -T 00:02:07.778 + sudo dmesg --clear 00:02:07.778 + dmesg_pid=1369595 00:02:07.778 + [[ Fedora Linux == FreeBSD ]] 00:02:07.778 + sudo dmesg -Tw 00:02:07.778 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:07.778 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:07.778 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:07.778 + [[ -x /usr/src/fio-static/fio ]] 00:02:07.778 + export FIO_BIN=/usr/src/fio-static/fio 00:02:07.778 + FIO_BIN=/usr/src/fio-static/fio 00:02:07.778 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:07.778 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:07.778 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:07.778 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:07.778 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:07.778 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:07.778 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:07.778 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:07.778 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:07.778 Test configuration: 00:02:07.778 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:07.778 SPDK_TEST_NVMF=1 00:02:07.778 SPDK_TEST_NVME_CLI=1 00:02:07.778 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:07.778 SPDK_TEST_NVMF_NICS=e810 00:02:07.778 SPDK_TEST_VFIOUSER=1 00:02:07.778 SPDK_RUN_UBSAN=1 00:02:07.778 NET_TYPE=phy 00:02:07.778 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:07.778 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:07.778 RUN_NIGHTLY=1 01:12:53 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:02:07.778 01:12:53 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:07.778 01:12:53 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:07.778 01:12:53 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:07.778 01:12:53 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:07.778 01:12:53 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:07.778 01:12:53 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:07.778 01:12:53 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:07.778 01:12:53 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:07.778 01:12:53 -- paths/export.sh@5 -- $ export PATH 00:02:07.778 01:12:53 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:07.778 01:12:53 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:07.778 01:12:53 -- common/autobuild_common.sh@486 -- $ date +%s 00:02:07.778 01:12:53 -- 
common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728774773.XXXXXX 00:02:07.778 01:12:53 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728774773.bHVu3M 00:02:07.778 01:12:53 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:02:07.778 01:12:53 -- common/autobuild_common.sh@492 -- $ '[' -n v22.11.4 ']' 00:02:07.778 01:12:53 -- common/autobuild_common.sh@493 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:07.778 01:12:53 -- common/autobuild_common.sh@493 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:02:07.778 01:12:53 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:07.778 01:12:53 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:07.778 01:12:53 -- common/autobuild_common.sh@502 -- $ get_config_params 00:02:07.778 01:12:53 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:07.778 01:12:53 -- common/autotest_common.sh@10 -- $ set +x 00:02:07.778 01:12:53 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:02:07.778 01:12:53 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:02:07.778 01:12:53 -- pm/common@17 -- $ local monitor 00:02:07.778 01:12:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:07.778 01:12:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:07.778 01:12:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:07.778 01:12:53 -- pm/common@21 -- $ date +%s 00:02:07.778 01:12:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:07.778 01:12:53 -- pm/common@21 -- $ date +%s 00:02:07.778 01:12:53 -- pm/common@25 -- $ sleep 1 00:02:07.778 01:12:53 -- pm/common@21 -- $ date +%s 00:02:07.778 01:12:53 -- pm/common@21 -- $ date +%s 00:02:07.778 01:12:53 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728774773 00:02:07.778 01:12:53 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728774773 00:02:07.778 01:12:53 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728774773 00:02:07.778 01:12:53 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728774773 00:02:08.037 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728774773_collect-cpu-load.pm.log 00:02:08.037 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728774773_collect-cpu-temp.pm.log 00:02:08.037 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728774773_collect-vmstat.pm.log 00:02:08.037 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728774773_collect-bmc-pm.bmc.pm.log 00:02:08.970 01:12:54 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:02:08.970 01:12:54 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:08.970 01:12:54 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:08.970 01:12:54 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:08.970 01:12:54 -- spdk/autobuild.sh@16 -- $ date -u 00:02:08.970 Sat Oct 12 11:12:54 PM UTC 2024 00:02:08.970 01:12:54 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:08.970 v25.01-pre-55-gbbce7a874 00:02:08.970 01:12:54 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:08.971 01:12:54 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:08.971 01:12:54 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:08.971 01:12:54 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:08.971 01:12:54 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:08.971 01:12:54 -- common/autotest_common.sh@10 -- $ set +x 00:02:08.971 ************************************ 00:02:08.971 START TEST ubsan 00:02:08.971 ************************************ 00:02:08.971 01:12:54 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:08.971 using ubsan 00:02:08.971 00:02:08.971 real 0m0.000s 00:02:08.971 user 0m0.000s 00:02:08.971 sys 0m0.000s 00:02:08.971 01:12:54 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:08.971 01:12:54 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:08.971 ************************************ 00:02:08.971 END TEST ubsan 00:02:08.971 ************************************ 00:02:08.971 01:12:54 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:08.971 01:12:54 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:08.971 01:12:54 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:08.971 01:12:54 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:02:08.971 01:12:54 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:08.971 01:12:54 -- common/autotest_common.sh@10 -- $ set +x 00:02:08.971 ************************************ 00:02:08.971 START TEST build_native_dpdk 00:02:08.971 ************************************ 00:02:08.971 01:12:54 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:02:08.971 01:12:54 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:08.971 01:12:54 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:08.971 01:12:54 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:08.971 01:12:54 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:08.971 01:12:54 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:08.971 01:12:54 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:08.971 01:12:54 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:08.971 01:12:54 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:08.971 01:12:54 build_native_dpdk -- 
common/autobuild_common.sh@61 -- $ CC=gcc 00:02:08.971 01:12:54 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:08.971 01:12:54 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:08.971 01:12:54 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:08.971 01:12:54 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:08.971 01:12:54 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:08.971 01:12:54 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:08.971 01:12:54 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:08.971 01:12:54 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:08.971 01:12:54 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:02:08.971 01:12:54 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:08.971 01:12:54 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:02:08.971 caf0f5d395 version: 22.11.4 00:02:08.971 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:08.971 dc9c799c7d vhost: fix missing spinlock unlock 00:02:08.971 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:08.971 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:08.971 01:12:54 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:08.971 01:12:54 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:08.971 01:12:54 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:08.971 01:12:54 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:08.971 01:12:54 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:08.971 01:12:54 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:08.971 01:12:54 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:08.971 01:12:54 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:08.971 01:12:54 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:08.971 01:12:54 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:08.971 01:12:54 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:08.971 01:12:54 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:08.971 01:12:54 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:08.971 01:12:54 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:08.971 01:12:54 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:08.971 01:12:54 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:08.971 01:12:54 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:08.971 01:12:54 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 
22.11.4 21.11.0 00:02:08.971 01:12:54 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:08.971 01:12:54 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:08.971 01:12:54 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:08.971 01:12:54 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:08.971 01:12:54 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:08.971 01:12:54 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:08.971 01:12:54 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:08.971 01:12:54 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:08.971 01:12:54 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:08.971 01:12:54 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:08.971 01:12:54 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:08.971 01:12:54 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:08.971 01:12:54 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:08.971 01:12:54 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:08.971 01:12:54 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:08.971 01:12:54 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:08.971 01:12:54 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:08.971 01:12:54 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:08.971 01:12:54 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:08.971 01:12:54 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:08.971 01:12:54 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:08.971 01:12:54 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:08.971 01:12:54 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:08.971 01:12:54 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:08.971 01:12:54 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:08.971 01:12:54 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:08.971 01:12:54 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:08.971 01:12:54 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:08.971 patching file config/rte_config.h 00:02:08.971 Hunk #1 succeeded at 60 (offset 1 line). 
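The long xtrace block above is the dotted-version comparison from scripts/common.sh: each version is split on the characters ".-:" and the fields are compared numerically, so lt 22.11.4 21.11.0 returns 1 (22.11.4 is not older than 21.11.0), after which the config/rte_config.h fix-up is applied. A condensed re-statement of that comparison, simplified for illustration rather than copied from the script, looks like this:

# Condensed sketch of the cmp_versions logic traced above (illustrative, not the
# exact upstream code): split on ".-:" and compare field by field, numerically.
cmp_lt() {                              # returns 0 when $1 < $2
    local IFS='.-:'
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    done
    return 1                            # equal versions are not "less than"
}

cmp_lt 22.11.4 21.11.0 || echo "not older"   # matches the "return 1" traced above
cmp_lt 22.11.4 24.07.0 && echo "older"       # matches the check that gates the next patch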
00:02:08.971 01:12:54 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:02:08.971 01:12:54 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:02:08.971 01:12:54 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:08.971 01:12:54 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:08.971 01:12:54 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:08.971 01:12:54 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:08.971 01:12:54 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:08.971 01:12:54 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:08.971 01:12:54 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:08.971 01:12:54 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:08.971 01:12:54 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:08.971 01:12:54 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:08.971 01:12:54 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:08.971 01:12:54 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:08.971 01:12:54 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:08.971 01:12:54 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:08.971 01:12:54 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:08.971 01:12:54 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:08.971 01:12:54 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:08.971 01:12:54 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:08.971 01:12:54 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:08.971 01:12:54 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:08.972 01:12:54 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:08.972 01:12:54 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:08.972 01:12:54 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:08.972 01:12:54 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:08.972 01:12:54 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:08.972 01:12:54 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:08.972 01:12:54 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:08.972 01:12:54 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:08.972 patching file lib/pcapng/rte_pcapng.c 00:02:08.972 Hunk #1 succeeded at 110 (offset -18 lines). 
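Each of these fix-ups is gated on the DPDK version being built, as the lt/ge checks above show: the rte_pcapng.c patch is applied only because 22.11.4 is older than 24.07.0. Below is a minimal sketch of that pattern, reusing the cmp_lt helper from the previous sketch; the patch file name is a hypothetical placeholder, since the actual patch content is fed to patch -p1 on stdin by autobuild_common.sh and is not shown in this log.

# Version-gated patching, as exercised above. The patch path is a placeholder.
dpdk_ver=22.11.4

if cmp_lt "$dpdk_ver" 24.07.0; then
    # Pre-24.07 trees need the pcapng compatibility fix-up
    # ("patching file lib/pcapng/rte_pcapng.c" in the log above).
    patch -p1 -d dpdk < fixups/pcapng-compat.patch       # hypothetical file name
else
    echo "DPDK $dpdk_ver is new enough; skipping pcapng fix-up"
fi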
00:02:08.972 01:12:54 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 22.11.4 24.07.0 00:02:08.972 01:12:54 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0 00:02:08.972 01:12:54 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:08.972 01:12:54 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:08.972 01:12:54 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:08.972 01:12:54 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:08.972 01:12:54 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:08.972 01:12:54 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:08.972 01:12:54 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:08.972 01:12:54 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:08.972 01:12:54 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:08.972 01:12:54 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:08.972 01:12:54 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:08.972 01:12:54 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:08.972 01:12:54 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:08.972 01:12:54 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:08.972 01:12:54 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:08.972 01:12:54 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:08.972 01:12:54 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:08.972 01:12:54 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:08.972 01:12:54 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:08.972 01:12:54 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:08.972 01:12:54 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:08.972 01:12:54 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:08.972 01:12:54 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:08.972 01:12:54 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:08.972 01:12:54 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:08.972 01:12:54 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:08.972 01:12:54 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:08.972 01:12:54 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:02:08.972 01:12:54 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:02:08.972 01:12:54 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:02:08.972 01:12:54 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:08.972 01:12:54 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:13.155 The Meson build system 00:02:13.155 Version: 1.5.0 00:02:13.155 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:13.155 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:02:13.155 Build 
type: native build 00:02:13.155 Program cat found: YES (/usr/bin/cat) 00:02:13.155 Project name: DPDK 00:02:13.155 Project version: 22.11.4 00:02:13.155 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:13.155 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:13.155 Host machine cpu family: x86_64 00:02:13.155 Host machine cpu: x86_64 00:02:13.155 Message: ## Building in Developer Mode ## 00:02:13.155 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:13.155 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:02:13.155 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:02:13.155 Program objdump found: YES (/usr/bin/objdump) 00:02:13.155 Program python3 found: YES (/usr/bin/python3) 00:02:13.155 Program cat found: YES (/usr/bin/cat) 00:02:13.155 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 00:02:13.155 Checking for size of "void *" : 8 00:02:13.155 Checking for size of "void *" : 8 (cached) 00:02:13.155 Library m found: YES 00:02:13.155 Library numa found: YES 00:02:13.155 Has header "numaif.h" : YES 00:02:13.155 Library fdt found: NO 00:02:13.155 Library execinfo found: NO 00:02:13.155 Has header "execinfo.h" : YES 00:02:13.155 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:13.155 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:13.155 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:13.155 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:13.155 Run-time dependency openssl found: YES 3.1.1 00:02:13.155 Run-time dependency libpcap found: YES 1.10.4 00:02:13.155 Has header "pcap.h" with dependency libpcap: YES 00:02:13.155 Compiler for C supports arguments -Wcast-qual: YES 00:02:13.155 Compiler for C supports arguments -Wdeprecated: YES 00:02:13.155 Compiler for C supports arguments -Wformat: YES 00:02:13.155 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:13.155 Compiler for C supports arguments -Wformat-security: NO 00:02:13.155 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:13.155 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:13.155 Compiler for C supports arguments -Wnested-externs: YES 00:02:13.155 Compiler for C supports arguments -Wold-style-definition: YES 00:02:13.155 Compiler for C supports arguments -Wpointer-arith: YES 00:02:13.155 Compiler for C supports arguments -Wsign-compare: YES 00:02:13.155 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:13.155 Compiler for C supports arguments -Wundef: YES 00:02:13.155 Compiler for C supports arguments -Wwrite-strings: YES 00:02:13.155 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:13.155 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:13.155 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:13.156 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:13.156 Compiler for C supports arguments -mavx512f: YES 00:02:13.156 Checking if "AVX512 checking" compiles: YES 00:02:13.156 Fetching value of define "__SSE4_2__" : 1 00:02:13.156 Fetching value of define "__AES__" : 1 00:02:13.156 Fetching value of define "__AVX__" : 1 00:02:13.156 Fetching value of define "__AVX2__" : (undefined) 00:02:13.156 Fetching value of define "__AVX512BW__" : (undefined) 
00:02:13.156 Fetching value of define "__AVX512CD__" : (undefined) 00:02:13.156 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:13.156 Fetching value of define "__AVX512F__" : (undefined) 00:02:13.156 Fetching value of define "__AVX512VL__" : (undefined) 00:02:13.156 Fetching value of define "__PCLMUL__" : 1 00:02:13.156 Fetching value of define "__RDRND__" : 1 00:02:13.156 Fetching value of define "__RDSEED__" : (undefined) 00:02:13.156 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:13.156 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:13.156 Message: lib/kvargs: Defining dependency "kvargs" 00:02:13.156 Message: lib/telemetry: Defining dependency "telemetry" 00:02:13.156 Checking for function "getentropy" : YES 00:02:13.156 Message: lib/eal: Defining dependency "eal" 00:02:13.156 Message: lib/ring: Defining dependency "ring" 00:02:13.156 Message: lib/rcu: Defining dependency "rcu" 00:02:13.156 Message: lib/mempool: Defining dependency "mempool" 00:02:13.156 Message: lib/mbuf: Defining dependency "mbuf" 00:02:13.156 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:13.156 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:13.156 Compiler for C supports arguments -mpclmul: YES 00:02:13.156 Compiler for C supports arguments -maes: YES 00:02:13.156 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:13.156 Compiler for C supports arguments -mavx512bw: YES 00:02:13.156 Compiler for C supports arguments -mavx512dq: YES 00:02:13.156 Compiler for C supports arguments -mavx512vl: YES 00:02:13.156 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:13.156 Compiler for C supports arguments -mavx2: YES 00:02:13.156 Compiler for C supports arguments -mavx: YES 00:02:13.156 Message: lib/net: Defining dependency "net" 00:02:13.156 Message: lib/meter: Defining dependency "meter" 00:02:13.156 Message: lib/ethdev: Defining dependency "ethdev" 00:02:13.156 Message: lib/pci: Defining dependency "pci" 00:02:13.156 Message: lib/cmdline: Defining dependency "cmdline" 00:02:13.156 Message: lib/metrics: Defining dependency "metrics" 00:02:13.156 Message: lib/hash: Defining dependency "hash" 00:02:13.156 Message: lib/timer: Defining dependency "timer" 00:02:13.156 Fetching value of define "__AVX2__" : (undefined) (cached) 00:02:13.156 Compiler for C supports arguments -mavx2: YES (cached) 00:02:13.156 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:13.156 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:13.156 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:13.156 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:13.156 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:13.156 Message: lib/acl: Defining dependency "acl" 00:02:13.156 Message: lib/bbdev: Defining dependency "bbdev" 00:02:13.156 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:13.156 Run-time dependency libelf found: YES 0.191 00:02:13.156 Message: lib/bpf: Defining dependency "bpf" 00:02:13.156 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:13.156 Message: lib/compressdev: Defining dependency "compressdev" 00:02:13.156 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:13.156 Message: lib/distributor: Defining dependency "distributor" 00:02:13.156 Message: lib/efd: Defining dependency "efd" 00:02:13.156 Message: lib/eventdev: Defining dependency "eventdev" 00:02:13.156 Message: lib/gpudev: Defining 
dependency "gpudev" 00:02:13.156 Message: lib/gro: Defining dependency "gro" 00:02:13.156 Message: lib/gso: Defining dependency "gso" 00:02:13.156 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:13.156 Message: lib/jobstats: Defining dependency "jobstats" 00:02:13.156 Message: lib/latencystats: Defining dependency "latencystats" 00:02:13.156 Message: lib/lpm: Defining dependency "lpm" 00:02:13.156 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:13.156 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:13.156 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:13.156 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:13.156 Message: lib/member: Defining dependency "member" 00:02:13.156 Message: lib/pcapng: Defining dependency "pcapng" 00:02:13.156 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:13.156 Message: lib/power: Defining dependency "power" 00:02:13.156 Message: lib/rawdev: Defining dependency "rawdev" 00:02:13.156 Message: lib/regexdev: Defining dependency "regexdev" 00:02:13.156 Message: lib/dmadev: Defining dependency "dmadev" 00:02:13.156 Message: lib/rib: Defining dependency "rib" 00:02:13.156 Message: lib/reorder: Defining dependency "reorder" 00:02:13.156 Message: lib/sched: Defining dependency "sched" 00:02:13.156 Message: lib/security: Defining dependency "security" 00:02:13.156 Message: lib/stack: Defining dependency "stack" 00:02:13.156 Has header "linux/userfaultfd.h" : YES 00:02:13.156 Message: lib/vhost: Defining dependency "vhost" 00:02:13.156 Message: lib/ipsec: Defining dependency "ipsec" 00:02:13.156 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:13.156 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:13.156 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:13.156 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:13.156 Message: lib/fib: Defining dependency "fib" 00:02:13.156 Message: lib/port: Defining dependency "port" 00:02:13.156 Message: lib/pdump: Defining dependency "pdump" 00:02:13.156 Message: lib/table: Defining dependency "table" 00:02:13.156 Message: lib/pipeline: Defining dependency "pipeline" 00:02:13.156 Message: lib/graph: Defining dependency "graph" 00:02:13.156 Message: lib/node: Defining dependency "node" 00:02:13.156 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:13.156 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:13.156 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:13.156 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:13.156 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:13.156 Compiler for C supports arguments -Wno-unused-value: YES 00:02:15.063 Compiler for C supports arguments -Wno-format: YES 00:02:15.063 Compiler for C supports arguments -Wno-format-security: YES 00:02:15.063 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:15.063 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:15.063 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:15.063 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:15.063 Fetching value of define "__AVX2__" : (undefined) (cached) 00:02:15.063 Compiler for C supports arguments -mavx2: YES (cached) 00:02:15.063 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:15.063 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:15.063 Compiler for C 
supports arguments -mavx512bw: YES (cached) 00:02:15.063 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:15.063 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:15.063 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:15.063 Configuring doxy-api.conf using configuration 00:02:15.063 Program sphinx-build found: NO 00:02:15.063 Configuring rte_build_config.h using configuration 00:02:15.063 Message: 00:02:15.063 ================= 00:02:15.063 Applications Enabled 00:02:15.063 ================= 00:02:15.063 00:02:15.063 apps: 00:02:15.063 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:15.063 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:15.063 test-security-perf, 00:02:15.063 00:02:15.063 Message: 00:02:15.063 ================= 00:02:15.063 Libraries Enabled 00:02:15.063 ================= 00:02:15.063 00:02:15.063 libs: 00:02:15.063 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:15.063 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:15.063 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:15.063 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:02:15.063 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:15.063 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:15.063 table, pipeline, graph, node, 00:02:15.063 00:02:15.063 Message: 00:02:15.063 =============== 00:02:15.063 Drivers Enabled 00:02:15.063 =============== 00:02:15.063 00:02:15.063 common: 00:02:15.063 00:02:15.063 bus: 00:02:15.063 pci, vdev, 00:02:15.063 mempool: 00:02:15.063 ring, 00:02:15.063 dma: 00:02:15.063 00:02:15.063 net: 00:02:15.063 i40e, 00:02:15.063 raw: 00:02:15.063 00:02:15.063 crypto: 00:02:15.063 00:02:15.063 compress: 00:02:15.063 00:02:15.063 regex: 00:02:15.063 00:02:15.063 vdpa: 00:02:15.063 00:02:15.063 event: 00:02:15.063 00:02:15.063 baseband: 00:02:15.063 00:02:15.063 gpu: 00:02:15.063 00:02:15.063 00:02:15.063 Message: 00:02:15.063 ================= 00:02:15.063 Content Skipped 00:02:15.063 ================= 00:02:15.063 00:02:15.063 apps: 00:02:15.063 00:02:15.063 libs: 00:02:15.063 kni: explicitly disabled via build config (deprecated lib) 00:02:15.063 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:15.063 00:02:15.063 drivers: 00:02:15.063 common/cpt: not in enabled drivers build config 00:02:15.063 common/dpaax: not in enabled drivers build config 00:02:15.063 common/iavf: not in enabled drivers build config 00:02:15.063 common/idpf: not in enabled drivers build config 00:02:15.063 common/mvep: not in enabled drivers build config 00:02:15.063 common/octeontx: not in enabled drivers build config 00:02:15.063 bus/auxiliary: not in enabled drivers build config 00:02:15.063 bus/dpaa: not in enabled drivers build config 00:02:15.063 bus/fslmc: not in enabled drivers build config 00:02:15.063 bus/ifpga: not in enabled drivers build config 00:02:15.063 bus/vmbus: not in enabled drivers build config 00:02:15.063 common/cnxk: not in enabled drivers build config 00:02:15.063 common/mlx5: not in enabled drivers build config 00:02:15.063 common/qat: not in enabled drivers build config 00:02:15.063 common/sfc_efx: not in enabled drivers build config 00:02:15.063 mempool/bucket: not in enabled drivers build config 00:02:15.063 mempool/cnxk: not in enabled drivers build config 00:02:15.063 mempool/dpaa: not in enabled 
drivers build config 00:02:15.063 mempool/dpaa2: not in enabled drivers build config 00:02:15.063 mempool/octeontx: not in enabled drivers build config 00:02:15.063 mempool/stack: not in enabled drivers build config 00:02:15.063 dma/cnxk: not in enabled drivers build config 00:02:15.063 dma/dpaa: not in enabled drivers build config 00:02:15.063 dma/dpaa2: not in enabled drivers build config 00:02:15.063 dma/hisilicon: not in enabled drivers build config 00:02:15.063 dma/idxd: not in enabled drivers build config 00:02:15.063 dma/ioat: not in enabled drivers build config 00:02:15.063 dma/skeleton: not in enabled drivers build config 00:02:15.063 net/af_packet: not in enabled drivers build config 00:02:15.063 net/af_xdp: not in enabled drivers build config 00:02:15.063 net/ark: not in enabled drivers build config 00:02:15.063 net/atlantic: not in enabled drivers build config 00:02:15.063 net/avp: not in enabled drivers build config 00:02:15.063 net/axgbe: not in enabled drivers build config 00:02:15.063 net/bnx2x: not in enabled drivers build config 00:02:15.063 net/bnxt: not in enabled drivers build config 00:02:15.063 net/bonding: not in enabled drivers build config 00:02:15.063 net/cnxk: not in enabled drivers build config 00:02:15.063 net/cxgbe: not in enabled drivers build config 00:02:15.063 net/dpaa: not in enabled drivers build config 00:02:15.063 net/dpaa2: not in enabled drivers build config 00:02:15.063 net/e1000: not in enabled drivers build config 00:02:15.063 net/ena: not in enabled drivers build config 00:02:15.063 net/enetc: not in enabled drivers build config 00:02:15.063 net/enetfec: not in enabled drivers build config 00:02:15.063 net/enic: not in enabled drivers build config 00:02:15.063 net/failsafe: not in enabled drivers build config 00:02:15.063 net/fm10k: not in enabled drivers build config 00:02:15.063 net/gve: not in enabled drivers build config 00:02:15.063 net/hinic: not in enabled drivers build config 00:02:15.063 net/hns3: not in enabled drivers build config 00:02:15.063 net/iavf: not in enabled drivers build config 00:02:15.063 net/ice: not in enabled drivers build config 00:02:15.063 net/idpf: not in enabled drivers build config 00:02:15.063 net/igc: not in enabled drivers build config 00:02:15.063 net/ionic: not in enabled drivers build config 00:02:15.063 net/ipn3ke: not in enabled drivers build config 00:02:15.063 net/ixgbe: not in enabled drivers build config 00:02:15.063 net/kni: not in enabled drivers build config 00:02:15.063 net/liquidio: not in enabled drivers build config 00:02:15.063 net/mana: not in enabled drivers build config 00:02:15.063 net/memif: not in enabled drivers build config 00:02:15.063 net/mlx4: not in enabled drivers build config 00:02:15.064 net/mlx5: not in enabled drivers build config 00:02:15.064 net/mvneta: not in enabled drivers build config 00:02:15.064 net/mvpp2: not in enabled drivers build config 00:02:15.064 net/netvsc: not in enabled drivers build config 00:02:15.064 net/nfb: not in enabled drivers build config 00:02:15.064 net/nfp: not in enabled drivers build config 00:02:15.064 net/ngbe: not in enabled drivers build config 00:02:15.064 net/null: not in enabled drivers build config 00:02:15.064 net/octeontx: not in enabled drivers build config 00:02:15.064 net/octeon_ep: not in enabled drivers build config 00:02:15.064 net/pcap: not in enabled drivers build config 00:02:15.064 net/pfe: not in enabled drivers build config 00:02:15.064 net/qede: not in enabled drivers build config 00:02:15.064 net/ring: not in enabled 
drivers build config
00:02:15.064 net/sfc: not in enabled drivers build config
00:02:15.064 net/softnic: not in enabled drivers build config
00:02:15.064 net/tap: not in enabled drivers build config
00:02:15.064 net/thunderx: not in enabled drivers build config
00:02:15.064 net/txgbe: not in enabled drivers build config
00:02:15.064 net/vdev_netvsc: not in enabled drivers build config
00:02:15.064 net/vhost: not in enabled drivers build config
00:02:15.064 net/virtio: not in enabled drivers build config
00:02:15.064 net/vmxnet3: not in enabled drivers build config
00:02:15.064 raw/cnxk_bphy: not in enabled drivers build config
00:02:15.064 raw/cnxk_gpio: not in enabled drivers build config
00:02:15.064 raw/dpaa2_cmdif: not in enabled drivers build config
00:02:15.064 raw/ifpga: not in enabled drivers build config
00:02:15.064 raw/ntb: not in enabled drivers build config
00:02:15.064 raw/skeleton: not in enabled drivers build config
00:02:15.064 crypto/armv8: not in enabled drivers build config
00:02:15.064 crypto/bcmfs: not in enabled drivers build config
00:02:15.064 crypto/caam_jr: not in enabled drivers build config
00:02:15.064 crypto/ccp: not in enabled drivers build config
00:02:15.064 crypto/cnxk: not in enabled drivers build config
00:02:15.064 crypto/dpaa_sec: not in enabled drivers build config
00:02:15.064 crypto/dpaa2_sec: not in enabled drivers build config
00:02:15.064 crypto/ipsec_mb: not in enabled drivers build config
00:02:15.064 crypto/mlx5: not in enabled drivers build config
00:02:15.064 crypto/mvsam: not in enabled drivers build config
00:02:15.064 crypto/nitrox: not in enabled drivers build config
00:02:15.064 crypto/null: not in enabled drivers build config
00:02:15.064 crypto/octeontx: not in enabled drivers build config
00:02:15.064 crypto/openssl: not in enabled drivers build config
00:02:15.064 crypto/scheduler: not in enabled drivers build config
00:02:15.064 crypto/uadk: not in enabled drivers build config
00:02:15.064 crypto/virtio: not in enabled drivers build config
00:02:15.064 compress/isal: not in enabled drivers build config
00:02:15.064 compress/mlx5: not in enabled drivers build config
00:02:15.064 compress/octeontx: not in enabled drivers build config
00:02:15.064 compress/zlib: not in enabled drivers build config
00:02:15.064 regex/mlx5: not in enabled drivers build config
00:02:15.064 regex/cn9k: not in enabled drivers build config
00:02:15.064 vdpa/ifc: not in enabled drivers build config
00:02:15.064 vdpa/mlx5: not in enabled drivers build config
00:02:15.064 vdpa/sfc: not in enabled drivers build config
00:02:15.064 event/cnxk: not in enabled drivers build config
00:02:15.064 event/dlb2: not in enabled drivers build config
00:02:15.064 event/dpaa: not in enabled drivers build config
00:02:15.064 event/dpaa2: not in enabled drivers build config
00:02:15.064 event/dsw: not in enabled drivers build config
00:02:15.064 event/opdl: not in enabled drivers build config
00:02:15.064 event/skeleton: not in enabled drivers build config
00:02:15.064 event/sw: not in enabled drivers build config
00:02:15.064 event/octeontx: not in enabled drivers build config
00:02:15.064 baseband/acc: not in enabled drivers build config
00:02:15.064 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:02:15.064 baseband/fpga_lte_fec: not in enabled drivers build config
00:02:15.064 baseband/la12xx: not in enabled drivers build config
00:02:15.064 baseband/null: not in enabled drivers build config
00:02:15.064 baseband/turbo_sw: not in enabled drivers build config
00:02:15.064 gpu/cuda: not in enabled drivers build config
00:02:15.064
00:02:15.064
00:02:15.064 Build targets in project: 316
00:02:15.064
00:02:15.064 DPDK 22.11.4
00:02:15.064
00:02:15.064 User defined options
00:02:15.064 libdir : lib
00:02:15.064 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:15.064 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:02:15.064 c_link_args :
00:02:15.064 enable_docs : false
00:02:15.064 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:02:15.064 enable_kmods : false
00:02:15.064 machine : native
00:02:15.064 tests : false
00:02:15.064
00:02:15.064 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:15.064 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
00:02:15.064 01:13:00 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48
00:02:15.064 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp'
00:02:15.064 [1/745] Generating lib/rte_kvargs_def with a custom command
00:02:15.064 [2/745] Generating lib/rte_kvargs_mingw with a custom command
00:02:15.064 [3/745] Generating lib/rte_telemetry_def with a custom command
00:02:15.064 [4/745] Generating lib/rte_telemetry_mingw with a custom command
00:02:15.064 [5/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:15.064 [6/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:15.064 [7/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:15.064 [8/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:15.064 [9/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:15.064 [10/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:15.064 [11/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:15.064 [12/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:15.064 [13/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:15.064 [14/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:15.064 [15/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:15.064 [16/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:15.325 [17/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:15.325 [18/745] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:15.325 [19/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:15.325 [20/745] Linking static target lib/librte_kvargs.a
00:02:15.325 [21/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:15.325 [22/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:15.325 [23/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:15.325 [24/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:15.325 [25/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:15.325 [26/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:15.325 [27/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:15.325 [28/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
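For reference, the "User defined options" summary above corresponds to a meson setup invocation run from the DPDK 22.11 source tree. The following is a minimal sketch only: the actual configure step is driven by the SPDK autobuild wrapper (common/autobuild_common.sh) and its exact command line does not appear in this log, so the flags below are an assumption reconstructed from the option summary, not the command that was run.

# Hypothetical reconstruction of the configure/build step from the option
# summary above; the real invocation is performed by the CI wrapper script
# and may differ. Run from the DPDK 22.11 source directory.
meson setup build-tmp \
    --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build \
    --libdir=lib \
    -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
    -Denable_docs=false \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
    -Denable_kmods=false \
    -Dmachine=native \
    -Dtests=false
ninja -C build-tmp -j48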
00:02:15.325 [29/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:15.325 [30/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:15.325 [31/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:15.325 [32/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:15.325 [33/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:15.325 [34/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:15.325 [35/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:15.325 [36/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:15.325 [37/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:15.325 [38/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:15.325 [39/745] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:15.325 [40/745] Generating lib/rte_eal_def with a custom command 00:02:15.325 [41/745] Generating lib/rte_eal_mingw with a custom command 00:02:15.325 [42/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:15.325 [43/745] Generating lib/rte_ring_def with a custom command 00:02:15.325 [44/745] Generating lib/rte_ring_mingw with a custom command 00:02:15.325 [45/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:15.325 [46/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:15.325 [47/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:15.325 [48/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:15.325 [49/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:15.325 [50/745] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:15.325 [51/745] Generating lib/rte_rcu_mingw with a custom command 00:02:15.325 [52/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:15.325 [53/745] Generating lib/rte_rcu_def with a custom command 00:02:15.325 [54/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:15.325 [55/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:15.325 [56/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:15.325 [57/745] Generating lib/rte_mempool_def with a custom command 00:02:15.325 [58/745] Generating lib/rte_mempool_mingw with a custom command 00:02:15.325 [59/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:15.325 [60/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:15.325 [61/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:15.325 [62/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:15.325 [63/745] Generating lib/rte_mbuf_def with a custom command 00:02:15.325 [64/745] Generating lib/rte_mbuf_mingw with a custom command 00:02:15.325 [65/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:15.590 [66/745] Generating lib/rte_net_def with a custom command 00:02:15.590 [67/745] Generating lib/rte_net_mingw with a custom command 00:02:15.590 [68/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:15.590 [69/745] Generating lib/rte_meter_def with a custom command 00:02:15.590 [70/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 
00:02:15.590 [71/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:15.590 [72/745] Generating lib/rte_meter_mingw with a custom command 00:02:15.590 [73/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:15.590 [74/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:15.590 [75/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:15.590 [76/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:15.590 [77/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:15.590 [78/745] Generating lib/rte_ethdev_def with a custom command 00:02:15.590 [79/745] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.590 [80/745] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:15.590 [81/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:15.590 [82/745] Linking static target lib/librte_ring.a 00:02:15.590 [83/745] Generating lib/rte_ethdev_mingw with a custom command 00:02:15.590 [84/745] Linking target lib/librte_kvargs.so.23.0 00:02:15.590 [85/745] Generating lib/rte_pci_def with a custom command 00:02:15.853 [86/745] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:15.853 [87/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:15.853 [88/745] Linking static target lib/librte_meter.a 00:02:15.853 [89/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:15.853 [90/745] Generating lib/rte_pci_mingw with a custom command 00:02:15.853 [91/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:15.853 [92/745] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:15.853 [93/745] Linking static target lib/librte_pci.a 00:02:15.853 [94/745] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:15.853 [95/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:15.853 [96/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:15.853 [97/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:15.853 [98/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:16.115 [99/745] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.115 [100/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:16.115 [101/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:16.115 [102/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:16.115 [103/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:16.115 [104/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:16.115 [105/745] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.115 [106/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:16.115 [107/745] Generating lib/rte_cmdline_def with a custom command 00:02:16.115 [108/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:16.115 [109/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:16.115 [110/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:16.115 [111/745] Linking static target lib/librte_telemetry.a 00:02:16.115 [112/745] Generating 
lib/rte_cmdline_mingw with a custom command 00:02:16.115 [113/745] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.115 [114/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:16.115 [115/745] Generating lib/rte_metrics_def with a custom command 00:02:16.115 [116/745] Generating lib/rte_metrics_mingw with a custom command 00:02:16.115 [117/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:16.373 [118/745] Generating lib/rte_hash_def with a custom command 00:02:16.373 [119/745] Generating lib/rte_hash_mingw with a custom command 00:02:16.373 [120/745] Generating lib/rte_timer_def with a custom command 00:02:16.373 [121/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:16.373 [122/745] Generating lib/rte_timer_mingw with a custom command 00:02:16.373 [123/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:16.373 [124/745] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:16.374 [125/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:16.636 [126/745] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:16.636 [127/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:16.636 [128/745] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:16.636 [129/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:16.636 [130/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:16.636 [131/745] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:16.636 [132/745] Generating lib/rte_acl_mingw with a custom command 00:02:16.636 [133/745] Generating lib/rte_acl_def with a custom command 00:02:16.636 [134/745] Generating lib/rte_bbdev_def with a custom command 00:02:16.636 [135/745] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:16.636 [136/745] Generating lib/rte_bbdev_mingw with a custom command 00:02:16.636 [137/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:16.636 [138/745] Generating lib/rte_bitratestats_def with a custom command 00:02:16.636 [139/745] Generating lib/rte_bitratestats_mingw with a custom command 00:02:16.636 [140/745] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.636 [141/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:16.636 [142/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:16.636 [143/745] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:16.894 [144/745] Linking target lib/librte_telemetry.so.23.0 00:02:16.894 [145/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:16.894 [146/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:16.894 [147/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:16.894 [148/745] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:16.895 [149/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:16.895 [150/745] Generating lib/rte_bpf_def with a custom command 00:02:16.895 [151/745] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:16.895 [152/745] Generating lib/rte_bpf_mingw with a custom command 00:02:16.895 [153/745] Generating lib/rte_cfgfile_def with a custom command 00:02:16.895 [154/745] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:16.895 [155/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:16.895 [156/745] Generating lib/rte_cfgfile_mingw with a custom command 00:02:16.895 [157/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:16.895 [158/745] Generating lib/rte_compressdev_def with a custom command 00:02:16.895 [159/745] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:16.895 [160/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:16.895 [161/745] Generating lib/rte_compressdev_mingw with a custom command 00:02:17.157 [162/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:17.157 [163/745] Generating lib/rte_cryptodev_def with a custom command 00:02:17.157 [164/745] Generating lib/rte_cryptodev_mingw with a custom command 00:02:17.157 [165/745] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:17.157 [166/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:17.157 [167/745] Linking static target lib/librte_rcu.a 00:02:17.157 [168/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:17.157 [169/745] Generating lib/rte_distributor_def with a custom command 00:02:17.157 [170/745] Generating lib/rte_distributor_mingw with a custom command 00:02:17.157 [171/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:17.157 [172/745] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:17.157 [173/745] Linking static target lib/librte_timer.a 00:02:17.157 [174/745] Linking static target lib/librte_cmdline.a 00:02:17.157 [175/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:17.157 [176/745] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:17.157 [177/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:17.157 [178/745] Generating lib/rte_efd_def with a custom command 00:02:17.157 [179/745] Linking static target lib/librte_net.a 00:02:17.157 [180/745] Generating lib/rte_efd_mingw with a custom command 00:02:17.157 [181/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:17.417 [182/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:17.417 [183/745] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:17.417 [184/745] Linking static target lib/librte_metrics.a 00:02:17.417 [185/745] Linking static target lib/librte_cfgfile.a 00:02:17.417 [186/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:17.417 [187/745] Linking static target lib/librte_mempool.a 00:02:17.684 [188/745] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.684 [189/745] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.684 [190/745] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:17.684 [191/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:17.684 [192/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:17.684 [193/745] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.684 [194/745] Linking static target lib/librte_eal.a 00:02:17.684 [195/745] Generating lib/rte_eventdev_def with a custom command 00:02:17.684 [196/745] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:17.946 [197/745] Generating 
lib/rte_eventdev_mingw with a custom command 00:02:17.946 [198/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:17.946 [199/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:17.946 [200/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:17.946 [201/745] Generating lib/rte_gpudev_def with a custom command 00:02:17.946 [202/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:17.946 [203/745] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.946 [204/745] Generating lib/rte_gpudev_mingw with a custom command 00:02:17.946 [205/745] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:17.946 [206/745] Linking static target lib/librte_bitratestats.a 00:02:17.946 [207/745] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:17.946 [208/745] Generating lib/rte_gro_def with a custom command 00:02:17.946 [209/745] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.946 [210/745] Generating lib/rte_gro_mingw with a custom command 00:02:18.212 [211/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:18.212 [212/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:18.212 [213/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:18.212 [214/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:18.212 [215/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:18.212 [216/745] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.212 [217/745] Generating lib/rte_gso_def with a custom command 00:02:18.479 [218/745] Generating lib/rte_gso_mingw with a custom command 00:02:18.479 [219/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:18.479 [220/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:18.479 [221/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:18.479 [222/745] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:18.479 [223/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:18.479 [224/745] Linking static target lib/librte_bbdev.a 00:02:18.479 [225/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:18.479 [226/745] Generating lib/rte_ip_frag_def with a custom command 00:02:18.479 [227/745] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.739 [228/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:18.739 [229/745] Generating lib/rte_ip_frag_mingw with a custom command 00:02:18.739 [230/745] Generating lib/rte_jobstats_def with a custom command 00:02:18.739 [231/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:18.739 [232/745] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.739 [233/745] Generating lib/rte_jobstats_mingw with a custom command 00:02:18.739 [234/745] Generating lib/rte_latencystats_def with a custom command 00:02:18.739 [235/745] Generating lib/rte_latencystats_mingw with a custom command 00:02:18.739 [236/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:18.739 [237/745] Linking static target lib/librte_compressdev.a 
00:02:18.739 [238/745] Generating lib/rte_lpm_mingw with a custom command 00:02:18.739 [239/745] Generating lib/rte_lpm_def with a custom command 00:02:18.739 [240/745] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:18.739 [241/745] Linking static target lib/librte_jobstats.a 00:02:18.739 [242/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:19.001 [243/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:19.001 [244/745] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:19.001 [245/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:19.001 [246/745] Linking static target lib/librte_distributor.a 00:02:19.001 [247/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:19.267 [248/745] Generating lib/rte_member_def with a custom command 00:02:19.267 [249/745] Generating lib/rte_member_mingw with a custom command 00:02:19.267 [250/745] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:19.267 [251/745] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:19.267 [252/745] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.267 [253/745] Generating lib/rte_pcapng_mingw with a custom command 00:02:19.267 [254/745] Generating lib/rte_pcapng_def with a custom command 00:02:19.528 [255/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:19.528 [256/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:19.528 [257/745] Linking static target lib/librte_bpf.a 00:02:19.528 [258/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:19.528 [259/745] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.528 [260/745] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:19.528 [261/745] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.528 [262/745] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:19.528 [263/745] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:19.528 [264/745] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:19.528 [265/745] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:19.528 [266/745] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:19.528 [267/745] Generating lib/rte_power_def with a custom command 00:02:19.528 [268/745] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:19.528 [269/745] Generating lib/rte_power_mingw with a custom command 00:02:19.528 [270/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:19.528 [271/745] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:19.528 [272/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:19.528 [273/745] Linking static target lib/librte_gpudev.a 00:02:19.817 [274/745] Linking static target lib/librte_gro.a 00:02:19.817 [275/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:19.817 [276/745] Generating lib/rte_rawdev_def with a custom command 00:02:19.817 [277/745] Generating lib/rte_rawdev_mingw with a custom command 00:02:19.817 [278/745] Generating lib/rte_regexdev_def with a custom command 00:02:19.817 [279/745] Generating lib/rte_regexdev_mingw with a custom command 00:02:19.817 [280/745] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:19.817 [281/745] Generating 
lib/rte_dmadev_def with a custom command 00:02:19.817 [282/745] Generating lib/rte_dmadev_mingw with a custom command 00:02:19.817 [283/745] Generating lib/rte_rib_def with a custom command 00:02:19.817 [284/745] Generating lib/rte_rib_mingw with a custom command 00:02:19.817 [285/745] Generating lib/rte_reorder_def with a custom command 00:02:20.099 [286/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:20.099 [287/745] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:20.099 [288/745] Generating lib/rte_reorder_mingw with a custom command 00:02:20.099 [289/745] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.099 [290/745] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.099 [291/745] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:20.099 [292/745] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:20.099 [293/745] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:20.099 [294/745] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:20.099 [295/745] Generating lib/rte_sched_def with a custom command 00:02:20.099 [296/745] Generating lib/rte_sched_mingw with a custom command 00:02:20.099 [297/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:20.099 [298/745] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.099 [299/745] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:20.099 [300/745] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:20.099 [301/745] Generating lib/rte_security_def with a custom command 00:02:20.099 [302/745] Generating lib/rte_security_mingw with a custom command 00:02:20.099 [303/745] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:20.099 [304/745] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:20.099 [305/745] Linking static target lib/librte_latencystats.a 00:02:20.381 [306/745] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:20.381 [307/745] Generating lib/rte_stack_mingw with a custom command 00:02:20.381 [308/745] Generating lib/rte_stack_def with a custom command 00:02:20.381 [309/745] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:20.381 [310/745] Linking static target lib/librte_rawdev.a 00:02:20.381 [311/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:20.381 [312/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:20.381 [313/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:20.381 [314/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:20.381 [315/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:20.381 [316/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:20.381 [317/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:20.381 [318/745] Linking static target lib/librte_stack.a 00:02:20.381 [319/745] Generating lib/rte_vhost_def with a custom command 00:02:20.381 [320/745] Generating lib/rte_vhost_mingw with a custom command 00:02:20.381 [321/745] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:20.381 [322/745] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:20.381 [323/745] Linking 
static target lib/librte_dmadev.a 00:02:20.381 [324/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:20.666 [325/745] Linking static target lib/librte_ip_frag.a 00:02:20.666 [326/745] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.666 [327/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:20.666 [328/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:20.666 [329/745] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.666 [330/745] Generating lib/rte_ipsec_mingw with a custom command 00:02:20.666 [331/745] Generating lib/rte_ipsec_def with a custom command 00:02:20.935 [332/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:20.935 [333/745] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:20.935 [334/745] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.935 [335/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:20.935 [336/745] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.935 [337/745] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.196 [338/745] Generating lib/rte_fib_def with a custom command 00:02:21.196 [339/745] Generating lib/rte_fib_mingw with a custom command 00:02:21.196 [340/745] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:21.196 [341/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:21.196 [342/745] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:21.196 [343/745] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:21.196 [344/745] Linking static target lib/librte_regexdev.a 00:02:21.196 [345/745] Linking static target lib/librte_gso.a 00:02:21.455 [346/745] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.455 [347/745] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:21.455 [348/745] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:21.455 [349/745] Linking static target lib/librte_efd.a 00:02:21.455 [350/745] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.719 [351/745] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:21.719 [352/745] Linking static target lib/librte_pcapng.a 00:02:21.719 [353/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:21.719 [354/745] Linking static target lib/librte_lpm.a 00:02:21.719 [355/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:21.719 [356/745] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:21.719 [357/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:21.719 [358/745] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:21.719 [359/745] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:21.719 [360/745] Linking static target lib/librte_reorder.a 00:02:21.719 [361/745] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:21.981 [362/745] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.981 [363/745] Generating lib/rte_port_def with a custom command 00:02:21.981 [364/745] Generating lib/rte_port_mingw with a custom command 
00:02:21.981 [365/745] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:21.981 [366/745] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:21.981 [367/745] Linking static target lib/acl/libavx2_tmp.a 00:02:21.981 [368/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:21.981 [369/745] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:21.981 [370/745] Generating lib/rte_pdump_def with a custom command 00:02:21.981 [371/745] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:21.981 [372/745] Generating lib/rte_pdump_mingw with a custom command 00:02:21.981 [373/745] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.981 [374/745] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:21.981 [375/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:21.981 [376/745] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:21.981 [377/745] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:22.247 [378/745] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:22.247 [379/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:22.247 [380/745] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:22.248 [381/745] Linking static target lib/librte_security.a 00:02:22.248 [382/745] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.248 [383/745] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.248 [384/745] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:22.248 [385/745] Linking static target lib/librte_power.a 00:02:22.248 [386/745] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:22.248 [387/745] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:22.248 [388/745] Linking static target lib/librte_hash.a 00:02:22.248 [389/745] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.511 [390/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:22.511 [391/745] Linking static target lib/librte_rib.a 00:02:22.511 [392/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:22.511 [393/745] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:22.511 [394/745] Linking static target lib/acl/libavx512_tmp.a 00:02:22.772 [395/745] Linking static target lib/librte_acl.a 00:02:22.772 [396/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:22.772 [397/745] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:22.772 [398/745] Generating lib/rte_table_def with a custom command 00:02:22.772 [399/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:22.772 [400/745] Generating lib/rte_table_mingw with a custom command 00:02:23.038 [401/745] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.038 [402/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:23.038 [403/745] Linking static target lib/librte_ethdev.a 00:02:23.038 [404/745] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.298 [405/745] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.298 [406/745] Compiling C object 
lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:23.298 [407/745] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:23.298 [408/745] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:23.298 [409/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:23.298 [410/745] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:23.298 [411/745] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:23.298 [412/745] Generating lib/rte_pipeline_def with a custom command 00:02:23.298 [413/745] Linking static target lib/librte_mbuf.a 00:02:23.298 [414/745] Generating lib/rte_pipeline_mingw with a custom command 00:02:23.298 [415/745] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:23.561 [416/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:23.561 [417/745] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:23.561 [418/745] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.561 [419/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:23.561 [420/745] Linking static target lib/librte_fib.a 00:02:23.561 [421/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:23.561 [422/745] Generating lib/rte_graph_def with a custom command 00:02:23.561 [423/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:23.561 [424/745] Generating lib/rte_graph_mingw with a custom command 00:02:23.561 [425/745] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:23.561 [426/745] Linking static target lib/librte_member.a 00:02:23.825 [427/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:23.825 [428/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:23.825 [429/745] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:23.825 [430/745] Linking static target lib/librte_eventdev.a 00:02:23.825 [431/745] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.825 [432/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:23.825 [433/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:23.825 [434/745] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:23.825 [435/745] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:23.825 [436/745] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:24.091 [437/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:24.091 [438/745] Generating lib/rte_node_def with a custom command 00:02:24.091 [439/745] Generating lib/rte_node_mingw with a custom command 00:02:24.091 [440/745] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.091 [441/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:24.091 [442/745] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:24.091 [443/745] Linking static target lib/librte_sched.a 00:02:24.091 [444/745] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.349 [445/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:24.349 [446/745] Generating drivers/rte_bus_pci_def with a custom command 00:02:24.349 [447/745] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.349 
[448/745] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:24.349 [449/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:24.350 [450/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:24.350 [451/745] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:24.350 [452/745] Generating drivers/rte_bus_vdev_def with a custom command 00:02:24.350 [453/745] Generating drivers/rte_mempool_ring_def with a custom command 00:02:24.350 [454/745] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:24.350 [455/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:24.350 [456/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:24.350 [457/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:24.350 [458/745] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:24.350 [459/745] Linking static target lib/librte_cryptodev.a 00:02:24.350 [460/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:24.608 [461/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:24.608 [462/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:24.608 [463/745] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:24.608 [464/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:24.608 [465/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:24.608 [466/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:24.608 [467/745] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:24.608 [468/745] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:24.608 [469/745] Linking static target lib/librte_pdump.a 00:02:24.608 [470/745] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:24.869 [471/745] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:24.869 [472/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:24.869 [473/745] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:24.869 [474/745] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:24.869 [475/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:24.869 [476/745] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.869 [477/745] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:24.869 [478/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:24.869 [479/745] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:24.869 [480/745] Generating drivers/rte_net_i40e_def with a custom command 00:02:25.130 [481/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:25.131 [482/745] Generating drivers/rte_net_i40e_mingw with a custom command 00:02:25.131 [483/745] Linking static target lib/librte_table.a 00:02:25.131 [484/745] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:25.131 [485/745] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:25.131 [486/745] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.131 [487/745] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:25.131 [488/745] Linking static target 
drivers/librte_bus_vdev.a 00:02:25.131 [489/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:25.131 [490/745] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:25.131 [491/745] Linking static target lib/librte_ipsec.a 00:02:25.393 [492/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:25.393 [493/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:25.393 [494/745] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:25.655 [495/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:25.655 [496/745] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.655 [497/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:25.655 [498/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:25.655 [499/745] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:25.916 [500/745] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.916 [501/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:25.916 [502/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:25.916 [503/745] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:25.916 [504/745] Linking static target lib/librte_graph.a 00:02:25.916 [505/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:25.916 [506/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:25.916 [507/745] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:25.916 [508/745] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:25.916 [509/745] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:25.916 [510/745] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:25.916 [511/745] Linking static target drivers/librte_bus_pci.a 00:02:25.916 [512/745] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:26.180 [513/745] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.180 [514/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:26.447 [515/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:26.709 [516/745] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.709 [517/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:26.709 [518/745] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.709 [519/745] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:26.709 [520/745] Linking static target lib/librte_port.a 00:02:26.709 [521/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:26.972 [522/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:26.972 [523/745] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:26.972 [524/745] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:26.972 [525/745] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:26.972 [526/745] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:27.233 
[527/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:27.233 [528/745] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.233 [529/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:27.496 [530/745] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:27.496 [531/745] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:27.496 [532/745] Linking static target drivers/librte_mempool_ring.a 00:02:27.496 [533/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:27.496 [534/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:27.496 [535/745] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:27.496 [536/745] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:27.496 [537/745] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:27.758 [538/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:27.758 [539/745] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.758 [540/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:28.017 [541/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:28.017 [542/745] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.017 [543/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:28.017 [544/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:28.277 [545/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:28.277 [546/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:28.542 [547/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:28.542 [548/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:28.542 [549/745] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:28.542 [550/745] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:28.542 [551/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:28.804 [552/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:28.804 [553/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:28.804 [554/745] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:28.804 [555/745] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:29.062 [556/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:29.062 [557/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:29.062 [558/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:29.322 [559/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:29.322 [560/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:29.589 [561/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:29.589 [562/745] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:29.589 [563/745] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:29.589 [564/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:29.589 [565/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:29.847 [566/745] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:29.847 [567/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:29.847 [568/745] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:29.847 [569/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:29.847 [570/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:30.108 [571/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:30.108 [572/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:30.108 [573/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:30.370 [574/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:30.370 [575/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:30.370 [576/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:30.370 [577/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:30.370 [578/745] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.370 [579/745] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:30.370 [580/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:30.370 [581/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:30.628 [582/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:30.628 [583/745] Linking target lib/librte_eal.so.23.0 00:02:30.628 [584/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:30.628 [585/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:30.628 [586/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:30.892 [587/745] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:30.892 [588/745] Linking target lib/librte_ring.so.23.0 00:02:30.892 [589/745] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.150 [590/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:31.150 [591/745] Linking target lib/librte_meter.so.23.0 00:02:31.150 [592/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:31.150 [593/745] Linking target lib/librte_pci.so.23.0 00:02:31.150 [594/745] Linking target lib/librte_timer.so.23.0 00:02:31.150 [595/745] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:31.150 [596/745] Linking target lib/librte_rcu.so.23.0 00:02:31.411 [597/745] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:31.411 [598/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:31.411 [599/745] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:31.411 [600/745] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 
00:02:31.411 [601/745] Linking target lib/librte_mempool.so.23.0 00:02:31.411 [602/745] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:31.411 [603/745] Linking target lib/librte_jobstats.so.23.0 00:02:31.411 [604/745] Linking target lib/librte_cfgfile.so.23.0 00:02:31.411 [605/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:31.411 [606/745] Linking target lib/librte_acl.so.23.0 00:02:31.411 [607/745] Linking target lib/librte_rawdev.so.23.0 00:02:31.411 [608/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:31.411 [609/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:31.411 [610/745] Linking target lib/librte_stack.so.23.0 00:02:31.411 [611/745] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:31.411 [612/745] Linking target lib/librte_dmadev.so.23.0 00:02:31.411 [613/745] Linking target lib/librte_graph.so.23.0 00:02:31.670 [614/745] Linking target drivers/librte_bus_pci.so.23.0 00:02:31.670 [615/745] Linking target drivers/librte_bus_vdev.so.23.0 00:02:31.670 [616/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:31.670 [617/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:31.670 [618/745] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:31.670 [619/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:31.670 [620/745] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:31.670 [621/745] Linking target lib/librte_rib.so.23.0 00:02:31.670 [622/745] Linking target lib/librte_mbuf.so.23.0 00:02:31.670 [623/745] Linking target drivers/librte_mempool_ring.so.23.0 00:02:31.670 [624/745] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:31.670 [625/745] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:31.670 [626/745] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:31.670 [627/745] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:31.929 [628/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:31.929 [629/745] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:31.929 [630/745] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:31.929 [631/745] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:31.929 [632/745] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:31.929 [633/745] Linking target lib/librte_fib.so.23.0 00:02:31.929 [634/745] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:31.929 [635/745] Linking target lib/librte_distributor.so.23.0 00:02:31.929 [636/745] Linking target lib/librte_reorder.so.23.0 00:02:31.929 [637/745] Linking target lib/librte_compressdev.so.23.0 00:02:31.929 [638/745] Linking target lib/librte_bbdev.so.23.0 00:02:31.929 [639/745] Linking target lib/librte_net.so.23.0 00:02:31.929 [640/745] Linking target lib/librte_gpudev.so.23.0 00:02:31.929 [641/745] Linking target lib/librte_regexdev.so.23.0 00:02:31.929 [642/745] Linking target lib/librte_sched.so.23.0 00:02:31.929 [643/745] Linking target lib/librte_cryptodev.so.23.0 00:02:32.187 [644/745] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:32.187 
[645/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:32.187 [646/745] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:32.187 [647/745] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:32.187 [648/745] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:32.187 [649/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:32.187 [650/745] Linking target lib/librte_hash.so.23.0 00:02:32.187 [651/745] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:32.187 [652/745] Linking target lib/librte_cmdline.so.23.0 00:02:32.187 [653/745] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:32.187 [654/745] Linking target lib/librte_ethdev.so.23.0 00:02:32.187 [655/745] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:32.187 [656/745] Linking target lib/librte_security.so.23.0 00:02:32.187 [657/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:32.446 [658/745] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:32.446 [659/745] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:32.446 [660/745] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:32.446 [661/745] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:32.446 [662/745] Linking target lib/librte_efd.so.23.0 00:02:32.446 [663/745] Linking target lib/librte_member.so.23.0 00:02:32.446 [664/745] Linking target lib/librte_lpm.so.23.0 00:02:32.446 [665/745] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:32.446 [666/745] Linking target lib/librte_metrics.so.23.0 00:02:32.446 [667/745] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:32.446 [668/745] Linking target lib/librte_gro.so.23.0 00:02:32.446 [669/745] Linking target lib/librte_power.so.23.0 00:02:32.446 [670/745] Linking target lib/librte_pcapng.so.23.0 00:02:32.446 [671/745] Linking target lib/librte_ip_frag.so.23.0 00:02:32.446 [672/745] Linking target lib/librte_gso.so.23.0 00:02:32.446 [673/745] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:32.446 [674/745] Linking target lib/librte_bpf.so.23.0 00:02:32.446 [675/745] Linking target lib/librte_eventdev.so.23.0 00:02:32.446 [676/745] Linking target lib/librte_ipsec.so.23.0 00:02:32.446 [677/745] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:32.446 [678/745] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:32.704 [679/745] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:32.704 [680/745] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:32.704 [681/745] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:32.704 [682/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:32.704 [683/745] Linking target lib/librte_latencystats.so.23.0 00:02:32.704 [684/745] Linking target lib/librte_bitratestats.so.23.0 00:02:32.704 [685/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:32.704 [686/745] Linking target lib/librte_pdump.so.23.0 00:02:32.704 [687/745] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:32.704 [688/745] Linking target 
lib/librte_port.so.23.0 00:02:32.704 [689/745] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:32.961 [690/745] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:32.961 [691/745] Linking target lib/librte_table.so.23.0 00:02:32.961 [692/745] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:32.961 [693/745] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:32.961 [694/745] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:33.219 [695/745] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:33.219 [696/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:33.219 [697/745] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:33.477 [698/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:33.734 [699/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:33.734 [700/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:33.992 [701/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:33.992 [702/745] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:33.992 [703/745] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:34.250 [704/745] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:34.250 [705/745] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:34.250 [706/745] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:34.250 [707/745] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:34.507 [708/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:34.507 [709/745] Linking static target drivers/librte_net_i40e.a 00:02:34.765 [710/745] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:35.023 [711/745] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.023 [712/745] Linking target drivers/librte_net_i40e.so.23.0 00:02:35.281 [713/745] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:35.539 [714/745] Linking static target lib/librte_node.a 00:02:35.539 [715/745] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.796 [716/745] Linking target lib/librte_node.so.23.0 00:02:36.054 [717/745] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:36.620 [718/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:37.553 [719/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:45.664 [720/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:17.742 [721/745] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:17.742 [722/745] Linking static target lib/librte_vhost.a 00:03:17.742 [723/745] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.742 [724/745] Linking target lib/librte_vhost.so.23.0 00:03:27.745 [725/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:27.745 [726/745] Linking static target lib/librte_pipeline.a 00:03:28.028 [727/745] Linking target app/dpdk-dumpcap 00:03:28.028 [728/745] Linking target app/dpdk-test-fib 
00:03:28.028 [729/745] Linking target app/dpdk-test-cmdline 00:03:28.028 [730/745] Linking target app/dpdk-test-acl 00:03:28.028 [731/745] Linking target app/dpdk-test-sad 00:03:28.028 [732/745] Linking target app/dpdk-test-regex 00:03:28.028 [733/745] Linking target app/dpdk-test-gpudev 00:03:28.028 [734/745] Linking target app/dpdk-test-pipeline 00:03:28.028 [735/745] Linking target app/dpdk-pdump 00:03:28.028 [736/745] Linking target app/dpdk-test-security-perf 00:03:28.028 [737/745] Linking target app/dpdk-proc-info 00:03:28.028 [738/745] Linking target app/dpdk-test-flow-perf 00:03:28.028 [739/745] Linking target app/dpdk-test-eventdev 00:03:28.028 [740/745] Linking target app/dpdk-test-compress-perf 00:03:28.028 [741/745] Linking target app/dpdk-test-bbdev 00:03:28.028 [742/745] Linking target app/dpdk-test-crypto-perf 00:03:28.028 [743/745] Linking target app/dpdk-testpmd 00:03:29.928 [744/745] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.928 [745/745] Linking target lib/librte_pipeline.so.23.0 00:03:29.928 01:14:15 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:03:29.928 01:14:15 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:29.928 01:14:15 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:03:29.928 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:03:30.186 [0/1] Installing files. 00:03:30.448 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:03:30.448 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.448 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.448 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.448 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.448 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.448 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.448 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.448 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.448 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.448 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.448 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.448 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.448 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.448 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.448 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.448 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.448 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.449 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:30.449 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:30.449 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:30.449 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:30.449 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.450 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.450 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:30.450 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:30.450 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:30.451 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:30.451 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.451 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.451 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:30.452 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:30.453 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.453 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:30.453 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:30.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:30.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:30.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:30.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:30.454 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:30.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:30.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:30.454 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:30.454 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_metrics.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_ip_frag.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_ipsec.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:30.454 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.024 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.024 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.024 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.024 Installing lib/librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.024 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.024 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.024 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.024 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.024 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.024 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:31.024 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.024 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:31.024 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.024 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:31.024 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.024 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:31.024 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:31.024 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:31.024 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:31.024 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:31.024 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:31.024 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:31.024 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:31.024 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:31.024 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 
00:03:31.024 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:31.024 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:31.024 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:31.024 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:31.024 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:31.024 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:31.024 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:31.024 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:31.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:31.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:31.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:31.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:31.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:31.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:31.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:31.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:31.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:31.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:31.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:31.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:31.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.024 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.025 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.026 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.027 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.028 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:31.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:31.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:31.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:31.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:31.028 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:31.028 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:03:31.028 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:03:31.028 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:03:31.028 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:03:31.028 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:03:31.028 Installing symlink pointing to librte_eal.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:03:31.028 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:03:31.028 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:03:31.028 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:03:31.028 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:03:31.028 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:03:31.028 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:03:31.028 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:03:31.028 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:03:31.028 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.23 00:03:31.028 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:03:31.028 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:03:31.028 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:03:31.028 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:03:31.028 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:03:31.028 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:03:31.028 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:03:31.028 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:03:31.028 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:03:31.028 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:03:31.028 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:03:31.028 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:03:31.028 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:03:31.028 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:03:31.028 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:03:31.028 
Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:03:31.028 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:03:31.028 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:03:31.028 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:03:31.028 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:03:31.028 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:03:31.028 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:03:31.028 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:03:31.028 Installing symlink pointing to librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:03:31.028 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:03:31.028 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:03:31.028 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:03:31.028 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:03:31.028 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:03:31.028 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:03:31.028 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:03:31.028 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:03:31.028 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:03:31.028 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:03:31.028 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:03:31.028 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:03:31.028 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:03:31.028 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:03:31.028 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:03:31.028 
Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:03:31.028 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:03:31.028 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:03:31.028 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:03:31.029 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:03:31.029 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:03:31.029 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:03:31.029 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:03:31.029 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:03:31.029 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:03:31.029 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.23 00:03:31.029 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:03:31.029 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:03:31.029 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:03:31.029 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.23 00:03:31.029 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:03:31.029 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:03:31.029 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:03:31.029 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:03:31.029 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:03:31.029 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:03:31.029 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:03:31.029 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:03:31.029 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:03:31.029 Installing symlink pointing to 
librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:03:31.029 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:03:31.029 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:03:31.029 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:03:31.029 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.23 00:03:31.029 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:03:31.029 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:03:31.029 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:03:31.029 Installing symlink pointing to librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:03:31.029 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:03:31.029 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:03:31.029 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:03:31.029 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:03:31.029 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:03:31.029 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.23 00:03:31.029 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:03:31.029 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:03:31.029 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:03:31.029 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.23 00:03:31.029 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:03:31.029 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:03:31.029 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:03:31.029 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:03:31.029 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:03:31.029 Installing symlink pointing to librte_node.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.23 00:03:31.029 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:03:31.029 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:31.029 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:31.029 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:31.029 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:31.029 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:31.029 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:31.029 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:31.029 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:31.029 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:31.029 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:31.029 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:31.029 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:31.029 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:31.029 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:31.029 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:31.029 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:31.029 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:31.029 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:31.029 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:31.029 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:31.029 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:31.029 01:14:16 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:03:31.029 01:14:16 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:31.029 00:03:31.029 real 1m22.151s 00:03:31.029 user 14m25.309s 00:03:31.029 sys 1m49.772s 00:03:31.029 01:14:16 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:31.029 01:14:16 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:31.029 ************************************ 00:03:31.029 END TEST build_native_dpdk 00:03:31.029 ************************************ 00:03:31.287 01:14:16 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:31.287 01:14:16 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:31.287 01:14:16 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 
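The DPDK install above ends by symlinking the driver PMDs into a dpdk/pmds-23.0 plugin directory (the symlink-drivers-solibs.sh step) and by dropping libdpdk.pc and libdpdk-libs.pc into build/lib/pkgconfig; the SPDK configure run that follows locates DPDK through exactly those pkg-config files. As a minimal sketch of inspecting that layout by hand, using the workspace paths from the log (DPDK_BUILD is only a shorthand introduced here, not something the job sets):

    DPDK_BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
    export PKG_CONFIG_PATH=$DPDK_BUILD/lib/pkgconfig
    # confirm pkg-config resolves the freshly installed DPDK and report its version
    pkg-config --modversion libdpdk
    # show the compile/link flags a consumer such as SPDK's configure would pick up
    pkg-config --cflags --libs libdpdk
    # driver plugins live in the pmds-23.0 subdirectory populated by symlink-drivers-solibs.sh
    ls -l $DPDK_BUILD/lib/dpdk/pmds-23.0/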
00:03:31.287 01:14:16 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:31.287 01:14:16 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:31.287 01:14:16 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:31.287 01:14:16 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:31.287 01:14:16 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:03:31.287 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:03:31.287 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:31.287 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:31.287 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:03:31.546 Using 'verbs' RDMA provider 00:03:42.086 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:52.064 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:52.064 Creating mk/config.mk...done. 00:03:52.064 Creating mk/cc.flags.mk...done. 00:03:52.064 Type 'make' to build. 00:03:52.064 01:14:36 -- spdk/autobuild.sh@70 -- $ run_test make make -j48 00:03:52.064 01:14:36 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:52.064 01:14:36 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:52.064 01:14:36 -- common/autotest_common.sh@10 -- $ set +x 00:03:52.064 ************************************ 00:03:52.064 START TEST make 00:03:52.064 ************************************ 00:03:52.064 01:14:36 make -- common/autotest_common.sh@1125 -- $ make -j48 00:03:52.064 make[1]: Nothing to be done for 'all'. 00:03:53.002 The Meson build system 00:03:53.002 Version: 1.5.0 00:03:53.002 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:53.002 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:53.002 Build type: native build 00:03:53.002 Project name: libvfio-user 00:03:53.002 Project version: 0.0.1 00:03:53.002 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:53.002 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:53.002 Host machine cpu family: x86_64 00:03:53.002 Host machine cpu: x86_64 00:03:53.002 Run-time dependency threads found: YES 00:03:53.002 Library dl found: YES 00:03:53.002 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:53.002 Run-time dependency json-c found: YES 0.17 00:03:53.002 Run-time dependency cmocka found: YES 1.1.7 00:03:53.002 Program pytest-3 found: NO 00:03:53.002 Program flake8 found: NO 00:03:53.002 Program misspell-fixer found: NO 00:03:53.002 Program restructuredtext-lint found: NO 00:03:53.002 Program valgrind found: YES (/usr/bin/valgrind) 00:03:53.002 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:53.002 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:53.002 Compiler for C supports arguments -Wwrite-strings: YES 00:03:53.002 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:53.002 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:53.002 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:53.002 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:53.002 Build targets in project: 8 00:03:53.002 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:53.002 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:53.002 00:03:53.002 libvfio-user 0.0.1 00:03:53.002 00:03:53.002 User defined options 00:03:53.002 buildtype : debug 00:03:53.002 default_library: shared 00:03:53.002 libdir : /usr/local/lib 00:03:53.002 00:03:53.002 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:53.946 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:54.214 [1/37] Compiling C object samples/null.p/null.c.o 00:03:54.214 [2/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:54.214 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:54.214 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:54.214 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:54.214 [6/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:54.214 [7/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:54.214 [8/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:54.214 [9/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:54.214 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:54.214 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:54.214 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:54.214 [13/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:54.214 [14/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:54.214 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:54.214 [16/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:54.214 [17/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:54.214 [18/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:54.478 [19/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:54.478 [20/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:54.478 [21/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:54.478 [22/37] Compiling C object samples/server.p/server.c.o 00:03:54.478 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:54.478 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:54.478 [25/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:54.478 [26/37] Compiling C object samples/client.p/client.c.o 00:03:54.478 [27/37] Linking target samples/client 00:03:54.478 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:54.478 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:03:54.737 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:54.737 [31/37] Linking target test/unit_tests 00:03:54.737 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:54.737 [33/37] Linking target samples/null 00:03:54.737 
[34/37] Linking target samples/gpio-pci-idio-16 00:03:54.737 [35/37] Linking target samples/server 00:03:54.737 [36/37] Linking target samples/lspci 00:03:54.737 [37/37] Linking target samples/shadow_ioeventfd_server 00:03:55.004 INFO: autodetecting backend as ninja 00:03:55.004 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:55.004 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:55.944 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:55.944 ninja: no work to do. 00:04:34.651 CC lib/ut_mock/mock.o 00:04:34.651 CC lib/ut/ut.o 00:04:34.651 CC lib/log/log.o 00:04:34.651 CC lib/log/log_flags.o 00:04:34.651 CC lib/log/log_deprecated.o 00:04:34.651 LIB libspdk_ut.a 00:04:34.651 LIB libspdk_ut_mock.a 00:04:34.651 LIB libspdk_log.a 00:04:34.651 SO libspdk_ut.so.2.0 00:04:34.651 SO libspdk_ut_mock.so.6.0 00:04:34.651 SO libspdk_log.so.7.1 00:04:34.651 SYMLINK libspdk_ut.so 00:04:34.651 SYMLINK libspdk_ut_mock.so 00:04:34.651 SYMLINK libspdk_log.so 00:04:34.651 CXX lib/trace_parser/trace.o 00:04:34.651 CC lib/ioat/ioat.o 00:04:34.651 CC lib/dma/dma.o 00:04:34.651 CC lib/util/base64.o 00:04:34.651 CC lib/util/bit_array.o 00:04:34.651 CC lib/util/cpuset.o 00:04:34.651 CC lib/util/crc16.o 00:04:34.651 CC lib/util/crc32.o 00:04:34.651 CC lib/util/crc32c.o 00:04:34.651 CC lib/util/crc32_ieee.o 00:04:34.651 CC lib/util/crc64.o 00:04:34.651 CC lib/util/dif.o 00:04:34.651 CC lib/util/fd.o 00:04:34.651 CC lib/util/fd_group.o 00:04:34.651 CC lib/util/file.o 00:04:34.651 CC lib/util/hexlify.o 00:04:34.651 CC lib/util/iov.o 00:04:34.651 CC lib/util/math.o 00:04:34.651 CC lib/util/net.o 00:04:34.651 CC lib/util/pipe.o 00:04:34.651 CC lib/util/strerror_tls.o 00:04:34.651 CC lib/util/string.o 00:04:34.651 CC lib/util/uuid.o 00:04:34.651 CC lib/util/xor.o 00:04:34.651 CC lib/util/zipf.o 00:04:34.651 CC lib/util/md5.o 00:04:34.651 CC lib/vfio_user/host/vfio_user_pci.o 00:04:34.651 CC lib/vfio_user/host/vfio_user.o 00:04:34.651 LIB libspdk_dma.a 00:04:34.651 LIB libspdk_ioat.a 00:04:34.651 SO libspdk_dma.so.5.0 00:04:34.651 SO libspdk_ioat.so.7.0 00:04:34.651 SYMLINK libspdk_dma.so 00:04:34.651 SYMLINK libspdk_ioat.so 00:04:34.651 LIB libspdk_vfio_user.a 00:04:34.651 SO libspdk_vfio_user.so.5.0 00:04:34.651 SYMLINK libspdk_vfio_user.so 00:04:34.651 LIB libspdk_util.a 00:04:34.651 SO libspdk_util.so.10.0 00:04:34.651 SYMLINK libspdk_util.so 00:04:34.651 CC lib/conf/conf.o 00:04:34.651 CC lib/rdma_provider/common.o 00:04:34.651 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:34.651 CC lib/json/json_parse.o 00:04:34.651 CC lib/rdma_utils/rdma_utils.o 00:04:34.651 CC lib/vmd/vmd.o 00:04:34.651 CC lib/json/json_util.o 00:04:34.651 CC lib/idxd/idxd.o 00:04:34.651 CC lib/vmd/led.o 00:04:34.651 CC lib/json/json_write.o 00:04:34.651 CC lib/idxd/idxd_user.o 00:04:34.651 CC lib/env_dpdk/env.o 00:04:34.651 CC lib/idxd/idxd_kernel.o 00:04:34.651 CC lib/env_dpdk/memory.o 00:04:34.651 CC lib/env_dpdk/pci.o 00:04:34.651 CC lib/env_dpdk/init.o 00:04:34.651 CC lib/env_dpdk/threads.o 00:04:34.651 CC lib/env_dpdk/pci_ioat.o 00:04:34.651 CC lib/env_dpdk/pci_virtio.o 00:04:34.651 CC lib/env_dpdk/pci_vmd.o 00:04:34.651 CC lib/env_dpdk/pci_idxd.o 00:04:34.651 CC lib/env_dpdk/pci_event.o 00:04:34.651 CC lib/env_dpdk/sigbus_handler.o 
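The [1/37] through [37/37] steps above are the bundled libvfio-user being compiled by ninja as a shared debug library (the meson summary earlier reports buildtype: debug and default_library: shared), after which it is staged under spdk/build/libvfio-user via DESTDIR and the SPDK objects themselves start building (the CC lib/... lines). A rough standalone equivalent of that libvfio-user step, with directory names taken from the log and slightly simplified, might be:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
    # configure a debug build producing shared libraries, matching the options in the summary above
    meson setup build-debug -Dbuildtype=debug -Ddefault_library=shared
    # build the 37 targets (library, samples, unit tests)
    ninja -C build-debug
    # stage the install under SPDK's build tree rather than /usr/local
    DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C build-debug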
00:04:34.651 CC lib/env_dpdk/pci_dpdk.o 00:04:34.651 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:34.651 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:34.651 LIB libspdk_trace_parser.a 00:04:34.651 SO libspdk_trace_parser.so.6.0 00:04:34.651 LIB libspdk_rdma_provider.a 00:04:34.651 SO libspdk_rdma_provider.so.6.0 00:04:34.651 SYMLINK libspdk_trace_parser.so 00:04:34.651 SYMLINK libspdk_rdma_provider.so 00:04:34.651 LIB libspdk_rdma_utils.a 00:04:34.651 LIB libspdk_conf.a 00:04:34.651 SO libspdk_rdma_utils.so.1.0 00:04:34.651 SO libspdk_conf.so.6.0 00:04:34.651 SYMLINK libspdk_rdma_utils.so 00:04:34.651 SYMLINK libspdk_conf.so 00:04:34.651 LIB libspdk_json.a 00:04:34.651 SO libspdk_json.so.6.0 00:04:34.651 SYMLINK libspdk_json.so 00:04:34.651 LIB libspdk_idxd.a 00:04:34.651 SO libspdk_idxd.so.12.1 00:04:34.651 CC lib/jsonrpc/jsonrpc_server.o 00:04:34.651 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:34.651 CC lib/jsonrpc/jsonrpc_client.o 00:04:34.651 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:34.651 LIB libspdk_vmd.a 00:04:34.651 SYMLINK libspdk_idxd.so 00:04:34.652 SO libspdk_vmd.so.6.0 00:04:34.652 SYMLINK libspdk_vmd.so 00:04:34.652 LIB libspdk_jsonrpc.a 00:04:34.652 SO libspdk_jsonrpc.so.6.0 00:04:34.652 SYMLINK libspdk_jsonrpc.so 00:04:34.652 CC lib/rpc/rpc.o 00:04:34.652 LIB libspdk_rpc.a 00:04:34.652 SO libspdk_rpc.so.6.0 00:04:34.652 SYMLINK libspdk_rpc.so 00:04:34.652 CC lib/trace/trace.o 00:04:34.652 CC lib/trace/trace_flags.o 00:04:34.652 CC lib/trace/trace_rpc.o 00:04:34.652 CC lib/keyring/keyring.o 00:04:34.652 CC lib/keyring/keyring_rpc.o 00:04:34.652 CC lib/notify/notify.o 00:04:34.652 CC lib/notify/notify_rpc.o 00:04:34.652 LIB libspdk_notify.a 00:04:34.652 SO libspdk_notify.so.6.0 00:04:34.652 SYMLINK libspdk_notify.so 00:04:34.652 LIB libspdk_keyring.a 00:04:34.652 SO libspdk_keyring.so.2.0 00:04:34.652 LIB libspdk_trace.a 00:04:34.652 SO libspdk_trace.so.11.0 00:04:34.652 SYMLINK libspdk_keyring.so 00:04:34.652 SYMLINK libspdk_trace.so 00:04:34.652 LIB libspdk_env_dpdk.a 00:04:34.652 SO libspdk_env_dpdk.so.15.0 00:04:34.652 CC lib/thread/thread.o 00:04:34.652 CC lib/sock/sock.o 00:04:34.652 CC lib/thread/iobuf.o 00:04:34.652 CC lib/sock/sock_rpc.o 00:04:34.652 SYMLINK libspdk_env_dpdk.so 00:04:34.910 LIB libspdk_sock.a 00:04:34.910 SO libspdk_sock.so.10.0 00:04:35.169 SYMLINK libspdk_sock.so 00:04:35.169 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:35.169 CC lib/nvme/nvme_ctrlr.o 00:04:35.169 CC lib/nvme/nvme_fabric.o 00:04:35.169 CC lib/nvme/nvme_ns_cmd.o 00:04:35.169 CC lib/nvme/nvme_ns.o 00:04:35.169 CC lib/nvme/nvme_pcie_common.o 00:04:35.169 CC lib/nvme/nvme_pcie.o 00:04:35.169 CC lib/nvme/nvme_qpair.o 00:04:35.169 CC lib/nvme/nvme.o 00:04:35.169 CC lib/nvme/nvme_quirks.o 00:04:35.169 CC lib/nvme/nvme_transport.o 00:04:35.169 CC lib/nvme/nvme_discovery.o 00:04:35.169 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:35.169 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:35.169 CC lib/nvme/nvme_tcp.o 00:04:35.169 CC lib/nvme/nvme_opal.o 00:04:35.169 CC lib/nvme/nvme_io_msg.o 00:04:35.169 CC lib/nvme/nvme_poll_group.o 00:04:35.169 CC lib/nvme/nvme_zns.o 00:04:35.169 CC lib/nvme/nvme_stubs.o 00:04:35.169 CC lib/nvme/nvme_auth.o 00:04:35.169 CC lib/nvme/nvme_cuse.o 00:04:35.169 CC lib/nvme/nvme_vfio_user.o 00:04:35.169 CC lib/nvme/nvme_rdma.o 00:04:36.105 LIB libspdk_thread.a 00:04:36.363 SO libspdk_thread.so.10.2 00:04:36.363 SYMLINK libspdk_thread.so 00:04:36.363 CC lib/vfu_tgt/tgt_endpoint.o 00:04:36.363 CC lib/init/json_config.o 00:04:36.363 CC lib/fsdev/fsdev.o 00:04:36.363 CC lib/blob/blobstore.o 00:04:36.363 CC 
lib/init/subsystem.o 00:04:36.363 CC lib/accel/accel.o 00:04:36.363 CC lib/virtio/virtio.o 00:04:36.363 CC lib/vfu_tgt/tgt_rpc.o 00:04:36.363 CC lib/fsdev/fsdev_io.o 00:04:36.363 CC lib/accel/accel_rpc.o 00:04:36.363 CC lib/init/subsystem_rpc.o 00:04:36.363 CC lib/virtio/virtio_vhost_user.o 00:04:36.363 CC lib/blob/request.o 00:04:36.363 CC lib/blob/zeroes.o 00:04:36.363 CC lib/fsdev/fsdev_rpc.o 00:04:36.363 CC lib/accel/accel_sw.o 00:04:36.363 CC lib/init/rpc.o 00:04:36.363 CC lib/virtio/virtio_vfio_user.o 00:04:36.363 CC lib/virtio/virtio_pci.o 00:04:36.363 CC lib/blob/blob_bs_dev.o 00:04:36.621 LIB libspdk_init.a 00:04:36.879 SO libspdk_init.so.6.0 00:04:36.879 LIB libspdk_virtio.a 00:04:36.879 SYMLINK libspdk_init.so 00:04:36.879 SO libspdk_virtio.so.7.0 00:04:36.879 SYMLINK libspdk_virtio.so 00:04:36.879 LIB libspdk_vfu_tgt.a 00:04:36.879 SO libspdk_vfu_tgt.so.3.0 00:04:36.879 CC lib/event/app.o 00:04:36.879 CC lib/event/reactor.o 00:04:36.879 CC lib/event/log_rpc.o 00:04:36.879 CC lib/event/app_rpc.o 00:04:36.879 CC lib/event/scheduler_static.o 00:04:36.879 SYMLINK libspdk_vfu_tgt.so 00:04:37.137 LIB libspdk_fsdev.a 00:04:37.137 SO libspdk_fsdev.so.1.0 00:04:37.396 SYMLINK libspdk_fsdev.so 00:04:37.396 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:37.396 LIB libspdk_event.a 00:04:37.396 SO libspdk_event.so.15.0 00:04:37.654 SYMLINK libspdk_event.so 00:04:37.654 LIB libspdk_nvme.a 00:04:37.654 LIB libspdk_accel.a 00:04:37.654 SO libspdk_accel.so.16.0 00:04:37.912 SO libspdk_nvme.so.14.0 00:04:37.912 SYMLINK libspdk_accel.so 00:04:37.912 CC lib/bdev/bdev.o 00:04:37.912 CC lib/bdev/bdev_rpc.o 00:04:37.912 CC lib/bdev/bdev_zone.o 00:04:37.912 CC lib/bdev/part.o 00:04:37.912 CC lib/bdev/scsi_nvme.o 00:04:37.912 SYMLINK libspdk_nvme.so 00:04:38.170 LIB libspdk_fuse_dispatcher.a 00:04:38.170 SO libspdk_fuse_dispatcher.so.1.0 00:04:38.170 SYMLINK libspdk_fuse_dispatcher.so 00:04:39.545 LIB libspdk_blob.a 00:04:39.545 SO libspdk_blob.so.11.0 00:04:39.802 SYMLINK libspdk_blob.so 00:04:39.802 CC lib/lvol/lvol.o 00:04:39.802 CC lib/blobfs/blobfs.o 00:04:39.802 CC lib/blobfs/tree.o 00:04:40.735 LIB libspdk_bdev.a 00:04:40.735 SO libspdk_bdev.so.17.0 00:04:40.735 SYMLINK libspdk_bdev.so 00:04:40.735 LIB libspdk_lvol.a 00:04:40.735 SO libspdk_lvol.so.10.0 00:04:40.735 LIB libspdk_blobfs.a 00:04:40.735 SO libspdk_blobfs.so.10.0 00:04:41.003 SYMLINK libspdk_lvol.so 00:04:41.003 CC lib/nvmf/ctrlr.o 00:04:41.003 CC lib/nbd/nbd.o 00:04:41.003 CC lib/ublk/ublk.o 00:04:41.003 CC lib/nvmf/ctrlr_discovery.o 00:04:41.003 CC lib/scsi/dev.o 00:04:41.003 CC lib/ftl/ftl_core.o 00:04:41.003 CC lib/nbd/nbd_rpc.o 00:04:41.003 CC lib/ublk/ublk_rpc.o 00:04:41.003 CC lib/scsi/lun.o 00:04:41.003 CC lib/ftl/ftl_init.o 00:04:41.003 CC lib/scsi/port.o 00:04:41.003 CC lib/nvmf/ctrlr_bdev.o 00:04:41.003 CC lib/ftl/ftl_layout.o 00:04:41.003 CC lib/nvmf/subsystem.o 00:04:41.003 CC lib/scsi/scsi.o 00:04:41.003 CC lib/scsi/scsi_bdev.o 00:04:41.003 CC lib/nvmf/nvmf.o 00:04:41.003 CC lib/nvmf/nvmf_rpc.o 00:04:41.003 CC lib/ftl/ftl_debug.o 00:04:41.003 CC lib/scsi/scsi_pr.o 00:04:41.003 CC lib/nvmf/transport.o 00:04:41.003 CC lib/ftl/ftl_io.o 00:04:41.003 CC lib/scsi/scsi_rpc.o 00:04:41.003 CC lib/scsi/task.o 00:04:41.003 CC lib/ftl/ftl_sb.o 00:04:41.003 CC lib/ftl/ftl_l2p.o 00:04:41.003 CC lib/nvmf/tcp.o 00:04:41.003 CC lib/nvmf/stubs.o 00:04:41.003 CC lib/ftl/ftl_l2p_flat.o 00:04:41.003 CC lib/nvmf/mdns_server.o 00:04:41.003 CC lib/ftl/ftl_nv_cache.o 00:04:41.003 CC lib/nvmf/vfio_user.o 00:04:41.003 CC lib/ftl/ftl_band.o 
00:04:41.003 CC lib/nvmf/rdma.o 00:04:41.003 CC lib/nvmf/auth.o 00:04:41.003 CC lib/ftl/ftl_band_ops.o 00:04:41.003 CC lib/ftl/ftl_writer.o 00:04:41.003 CC lib/ftl/ftl_rq.o 00:04:41.003 CC lib/ftl/ftl_reloc.o 00:04:41.003 CC lib/ftl/ftl_l2p_cache.o 00:04:41.003 CC lib/ftl/ftl_p2l.o 00:04:41.003 CC lib/ftl/ftl_p2l_log.o 00:04:41.003 CC lib/ftl/mngt/ftl_mngt.o 00:04:41.003 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:41.003 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:41.003 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:41.003 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:41.003 SYMLINK libspdk_blobfs.so 00:04:41.003 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:41.261 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:41.261 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:41.261 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:41.261 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:41.261 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:41.261 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:41.261 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:41.261 CC lib/ftl/utils/ftl_conf.o 00:04:41.261 CC lib/ftl/utils/ftl_md.o 00:04:41.261 CC lib/ftl/utils/ftl_mempool.o 00:04:41.261 CC lib/ftl/utils/ftl_bitmap.o 00:04:41.261 CC lib/ftl/utils/ftl_property.o 00:04:41.261 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:41.522 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:41.522 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:41.522 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:41.522 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:41.522 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:41.522 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:41.522 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:41.522 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:41.522 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:41.522 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:41.522 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:41.522 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:41.522 CC lib/ftl/base/ftl_base_dev.o 00:04:41.522 CC lib/ftl/base/ftl_base_bdev.o 00:04:41.522 CC lib/ftl/ftl_trace.o 00:04:41.783 LIB libspdk_nbd.a 00:04:41.783 SO libspdk_nbd.so.7.0 00:04:41.783 SYMLINK libspdk_nbd.so 00:04:41.783 LIB libspdk_scsi.a 00:04:41.783 SO libspdk_scsi.so.9.0 00:04:42.042 SYMLINK libspdk_scsi.so 00:04:42.042 LIB libspdk_ublk.a 00:04:42.042 SO libspdk_ublk.so.3.0 00:04:42.042 SYMLINK libspdk_ublk.so 00:04:42.042 CC lib/vhost/vhost.o 00:04:42.042 CC lib/iscsi/conn.o 00:04:42.042 CC lib/vhost/vhost_rpc.o 00:04:42.042 CC lib/iscsi/init_grp.o 00:04:42.042 CC lib/vhost/vhost_scsi.o 00:04:42.042 CC lib/vhost/vhost_blk.o 00:04:42.042 CC lib/iscsi/iscsi.o 00:04:42.042 CC lib/iscsi/param.o 00:04:42.042 CC lib/vhost/rte_vhost_user.o 00:04:42.042 CC lib/iscsi/portal_grp.o 00:04:42.042 CC lib/iscsi/tgt_node.o 00:04:42.042 CC lib/iscsi/iscsi_subsystem.o 00:04:42.042 CC lib/iscsi/iscsi_rpc.o 00:04:42.042 CC lib/iscsi/task.o 00:04:42.300 LIB libspdk_ftl.a 00:04:42.569 SO libspdk_ftl.so.9.0 00:04:42.830 SYMLINK libspdk_ftl.so 00:04:43.397 LIB libspdk_vhost.a 00:04:43.397 SO libspdk_vhost.so.8.0 00:04:43.397 SYMLINK libspdk_vhost.so 00:04:43.655 LIB libspdk_nvmf.a 00:04:43.655 SO libspdk_nvmf.so.19.0 00:04:43.655 LIB libspdk_iscsi.a 00:04:43.655 SO libspdk_iscsi.so.8.0 00:04:43.920 SYMLINK libspdk_nvmf.so 00:04:43.920 SYMLINK libspdk_iscsi.so 00:04:44.305 CC module/env_dpdk/env_dpdk_rpc.o 00:04:44.305 CC module/vfu_device/vfu_virtio.o 00:04:44.305 CC module/vfu_device/vfu_virtio_blk.o 00:04:44.305 CC module/vfu_device/vfu_virtio_scsi.o 00:04:44.305 CC module/vfu_device/vfu_virtio_rpc.o 00:04:44.305 CC module/vfu_device/vfu_virtio_fs.o 00:04:44.305 CC module/accel/ioat/accel_ioat.o 00:04:44.305 CC 
module/accel/ioat/accel_ioat_rpc.o 00:04:44.305 CC module/keyring/file/keyring.o 00:04:44.305 CC module/keyring/linux/keyring.o 00:04:44.305 CC module/keyring/file/keyring_rpc.o 00:04:44.305 CC module/keyring/linux/keyring_rpc.o 00:04:44.305 CC module/accel/iaa/accel_iaa.o 00:04:44.305 CC module/blob/bdev/blob_bdev.o 00:04:44.305 CC module/accel/dsa/accel_dsa.o 00:04:44.305 CC module/accel/iaa/accel_iaa_rpc.o 00:04:44.305 CC module/sock/posix/posix.o 00:04:44.305 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:44.305 CC module/accel/dsa/accel_dsa_rpc.o 00:04:44.305 CC module/fsdev/aio/fsdev_aio.o 00:04:44.305 CC module/accel/error/accel_error.o 00:04:44.305 CC module/accel/error/accel_error_rpc.o 00:04:44.305 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:44.305 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:44.305 CC module/fsdev/aio/linux_aio_mgr.o 00:04:44.305 CC module/scheduler/gscheduler/gscheduler.o 00:04:44.305 LIB libspdk_env_dpdk_rpc.a 00:04:44.305 SO libspdk_env_dpdk_rpc.so.6.0 00:04:44.305 LIB libspdk_keyring_file.a 00:04:44.305 LIB libspdk_scheduler_gscheduler.a 00:04:44.305 SYMLINK libspdk_env_dpdk_rpc.so 00:04:44.305 SO libspdk_keyring_file.so.2.0 00:04:44.305 LIB libspdk_scheduler_dpdk_governor.a 00:04:44.305 SO libspdk_scheduler_gscheduler.so.4.0 00:04:44.305 LIB libspdk_accel_ioat.a 00:04:44.305 LIB libspdk_keyring_linux.a 00:04:44.305 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:44.305 SO libspdk_keyring_linux.so.1.0 00:04:44.305 SO libspdk_accel_ioat.so.6.0 00:04:44.305 LIB libspdk_accel_iaa.a 00:04:44.305 SYMLINK libspdk_keyring_file.so 00:04:44.305 SYMLINK libspdk_scheduler_gscheduler.so 00:04:44.577 LIB libspdk_scheduler_dynamic.a 00:04:44.577 SO libspdk_accel_iaa.so.3.0 00:04:44.577 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:44.577 SO libspdk_scheduler_dynamic.so.4.0 00:04:44.577 SYMLINK libspdk_keyring_linux.so 00:04:44.577 SYMLINK libspdk_accel_ioat.so 00:04:44.577 LIB libspdk_accel_error.a 00:04:44.577 LIB libspdk_blob_bdev.a 00:04:44.577 LIB libspdk_accel_dsa.a 00:04:44.577 SYMLINK libspdk_accel_iaa.so 00:04:44.577 SO libspdk_accel_error.so.2.0 00:04:44.577 SO libspdk_blob_bdev.so.11.0 00:04:44.577 SYMLINK libspdk_scheduler_dynamic.so 00:04:44.577 SO libspdk_accel_dsa.so.5.0 00:04:44.577 SYMLINK libspdk_accel_error.so 00:04:44.577 SYMLINK libspdk_blob_bdev.so 00:04:44.577 SYMLINK libspdk_accel_dsa.so 00:04:44.835 LIB libspdk_vfu_device.a 00:04:44.835 SO libspdk_vfu_device.so.3.0 00:04:44.835 CC module/bdev/nvme/bdev_nvme.o 00:04:44.835 CC module/bdev/lvol/vbdev_lvol.o 00:04:44.835 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:44.835 CC module/bdev/passthru/vbdev_passthru.o 00:04:44.835 CC module/blobfs/bdev/blobfs_bdev.o 00:04:44.835 CC module/bdev/null/bdev_null.o 00:04:44.835 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:44.835 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:44.835 CC module/bdev/delay/vbdev_delay.o 00:04:44.835 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:44.835 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:44.835 CC module/bdev/null/bdev_null_rpc.o 00:04:44.835 CC module/bdev/iscsi/bdev_iscsi.o 00:04:44.835 CC module/bdev/nvme/nvme_rpc.o 00:04:44.835 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:44.835 CC module/bdev/gpt/gpt.o 00:04:44.835 CC module/bdev/nvme/bdev_mdns_client.o 00:04:44.835 CC module/bdev/error/vbdev_error.o 00:04:44.835 CC module/bdev/malloc/bdev_malloc.o 00:04:44.835 CC module/bdev/error/vbdev_error_rpc.o 00:04:44.835 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:44.835 CC 
module/bdev/gpt/vbdev_gpt.o 00:04:44.835 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:44.835 CC module/bdev/nvme/vbdev_opal.o 00:04:44.835 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:44.835 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:44.835 CC module/bdev/split/vbdev_split.o 00:04:44.836 CC module/bdev/split/vbdev_split_rpc.o 00:04:44.836 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:44.836 CC module/bdev/raid/bdev_raid.o 00:04:44.836 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:44.836 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:44.836 CC module/bdev/raid/bdev_raid_rpc.o 00:04:44.836 CC module/bdev/raid/bdev_raid_sb.o 00:04:44.836 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:44.836 CC module/bdev/raid/raid0.o 00:04:44.836 CC module/bdev/raid/raid1.o 00:04:44.836 CC module/bdev/ftl/bdev_ftl.o 00:04:44.836 CC module/bdev/raid/concat.o 00:04:44.836 CC module/bdev/aio/bdev_aio.o 00:04:44.836 CC module/bdev/aio/bdev_aio_rpc.o 00:04:44.836 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:44.836 SYMLINK libspdk_vfu_device.so 00:04:45.094 LIB libspdk_sock_posix.a 00:04:45.094 LIB libspdk_fsdev_aio.a 00:04:45.095 SO libspdk_sock_posix.so.6.0 00:04:45.095 SO libspdk_fsdev_aio.so.1.0 00:04:45.095 LIB libspdk_bdev_gpt.a 00:04:45.353 LIB libspdk_blobfs_bdev.a 00:04:45.353 SO libspdk_bdev_gpt.so.6.0 00:04:45.353 SO libspdk_blobfs_bdev.so.6.0 00:04:45.353 SYMLINK libspdk_fsdev_aio.so 00:04:45.353 LIB libspdk_bdev_split.a 00:04:45.353 LIB libspdk_bdev_ftl.a 00:04:45.353 SYMLINK libspdk_sock_posix.so 00:04:45.353 SO libspdk_bdev_ftl.so.6.0 00:04:45.353 SYMLINK libspdk_bdev_gpt.so 00:04:45.353 SO libspdk_bdev_split.so.6.0 00:04:45.353 SYMLINK libspdk_blobfs_bdev.so 00:04:45.353 LIB libspdk_bdev_null.a 00:04:45.353 LIB libspdk_bdev_error.a 00:04:45.353 SYMLINK libspdk_bdev_ftl.so 00:04:45.353 SYMLINK libspdk_bdev_split.so 00:04:45.353 SO libspdk_bdev_null.so.6.0 00:04:45.353 SO libspdk_bdev_error.so.6.0 00:04:45.353 LIB libspdk_bdev_passthru.a 00:04:45.353 LIB libspdk_bdev_aio.a 00:04:45.353 SO libspdk_bdev_aio.so.6.0 00:04:45.353 SO libspdk_bdev_passthru.so.6.0 00:04:45.353 LIB libspdk_bdev_zone_block.a 00:04:45.353 SYMLINK libspdk_bdev_error.so 00:04:45.353 SYMLINK libspdk_bdev_null.so 00:04:45.353 LIB libspdk_bdev_malloc.a 00:04:45.353 SO libspdk_bdev_zone_block.so.6.0 00:04:45.353 SO libspdk_bdev_malloc.so.6.0 00:04:45.353 SYMLINK libspdk_bdev_passthru.so 00:04:45.353 SYMLINK libspdk_bdev_aio.so 00:04:45.611 SYMLINK libspdk_bdev_zone_block.so 00:04:45.611 LIB libspdk_bdev_delay.a 00:04:45.611 LIB libspdk_bdev_iscsi.a 00:04:45.611 SYMLINK libspdk_bdev_malloc.so 00:04:45.611 SO libspdk_bdev_delay.so.6.0 00:04:45.612 SO libspdk_bdev_iscsi.so.6.0 00:04:45.612 LIB libspdk_bdev_virtio.a 00:04:45.612 LIB libspdk_bdev_lvol.a 00:04:45.612 SO libspdk_bdev_virtio.so.6.0 00:04:45.612 SYMLINK libspdk_bdev_delay.so 00:04:45.612 SYMLINK libspdk_bdev_iscsi.so 00:04:45.612 SO libspdk_bdev_lvol.so.6.0 00:04:45.612 SYMLINK libspdk_bdev_virtio.so 00:04:45.612 SYMLINK libspdk_bdev_lvol.so 00:04:46.180 LIB libspdk_bdev_raid.a 00:04:46.180 SO libspdk_bdev_raid.so.6.0 00:04:46.180 SYMLINK libspdk_bdev_raid.so 00:04:47.118 LIB libspdk_bdev_nvme.a 00:04:47.118 SO libspdk_bdev_nvme.so.7.0 00:04:47.377 SYMLINK libspdk_bdev_nvme.so 00:04:47.635 CC module/event/subsystems/vmd/vmd.o 00:04:47.635 CC module/event/subsystems/iobuf/iobuf.o 00:04:47.635 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:47.635 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:47.635 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:47.635 CC 
module/event/subsystems/keyring/keyring.o 00:04:47.635 CC module/event/subsystems/scheduler/scheduler.o 00:04:47.635 CC module/event/subsystems/fsdev/fsdev.o 00:04:47.635 CC module/event/subsystems/sock/sock.o 00:04:47.635 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:47.895 LIB libspdk_event_keyring.a 00:04:47.895 LIB libspdk_event_vhost_blk.a 00:04:47.895 LIB libspdk_event_fsdev.a 00:04:47.895 LIB libspdk_event_scheduler.a 00:04:47.895 LIB libspdk_event_vfu_tgt.a 00:04:47.895 LIB libspdk_event_vmd.a 00:04:47.895 LIB libspdk_event_sock.a 00:04:47.895 SO libspdk_event_keyring.so.1.0 00:04:47.895 LIB libspdk_event_iobuf.a 00:04:47.895 SO libspdk_event_vfu_tgt.so.3.0 00:04:47.895 SO libspdk_event_scheduler.so.4.0 00:04:47.895 SO libspdk_event_fsdev.so.1.0 00:04:47.895 SO libspdk_event_vhost_blk.so.3.0 00:04:47.895 SO libspdk_event_sock.so.5.0 00:04:47.895 SO libspdk_event_vmd.so.6.0 00:04:47.895 SO libspdk_event_iobuf.so.3.0 00:04:47.895 SYMLINK libspdk_event_keyring.so 00:04:47.895 SYMLINK libspdk_event_fsdev.so 00:04:47.895 SYMLINK libspdk_event_vhost_blk.so 00:04:47.895 SYMLINK libspdk_event_vfu_tgt.so 00:04:47.895 SYMLINK libspdk_event_scheduler.so 00:04:47.895 SYMLINK libspdk_event_sock.so 00:04:47.895 SYMLINK libspdk_event_vmd.so 00:04:47.895 SYMLINK libspdk_event_iobuf.so 00:04:48.154 CC module/event/subsystems/accel/accel.o 00:04:48.154 LIB libspdk_event_accel.a 00:04:48.154 SO libspdk_event_accel.so.6.0 00:04:48.413 SYMLINK libspdk_event_accel.so 00:04:48.413 CC module/event/subsystems/bdev/bdev.o 00:04:48.670 LIB libspdk_event_bdev.a 00:04:48.670 SO libspdk_event_bdev.so.6.0 00:04:48.670 SYMLINK libspdk_event_bdev.so 00:04:48.929 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:48.929 CC module/event/subsystems/nbd/nbd.o 00:04:48.929 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:48.929 CC module/event/subsystems/ublk/ublk.o 00:04:48.929 CC module/event/subsystems/scsi/scsi.o 00:04:48.929 LIB libspdk_event_ublk.a 00:04:48.929 LIB libspdk_event_nbd.a 00:04:49.187 SO libspdk_event_ublk.so.3.0 00:04:49.187 LIB libspdk_event_scsi.a 00:04:49.187 SO libspdk_event_nbd.so.6.0 00:04:49.187 SO libspdk_event_scsi.so.6.0 00:04:49.187 SYMLINK libspdk_event_ublk.so 00:04:49.187 SYMLINK libspdk_event_nbd.so 00:04:49.187 SYMLINK libspdk_event_scsi.so 00:04:49.187 LIB libspdk_event_nvmf.a 00:04:49.187 SO libspdk_event_nvmf.so.6.0 00:04:49.187 SYMLINK libspdk_event_nvmf.so 00:04:49.187 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:49.187 CC module/event/subsystems/iscsi/iscsi.o 00:04:49.446 LIB libspdk_event_vhost_scsi.a 00:04:49.446 LIB libspdk_event_iscsi.a 00:04:49.446 SO libspdk_event_iscsi.so.6.0 00:04:49.446 SO libspdk_event_vhost_scsi.so.3.0 00:04:49.446 SYMLINK libspdk_event_iscsi.so 00:04:49.446 SYMLINK libspdk_event_vhost_scsi.so 00:04:49.704 SO libspdk.so.6.0 00:04:49.704 SYMLINK libspdk.so 00:04:49.966 CC app/trace_record/trace_record.o 00:04:49.966 CC app/spdk_nvme_perf/perf.o 00:04:49.966 CC app/spdk_lspci/spdk_lspci.o 00:04:49.966 CC app/spdk_top/spdk_top.o 00:04:49.966 CC app/spdk_nvme_discover/discovery_aer.o 00:04:49.966 CC app/spdk_nvme_identify/identify.o 00:04:49.966 CXX app/trace/trace.o 00:04:49.966 CC test/rpc_client/rpc_client_test.o 00:04:49.966 TEST_HEADER include/spdk/accel.h 00:04:49.966 TEST_HEADER include/spdk/accel_module.h 00:04:49.966 TEST_HEADER include/spdk/assert.h 00:04:49.966 TEST_HEADER include/spdk/barrier.h 00:04:49.966 TEST_HEADER include/spdk/base64.h 00:04:49.966 TEST_HEADER include/spdk/bdev.h 00:04:49.966 TEST_HEADER 
include/spdk/bdev_module.h 00:04:49.966 TEST_HEADER include/spdk/bdev_zone.h 00:04:49.966 TEST_HEADER include/spdk/bit_array.h 00:04:49.966 TEST_HEADER include/spdk/bit_pool.h 00:04:49.966 TEST_HEADER include/spdk/blob_bdev.h 00:04:49.966 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:49.966 TEST_HEADER include/spdk/blobfs.h 00:04:49.966 TEST_HEADER include/spdk/blob.h 00:04:49.966 TEST_HEADER include/spdk/conf.h 00:04:49.966 TEST_HEADER include/spdk/config.h 00:04:49.966 TEST_HEADER include/spdk/cpuset.h 00:04:49.966 TEST_HEADER include/spdk/crc16.h 00:04:49.966 TEST_HEADER include/spdk/crc32.h 00:04:49.966 TEST_HEADER include/spdk/crc64.h 00:04:49.966 TEST_HEADER include/spdk/dif.h 00:04:49.966 TEST_HEADER include/spdk/dma.h 00:04:49.966 TEST_HEADER include/spdk/endian.h 00:04:49.966 TEST_HEADER include/spdk/env_dpdk.h 00:04:49.966 TEST_HEADER include/spdk/env.h 00:04:49.966 TEST_HEADER include/spdk/event.h 00:04:49.966 TEST_HEADER include/spdk/fd.h 00:04:49.966 TEST_HEADER include/spdk/fd_group.h 00:04:49.966 TEST_HEADER include/spdk/file.h 00:04:49.966 TEST_HEADER include/spdk/fsdev.h 00:04:49.966 TEST_HEADER include/spdk/fsdev_module.h 00:04:49.966 TEST_HEADER include/spdk/ftl.h 00:04:49.966 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:49.966 TEST_HEADER include/spdk/gpt_spec.h 00:04:49.966 TEST_HEADER include/spdk/hexlify.h 00:04:49.966 TEST_HEADER include/spdk/histogram_data.h 00:04:49.966 TEST_HEADER include/spdk/idxd.h 00:04:49.966 TEST_HEADER include/spdk/idxd_spec.h 00:04:49.966 TEST_HEADER include/spdk/init.h 00:04:49.966 TEST_HEADER include/spdk/ioat.h 00:04:49.966 TEST_HEADER include/spdk/ioat_spec.h 00:04:49.966 TEST_HEADER include/spdk/iscsi_spec.h 00:04:49.966 TEST_HEADER include/spdk/json.h 00:04:49.966 TEST_HEADER include/spdk/jsonrpc.h 00:04:49.966 TEST_HEADER include/spdk/keyring.h 00:04:49.966 TEST_HEADER include/spdk/keyring_module.h 00:04:49.966 TEST_HEADER include/spdk/log.h 00:04:49.966 TEST_HEADER include/spdk/likely.h 00:04:49.966 TEST_HEADER include/spdk/lvol.h 00:04:49.967 TEST_HEADER include/spdk/md5.h 00:04:49.967 TEST_HEADER include/spdk/memory.h 00:04:49.967 TEST_HEADER include/spdk/mmio.h 00:04:49.967 TEST_HEADER include/spdk/nbd.h 00:04:49.967 TEST_HEADER include/spdk/net.h 00:04:49.967 TEST_HEADER include/spdk/notify.h 00:04:49.967 TEST_HEADER include/spdk/nvme.h 00:04:49.967 TEST_HEADER include/spdk/nvme_intel.h 00:04:49.967 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:49.967 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:49.967 TEST_HEADER include/spdk/nvme_spec.h 00:04:49.967 TEST_HEADER include/spdk/nvme_zns.h 00:04:49.967 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:49.967 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:49.967 TEST_HEADER include/spdk/nvmf.h 00:04:49.967 TEST_HEADER include/spdk/nvmf_spec.h 00:04:49.967 TEST_HEADER include/spdk/nvmf_transport.h 00:04:49.967 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:49.967 TEST_HEADER include/spdk/opal.h 00:04:49.967 TEST_HEADER include/spdk/opal_spec.h 00:04:49.967 TEST_HEADER include/spdk/pci_ids.h 00:04:49.967 TEST_HEADER include/spdk/pipe.h 00:04:49.967 TEST_HEADER include/spdk/queue.h 00:04:49.967 TEST_HEADER include/spdk/reduce.h 00:04:49.967 TEST_HEADER include/spdk/rpc.h 00:04:49.967 TEST_HEADER include/spdk/scheduler.h 00:04:49.967 TEST_HEADER include/spdk/scsi.h 00:04:49.967 TEST_HEADER include/spdk/scsi_spec.h 00:04:49.967 TEST_HEADER include/spdk/sock.h 00:04:49.967 TEST_HEADER include/spdk/stdinc.h 00:04:49.967 TEST_HEADER include/spdk/string.h 00:04:49.967 TEST_HEADER 
include/spdk/thread.h 00:04:49.967 TEST_HEADER include/spdk/trace.h 00:04:49.967 TEST_HEADER include/spdk/trace_parser.h 00:04:49.967 TEST_HEADER include/spdk/tree.h 00:04:49.967 TEST_HEADER include/spdk/ublk.h 00:04:49.967 TEST_HEADER include/spdk/util.h 00:04:49.967 TEST_HEADER include/spdk/uuid.h 00:04:49.967 TEST_HEADER include/spdk/version.h 00:04:49.967 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:49.967 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:49.967 TEST_HEADER include/spdk/vhost.h 00:04:49.967 TEST_HEADER include/spdk/vmd.h 00:04:49.967 TEST_HEADER include/spdk/xor.h 00:04:49.967 TEST_HEADER include/spdk/zipf.h 00:04:49.967 CXX test/cpp_headers/accel.o 00:04:49.967 CXX test/cpp_headers/accel_module.o 00:04:49.967 CXX test/cpp_headers/assert.o 00:04:49.967 CXX test/cpp_headers/barrier.o 00:04:49.967 CC app/spdk_dd/spdk_dd.o 00:04:49.967 CXX test/cpp_headers/base64.o 00:04:49.967 CXX test/cpp_headers/bdev.o 00:04:49.967 CXX test/cpp_headers/bdev_module.o 00:04:49.967 CXX test/cpp_headers/bdev_zone.o 00:04:49.967 CXX test/cpp_headers/bit_array.o 00:04:49.967 CXX test/cpp_headers/bit_pool.o 00:04:49.967 CXX test/cpp_headers/blob_bdev.o 00:04:49.967 CC app/iscsi_tgt/iscsi_tgt.o 00:04:49.967 CXX test/cpp_headers/blobfs_bdev.o 00:04:49.967 CXX test/cpp_headers/blobfs.o 00:04:49.967 CXX test/cpp_headers/blob.o 00:04:49.967 CXX test/cpp_headers/conf.o 00:04:49.967 CXX test/cpp_headers/config.o 00:04:49.967 CXX test/cpp_headers/cpuset.o 00:04:49.967 CXX test/cpp_headers/crc16.o 00:04:49.967 CC app/nvmf_tgt/nvmf_main.o 00:04:49.967 CXX test/cpp_headers/crc32.o 00:04:49.967 CC examples/ioat/verify/verify.o 00:04:49.967 CC examples/ioat/perf/perf.o 00:04:49.967 CC examples/util/zipf/zipf.o 00:04:49.967 CC test/thread/poller_perf/poller_perf.o 00:04:49.967 CC test/app/stub/stub.o 00:04:49.967 CC test/app/histogram_perf/histogram_perf.o 00:04:49.967 CC test/env/vtophys/vtophys.o 00:04:49.967 CC test/env/memory/memory_ut.o 00:04:49.967 CC test/app/jsoncat/jsoncat.o 00:04:49.967 CC app/spdk_tgt/spdk_tgt.o 00:04:49.967 CC app/fio/nvme/fio_plugin.o 00:04:49.967 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:49.967 CC test/env/pci/pci_ut.o 00:04:49.967 CC test/dma/test_dma/test_dma.o 00:04:49.967 CC test/app/bdev_svc/bdev_svc.o 00:04:49.967 CC app/fio/bdev/fio_plugin.o 00:04:50.231 LINK spdk_lspci 00:04:50.231 CC test/env/mem_callbacks/mem_callbacks.o 00:04:50.231 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:50.231 LINK rpc_client_test 00:04:50.231 LINK interrupt_tgt 00:04:50.231 LINK spdk_nvme_discover 00:04:50.231 LINK poller_perf 00:04:50.231 LINK vtophys 00:04:50.495 LINK histogram_perf 00:04:50.495 LINK jsoncat 00:04:50.495 LINK zipf 00:04:50.495 CXX test/cpp_headers/crc64.o 00:04:50.495 CXX test/cpp_headers/dif.o 00:04:50.495 CXX test/cpp_headers/dma.o 00:04:50.495 CXX test/cpp_headers/endian.o 00:04:50.495 LINK spdk_trace_record 00:04:50.495 CXX test/cpp_headers/env_dpdk.o 00:04:50.495 CXX test/cpp_headers/env.o 00:04:50.495 CXX test/cpp_headers/event.o 00:04:50.495 CXX test/cpp_headers/fd_group.o 00:04:50.495 LINK env_dpdk_post_init 00:04:50.495 LINK stub 00:04:50.495 CXX test/cpp_headers/fd.o 00:04:50.495 CXX test/cpp_headers/file.o 00:04:50.495 CXX test/cpp_headers/fsdev.o 00:04:50.495 CXX test/cpp_headers/fsdev_module.o 00:04:50.495 LINK nvmf_tgt 00:04:50.495 LINK iscsi_tgt 00:04:50.495 CXX test/cpp_headers/ftl.o 00:04:50.495 CXX test/cpp_headers/fuse_dispatcher.o 00:04:50.495 LINK verify 00:04:50.495 LINK ioat_perf 00:04:50.495 CXX test/cpp_headers/gpt_spec.o 
00:04:50.495 CXX test/cpp_headers/hexlify.o 00:04:50.495 CXX test/cpp_headers/histogram_data.o 00:04:50.495 LINK bdev_svc 00:04:50.495 LINK spdk_tgt 00:04:50.495 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:50.495 CXX test/cpp_headers/idxd.o 00:04:50.495 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:50.754 LINK mem_callbacks 00:04:50.754 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:50.754 CXX test/cpp_headers/idxd_spec.o 00:04:50.754 CXX test/cpp_headers/init.o 00:04:50.754 CXX test/cpp_headers/ioat.o 00:04:50.754 CXX test/cpp_headers/ioat_spec.o 00:04:50.754 CXX test/cpp_headers/iscsi_spec.o 00:04:50.754 CXX test/cpp_headers/json.o 00:04:50.754 LINK spdk_dd 00:04:50.754 CXX test/cpp_headers/jsonrpc.o 00:04:50.754 CXX test/cpp_headers/keyring.o 00:04:50.754 LINK spdk_trace 00:04:50.754 CXX test/cpp_headers/keyring_module.o 00:04:50.754 LINK pci_ut 00:04:50.754 CXX test/cpp_headers/likely.o 00:04:50.754 CXX test/cpp_headers/log.o 00:04:50.754 CXX test/cpp_headers/lvol.o 00:04:50.754 CXX test/cpp_headers/md5.o 00:04:50.754 CXX test/cpp_headers/memory.o 00:04:50.754 CXX test/cpp_headers/mmio.o 00:04:50.754 CXX test/cpp_headers/nbd.o 00:04:50.754 CXX test/cpp_headers/net.o 00:04:50.754 CXX test/cpp_headers/notify.o 00:04:50.754 CXX test/cpp_headers/nvme.o 00:04:50.754 CXX test/cpp_headers/nvme_intel.o 00:04:50.754 CXX test/cpp_headers/nvme_ocssd.o 00:04:51.016 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:51.016 CXX test/cpp_headers/nvme_spec.o 00:04:51.016 CXX test/cpp_headers/nvme_zns.o 00:04:51.016 CXX test/cpp_headers/nvmf_cmd.o 00:04:51.016 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:51.016 CC test/event/event_perf/event_perf.o 00:04:51.016 CC test/event/reactor/reactor.o 00:04:51.016 CC test/event/reactor_perf/reactor_perf.o 00:04:51.016 CXX test/cpp_headers/nvmf.o 00:04:51.016 CXX test/cpp_headers/nvmf_spec.o 00:04:51.016 CXX test/cpp_headers/nvmf_transport.o 00:04:51.016 CC examples/sock/hello_world/hello_sock.o 00:04:51.016 CXX test/cpp_headers/opal.o 00:04:51.016 CXX test/cpp_headers/opal_spec.o 00:04:51.016 CC examples/vmd/lsvmd/lsvmd.o 00:04:51.016 CC test/event/app_repeat/app_repeat.o 00:04:51.016 CXX test/cpp_headers/pci_ids.o 00:04:51.016 LINK nvme_fuzz 00:04:51.016 CC examples/vmd/led/led.o 00:04:51.016 CXX test/cpp_headers/pipe.o 00:04:51.016 LINK spdk_nvme 00:04:51.016 LINK spdk_bdev 00:04:51.016 LINK test_dma 00:04:51.017 CXX test/cpp_headers/queue.o 00:04:51.017 CC examples/idxd/perf/perf.o 00:04:51.017 CC examples/thread/thread/thread_ex.o 00:04:51.278 CC test/event/scheduler/scheduler.o 00:04:51.278 CXX test/cpp_headers/reduce.o 00:04:51.278 CXX test/cpp_headers/rpc.o 00:04:51.278 CXX test/cpp_headers/scheduler.o 00:04:51.278 CXX test/cpp_headers/scsi.o 00:04:51.278 CXX test/cpp_headers/scsi_spec.o 00:04:51.278 CXX test/cpp_headers/sock.o 00:04:51.278 CXX test/cpp_headers/stdinc.o 00:04:51.278 CXX test/cpp_headers/string.o 00:04:51.278 CXX test/cpp_headers/thread.o 00:04:51.278 CXX test/cpp_headers/trace.o 00:04:51.278 CXX test/cpp_headers/trace_parser.o 00:04:51.278 LINK event_perf 00:04:51.278 CXX test/cpp_headers/tree.o 00:04:51.278 LINK reactor 00:04:51.278 CXX test/cpp_headers/ublk.o 00:04:51.278 CXX test/cpp_headers/util.o 00:04:51.278 LINK reactor_perf 00:04:51.278 CXX test/cpp_headers/uuid.o 00:04:51.278 CXX test/cpp_headers/version.o 00:04:51.278 CXX test/cpp_headers/vfio_user_pci.o 00:04:51.278 CXX test/cpp_headers/vfio_user_spec.o 00:04:51.278 LINK lsvmd 00:04:51.278 CXX test/cpp_headers/vhost.o 00:04:51.278 CXX test/cpp_headers/vmd.o 00:04:51.278 CC 
app/vhost/vhost.o 00:04:51.278 CXX test/cpp_headers/xor.o 00:04:51.278 CXX test/cpp_headers/zipf.o 00:04:51.540 LINK app_repeat 00:04:51.540 LINK led 00:04:51.540 LINK spdk_nvme_perf 00:04:51.540 LINK vhost_fuzz 00:04:51.540 LINK spdk_nvme_identify 00:04:51.540 LINK hello_sock 00:04:51.540 LINK memory_ut 00:04:51.540 LINK spdk_top 00:04:51.540 LINK thread 00:04:51.799 LINK scheduler 00:04:51.799 CC test/nvme/err_injection/err_injection.o 00:04:51.799 CC test/nvme/e2edp/nvme_dp.o 00:04:51.799 CC test/nvme/reset/reset.o 00:04:51.799 CC test/nvme/boot_partition/boot_partition.o 00:04:51.799 CC test/nvme/reserve/reserve.o 00:04:51.799 CC test/nvme/connect_stress/connect_stress.o 00:04:51.799 CC test/nvme/sgl/sgl.o 00:04:51.799 CC test/nvme/aer/aer.o 00:04:51.799 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:51.799 CC test/nvme/overhead/overhead.o 00:04:51.799 CC test/nvme/cuse/cuse.o 00:04:51.799 CC test/nvme/startup/startup.o 00:04:51.799 CC test/nvme/simple_copy/simple_copy.o 00:04:51.799 CC test/nvme/fdp/fdp.o 00:04:51.799 CC test/nvme/compliance/nvme_compliance.o 00:04:51.799 CC test/nvme/fused_ordering/fused_ordering.o 00:04:51.799 LINK vhost 00:04:51.799 CC test/accel/dif/dif.o 00:04:51.799 LINK idxd_perf 00:04:51.799 CC test/blobfs/mkfs/mkfs.o 00:04:51.799 CC test/lvol/esnap/esnap.o 00:04:52.058 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:52.058 CC examples/nvme/hello_world/hello_world.o 00:04:52.058 CC examples/nvme/abort/abort.o 00:04:52.058 CC examples/nvme/arbitration/arbitration.o 00:04:52.058 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:52.058 CC examples/nvme/reconnect/reconnect.o 00:04:52.058 CC examples/nvme/hotplug/hotplug.o 00:04:52.058 LINK err_injection 00:04:52.058 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:52.058 LINK doorbell_aers 00:04:52.058 LINK fused_ordering 00:04:52.058 LINK reserve 00:04:52.058 LINK simple_copy 00:04:52.058 LINK startup 00:04:52.058 LINK boot_partition 00:04:52.058 CC examples/accel/perf/accel_perf.o 00:04:52.058 LINK connect_stress 00:04:52.058 LINK sgl 00:04:52.058 LINK reset 00:04:52.058 LINK mkfs 00:04:52.058 LINK nvme_compliance 00:04:52.058 CC examples/blob/hello_world/hello_blob.o 00:04:52.317 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:52.317 CC examples/blob/cli/blobcli.o 00:04:52.317 LINK pmr_persistence 00:04:52.317 LINK aer 00:04:52.317 LINK nvme_dp 00:04:52.317 LINK overhead 00:04:52.317 LINK hello_world 00:04:52.317 LINK hotplug 00:04:52.317 LINK cmb_copy 00:04:52.317 LINK fdp 00:04:52.317 LINK abort 00:04:52.317 LINK reconnect 00:04:52.574 LINK arbitration 00:04:52.574 LINK hello_fsdev 00:04:52.574 LINK dif 00:04:52.574 LINK hello_blob 00:04:52.574 LINK nvme_manage 00:04:52.832 LINK blobcli 00:04:52.832 LINK accel_perf 00:04:53.090 CC test/bdev/bdevio/bdevio.o 00:04:53.090 LINK iscsi_fuzz 00:04:53.090 CC examples/bdev/hello_world/hello_bdev.o 00:04:53.090 CC examples/bdev/bdevperf/bdevperf.o 00:04:53.348 LINK cuse 00:04:53.348 LINK bdevio 00:04:53.348 LINK hello_bdev 00:04:53.914 LINK bdevperf 00:04:54.479 CC examples/nvmf/nvmf/nvmf.o 00:04:54.737 LINK nvmf 00:04:57.265 LINK esnap 00:04:57.265 00:04:57.265 real 1m6.288s 00:04:57.265 user 9m3.914s 00:04:57.265 sys 1m57.500s 00:04:57.265 01:15:42 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:57.265 01:15:42 make -- common/autotest_common.sh@10 -- $ set +x 00:04:57.265 ************************************ 00:04:57.265 END TEST make 00:04:57.265 ************************************ 00:04:57.265 01:15:42 -- spdk/autobuild.sh@1 -- $ 
stop_monitor_resources 00:04:57.265 01:15:42 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:57.265 01:15:42 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:57.265 01:15:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:57.265 01:15:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:57.265 01:15:42 -- pm/common@44 -- $ pid=1369628 00:04:57.265 01:15:42 -- pm/common@50 -- $ kill -TERM 1369628 00:04:57.265 01:15:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:57.265 01:15:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:57.265 01:15:42 -- pm/common@44 -- $ pid=1369630 00:04:57.265 01:15:42 -- pm/common@50 -- $ kill -TERM 1369630 00:04:57.265 01:15:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:57.265 01:15:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:57.265 01:15:42 -- pm/common@44 -- $ pid=1369632 00:04:57.265 01:15:42 -- pm/common@50 -- $ kill -TERM 1369632 00:04:57.265 01:15:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:57.265 01:15:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:57.265 01:15:42 -- pm/common@44 -- $ pid=1369662 00:04:57.265 01:15:42 -- pm/common@50 -- $ sudo -E kill -TERM 1369662 00:04:57.524 01:15:42 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:57.524 01:15:42 -- common/autotest_common.sh@1691 -- # lcov --version 00:04:57.524 01:15:42 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:57.524 01:15:42 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:57.524 01:15:42 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:57.524 01:15:42 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:57.524 01:15:42 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:57.524 01:15:42 -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.524 01:15:42 -- scripts/common.sh@336 -- # read -ra ver1 00:04:57.524 01:15:42 -- scripts/common.sh@337 -- # IFS=.-: 00:04:57.524 01:15:42 -- scripts/common.sh@337 -- # read -ra ver2 00:04:57.524 01:15:42 -- scripts/common.sh@338 -- # local 'op=<' 00:04:57.524 01:15:42 -- scripts/common.sh@340 -- # ver1_l=2 00:04:57.524 01:15:42 -- scripts/common.sh@341 -- # ver2_l=1 00:04:57.524 01:15:42 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:57.524 01:15:42 -- scripts/common.sh@344 -- # case "$op" in 00:04:57.524 01:15:42 -- scripts/common.sh@345 -- # : 1 00:04:57.524 01:15:42 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:57.524 01:15:42 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:57.524 01:15:42 -- scripts/common.sh@365 -- # decimal 1 00:04:57.524 01:15:42 -- scripts/common.sh@353 -- # local d=1 00:04:57.524 01:15:42 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.524 01:15:42 -- scripts/common.sh@355 -- # echo 1 00:04:57.524 01:15:42 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:57.524 01:15:42 -- scripts/common.sh@366 -- # decimal 2 00:04:57.524 01:15:42 -- scripts/common.sh@353 -- # local d=2 00:04:57.524 01:15:42 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.524 01:15:42 -- scripts/common.sh@355 -- # echo 2 00:04:57.524 01:15:42 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:57.524 01:15:42 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:57.524 01:15:42 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:57.524 01:15:42 -- scripts/common.sh@368 -- # return 0 00:04:57.524 01:15:42 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.524 01:15:42 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:57.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.524 --rc genhtml_branch_coverage=1 00:04:57.524 --rc genhtml_function_coverage=1 00:04:57.524 --rc genhtml_legend=1 00:04:57.524 --rc geninfo_all_blocks=1 00:04:57.524 --rc geninfo_unexecuted_blocks=1 00:04:57.524 00:04:57.524 ' 00:04:57.524 01:15:42 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:57.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.524 --rc genhtml_branch_coverage=1 00:04:57.524 --rc genhtml_function_coverage=1 00:04:57.524 --rc genhtml_legend=1 00:04:57.524 --rc geninfo_all_blocks=1 00:04:57.524 --rc geninfo_unexecuted_blocks=1 00:04:57.524 00:04:57.524 ' 00:04:57.524 01:15:42 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:57.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.524 --rc genhtml_branch_coverage=1 00:04:57.524 --rc genhtml_function_coverage=1 00:04:57.524 --rc genhtml_legend=1 00:04:57.524 --rc geninfo_all_blocks=1 00:04:57.524 --rc geninfo_unexecuted_blocks=1 00:04:57.524 00:04:57.524 ' 00:04:57.524 01:15:42 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:57.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.524 --rc genhtml_branch_coverage=1 00:04:57.524 --rc genhtml_function_coverage=1 00:04:57.524 --rc genhtml_legend=1 00:04:57.524 --rc geninfo_all_blocks=1 00:04:57.524 --rc geninfo_unexecuted_blocks=1 00:04:57.524 00:04:57.524 ' 00:04:57.524 01:15:42 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:57.524 01:15:42 -- nvmf/common.sh@7 -- # uname -s 00:04:57.524 01:15:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:57.524 01:15:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:57.524 01:15:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:57.524 01:15:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:57.524 01:15:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:57.524 01:15:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:57.524 01:15:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:57.524 01:15:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:57.524 01:15:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:57.524 01:15:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:57.524 01:15:43 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:57.524 01:15:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:57.524 01:15:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:57.524 01:15:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:57.524 01:15:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:57.524 01:15:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:57.524 01:15:43 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:57.524 01:15:43 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:57.524 01:15:43 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:57.524 01:15:43 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:57.524 01:15:43 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:57.525 01:15:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.525 01:15:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.525 01:15:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.525 01:15:43 -- paths/export.sh@5 -- # export PATH 00:04:57.525 01:15:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.525 01:15:43 -- nvmf/common.sh@51 -- # : 0 00:04:57.525 01:15:43 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:57.525 01:15:43 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:57.525 01:15:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:57.525 01:15:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:57.525 01:15:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:57.525 01:15:43 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:57.525 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:57.525 01:15:43 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:57.525 01:15:43 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:57.525 01:15:43 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:57.525 01:15:43 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:57.525 01:15:43 -- spdk/autotest.sh@32 -- # uname -s 00:04:57.525 01:15:43 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:57.525 01:15:43 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:57.525 01:15:43 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
00:04:57.525 01:15:43 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:57.525 01:15:43 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:57.525 01:15:43 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:57.525 01:15:43 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:57.525 01:15:43 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:57.525 01:15:43 -- spdk/autotest.sh@48 -- # udevadm_pid=1449905 00:04:57.525 01:15:43 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:57.525 01:15:43 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:57.525 01:15:43 -- pm/common@17 -- # local monitor 00:04:57.525 01:15:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:57.525 01:15:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:57.525 01:15:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:57.525 01:15:43 -- pm/common@21 -- # date +%s 00:04:57.525 01:15:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:57.525 01:15:43 -- pm/common@21 -- # date +%s 00:04:57.525 01:15:43 -- pm/common@25 -- # sleep 1 00:04:57.525 01:15:43 -- pm/common@21 -- # date +%s 00:04:57.525 01:15:43 -- pm/common@21 -- # date +%s 00:04:57.525 01:15:43 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728774943 00:04:57.525 01:15:43 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728774943 00:04:57.525 01:15:43 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728774943 00:04:57.525 01:15:43 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728774943 00:04:57.525 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728774943_collect-cpu-load.pm.log 00:04:57.525 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728774943_collect-vmstat.pm.log 00:04:57.525 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728774943_collect-cpu-temp.pm.log 00:04:57.525 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728774943_collect-bmc-pm.bmc.pm.log 00:04:58.460 01:15:44 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:58.460 01:15:44 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:58.460 01:15:44 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:58.460 01:15:44 -- common/autotest_common.sh@10 -- # set +x 00:04:58.460 01:15:44 -- spdk/autotest.sh@59 -- # create_test_list 00:04:58.460 01:15:44 -- common/autotest_common.sh@748 -- # xtrace_disable 00:04:58.460 01:15:44 -- common/autotest_common.sh@10 -- # set +x 00:04:58.718 01:15:44 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:58.718 01:15:44 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:58.718 01:15:44 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:58.718 01:15:44 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:58.718 01:15:44 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:58.718 01:15:44 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:58.718 01:15:44 -- common/autotest_common.sh@1455 -- # uname 00:04:58.718 01:15:44 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:58.718 01:15:44 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:58.718 01:15:44 -- common/autotest_common.sh@1475 -- # uname 00:04:58.718 01:15:44 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:58.718 01:15:44 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:58.718 01:15:44 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:58.718 lcov: LCOV version 1.15 00:04:58.718 01:15:44 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:05:16.787 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:16.787 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:05:38.702 01:16:21 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:38.702 01:16:21 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:38.702 01:16:21 -- common/autotest_common.sh@10 -- # set +x 00:05:38.702 01:16:21 -- spdk/autotest.sh@78 -- # rm -f 00:05:38.702 01:16:21 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:38.702 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:05:38.702 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:05:38.702 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:05:38.702 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:05:38.702 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:05:38.702 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:05:38.702 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:05:38.702 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:05:38.702 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:05:38.702 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:05:38.702 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:05:38.702 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:05:38.702 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:05:38.702 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:05:38.702 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:05:38.702 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:05:38.702 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:05:38.702 01:16:22 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:05:38.702 01:16:22 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:38.702 01:16:22 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:38.702 01:16:22 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:38.702 01:16:22 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:38.702 01:16:22 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:38.702 01:16:22 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:38.702 01:16:22 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:38.702 01:16:22 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:38.702 01:16:22 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:38.702 01:16:22 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:38.702 01:16:22 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:38.702 01:16:22 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:38.702 01:16:22 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:38.702 01:16:22 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:38.702 No valid GPT data, bailing 00:05:38.702 01:16:22 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:38.702 01:16:22 -- scripts/common.sh@394 -- # pt= 00:05:38.702 01:16:22 -- scripts/common.sh@395 -- # return 1 00:05:38.702 01:16:22 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:38.702 1+0 records in 00:05:38.702 1+0 records out 00:05:38.702 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00218624 s, 480 MB/s 00:05:38.702 01:16:22 -- spdk/autotest.sh@105 -- # sync 00:05:38.702 01:16:22 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:38.702 01:16:22 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:38.702 01:16:22 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:39.269 01:16:24 -- spdk/autotest.sh@111 -- # uname -s 00:05:39.269 01:16:24 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:39.269 01:16:24 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:39.269 01:16:24 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:40.203 Hugepages 00:05:40.203 node hugesize free / total 00:05:40.203 node0 1048576kB 0 / 0 00:05:40.204 node0 2048kB 0 / 0 00:05:40.204 node1 1048576kB 0 / 0 00:05:40.204 node1 2048kB 0 / 0 00:05:40.204 00:05:40.204 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:40.204 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:40.204 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:40.204 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:40.204 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:40.204 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:40.204 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:40.204 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:40.204 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:40.204 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:40.204 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:40.204 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:40.204 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:40.204 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:40.204 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:40.204 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:40.204 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:40.461 NVMe 0000:88:00.0 8086 0a54 1 
nvme nvme0 nvme0n1 00:05:40.461 01:16:25 -- spdk/autotest.sh@117 -- # uname -s 00:05:40.461 01:16:25 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:40.461 01:16:25 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:40.461 01:16:25 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:41.835 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:41.835 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:41.835 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:41.835 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:41.835 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:41.835 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:41.835 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:41.835 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:41.835 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:41.835 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:41.835 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:41.835 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:41.835 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:41.835 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:41.835 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:41.835 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:42.770 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:42.770 01:16:28 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:43.746 01:16:29 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:43.747 01:16:29 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:43.747 01:16:29 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:43.747 01:16:29 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:43.747 01:16:29 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:43.747 01:16:29 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:43.747 01:16:29 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:43.747 01:16:29 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:43.747 01:16:29 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:43.747 01:16:29 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:05:43.747 01:16:29 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:88:00.0 00:05:43.747 01:16:29 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:45.119 Waiting for block devices as requested 00:05:45.119 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:05:45.119 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:45.119 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:45.119 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:45.378 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:45.378 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:45.378 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:45.378 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:45.636 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:45.636 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:45.636 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:45.636 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:45.894 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:45.894 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:45.894 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:45.894 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:46.153 0000:80:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:05:46.153 01:16:31 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:46.153 01:16:31 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:05:46.153 01:16:31 -- common/autotest_common.sh@1485 -- # grep 0000:88:00.0/nvme/nvme 00:05:46.153 01:16:31 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:05:46.153 01:16:31 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:46.153 01:16:31 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:05:46.153 01:16:31 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:46.153 01:16:31 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:46.153 01:16:31 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:46.153 01:16:31 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:46.153 01:16:31 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:46.153 01:16:31 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:46.153 01:16:31 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:46.153 01:16:31 -- common/autotest_common.sh@1529 -- # oacs=' 0xf' 00:05:46.153 01:16:31 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:46.153 01:16:31 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:46.153 01:16:31 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:46.153 01:16:31 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:46.153 01:16:31 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:46.153 01:16:31 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:46.153 01:16:31 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:46.153 01:16:31 -- common/autotest_common.sh@1541 -- # continue 00:05:46.153 01:16:31 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:46.153 01:16:31 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:46.153 01:16:31 -- common/autotest_common.sh@10 -- # set +x 00:05:46.153 01:16:31 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:46.153 01:16:31 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:46.153 01:16:31 -- common/autotest_common.sh@10 -- # set +x 00:05:46.153 01:16:31 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:47.602 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:47.602 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:47.602 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:47.602 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:47.602 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:47.602 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:47.602 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:47.602 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:47.602 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:47.602 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:47.602 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:47.602 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:47.602 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:47.602 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:47.602 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:47.602 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:48.537 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:48.537 01:16:34 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:05:48.537 01:16:34 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:48.537 01:16:34 -- common/autotest_common.sh@10 -- # set +x 00:05:48.537 01:16:34 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:48.537 01:16:34 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:48.537 01:16:34 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:48.537 01:16:34 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:48.537 01:16:34 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:48.537 01:16:34 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:48.537 01:16:34 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:48.537 01:16:34 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:48.537 01:16:34 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:48.537 01:16:34 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:48.537 01:16:34 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:48.537 01:16:34 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:48.537 01:16:34 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:48.796 01:16:34 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:05:48.796 01:16:34 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:88:00.0 00:05:48.796 01:16:34 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:48.796 01:16:34 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:05:48.796 01:16:34 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:05:48.796 01:16:34 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:48.796 01:16:34 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:05:48.796 01:16:34 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:05:48.796 01:16:34 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:88:00.0 00:05:48.796 01:16:34 -- common/autotest_common.sh@1577 -- # [[ -z 0000:88:00.0 ]] 00:05:48.796 01:16:34 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=1460541 00:05:48.796 01:16:34 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:48.796 01:16:34 -- common/autotest_common.sh@1583 -- # waitforlisten 1460541 00:05:48.796 01:16:34 -- common/autotest_common.sh@831 -- # '[' -z 1460541 ']' 00:05:48.796 01:16:34 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.796 01:16:34 -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:48.796 01:16:34 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.796 01:16:34 -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:48.796 01:16:34 -- common/autotest_common.sh@10 -- # set +x 00:05:48.796 [2024-10-13 01:16:34.232720] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
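The opal_revert_cleanup path above leans on two small helpers: the oacs/unvmcap probe done with nvme-cli in the pre_cleanup step, and get_nvme_bdfs_by_id, which keeps only controllers whose PCI device ID matches 0x0a54. A rough stand-alone equivalent is sketched below; the gen_nvme.sh path and the 0x0a54 ID come from this run, and the bit-masking is an approximation of what the harness does rather than a copy of it.
  #!/usr/bin/env bash
  # Sketch: reproduce the two checks traced above outside the harness.
  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # workspace path from this run
  target_id=0x0a54                                            # device ID filtered on above

  # 1) namespace-management support: bit 3 of OACS from 'nvme id-ctrl'
  oacs=$(nvme id-ctrl /dev/nvme0 | grep oacs | cut -d: -f2)
  if (( oacs & 0x8 )); then
      echo "nvme0 supports namespace management"
  fi

  # 2) collect BDFs whose PCI device ID matches the target, like get_nvme_bdfs_by_id
  mapfile -t all_bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
  bdfs=()
  for bdf in "${all_bdfs[@]}"; do
      [[ $(cat "/sys/bus/pci/devices/$bdf/device") == "$target_id" ]] && bdfs+=("$bdf")
  done
  printf '%s\n' "${bdfs[@]}"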
00:05:48.796 [2024-10-13 01:16:34.232832] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1460541 ] 00:05:48.796 [2024-10-13 01:16:34.294072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.796 [2024-10-13 01:16:34.343941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.054 01:16:34 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:49.054 01:16:34 -- common/autotest_common.sh@864 -- # return 0 00:05:49.054 01:16:34 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:05:49.054 01:16:34 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:05:49.054 01:16:34 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:05:52.338 nvme0n1 00:05:52.338 01:16:37 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:52.599 [2024-10-13 01:16:37.956806] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:52.599 [2024-10-13 01:16:37.956853] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:52.599 request: 00:05:52.599 { 00:05:52.599 "nvme_ctrlr_name": "nvme0", 00:05:52.599 "password": "test", 00:05:52.599 "method": "bdev_nvme_opal_revert", 00:05:52.599 "req_id": 1 00:05:52.599 } 00:05:52.599 Got JSON-RPC error response 00:05:52.599 response: 00:05:52.599 { 00:05:52.599 "code": -32603, 00:05:52.599 "message": "Internal error" 00:05:52.599 } 00:05:52.599 01:16:37 -- common/autotest_common.sh@1589 -- # true 00:05:52.599 01:16:37 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:05:52.599 01:16:37 -- common/autotest_common.sh@1593 -- # killprocess 1460541 00:05:52.599 01:16:37 -- common/autotest_common.sh@950 -- # '[' -z 1460541 ']' 00:05:52.599 01:16:37 -- common/autotest_common.sh@954 -- # kill -0 1460541 00:05:52.599 01:16:37 -- common/autotest_common.sh@955 -- # uname 00:05:52.599 01:16:37 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:52.599 01:16:37 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1460541 00:05:52.599 01:16:38 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:52.599 01:16:38 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:52.599 01:16:38 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1460541' 00:05:52.599 killing process with pid 1460541 00:05:52.599 01:16:38 -- common/autotest_common.sh@969 -- # kill 1460541 00:05:52.599 01:16:38 -- common/autotest_common.sh@974 -- # wait 1460541 00:05:52.599 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.599 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.599 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.599 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.599 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.599 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.599 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.599 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.599 EAL: 
Unexpected size 0 of DMA remapping cleared instead
of 2097152 00:05:52.600 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.600 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.600 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.600 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.600 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.600 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.600 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.600 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.600 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:52.600 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:54.498 01:16:39 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:54.498 01:16:39 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:54.498 01:16:39 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:54.498 01:16:39 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:54.498 01:16:39 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:54.498 01:16:39 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:54.498 01:16:39 -- common/autotest_common.sh@10 -- # set +x 00:05:54.498 01:16:39 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:54.498 01:16:39 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:54.498 01:16:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:54.498 01:16:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.498 01:16:39 -- common/autotest_common.sh@10 -- # set +x 00:05:54.498 ************************************ 00:05:54.498 START TEST env 00:05:54.498 ************************************ 00:05:54.498 01:16:39 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:54.498 * Looking for test storage... 00:05:54.498 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:54.498 01:16:39 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:54.498 01:16:39 env -- common/autotest_common.sh@1691 -- # lcov --version 00:05:54.498 01:16:39 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:54.498 01:16:39 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:54.498 01:16:39 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:54.498 01:16:39 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:54.498 01:16:39 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:54.498 01:16:39 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.498 01:16:39 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:54.498 01:16:39 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:54.498 01:16:39 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:54.498 01:16:39 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:54.498 01:16:39 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:54.498 01:16:39 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:54.498 01:16:39 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:54.498 01:16:39 env -- scripts/common.sh@344 -- # case "$op" in 00:05:54.498 01:16:39 env -- scripts/common.sh@345 -- # : 1 00:05:54.498 01:16:39 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:54.498 01:16:39 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:54.498 01:16:39 env -- scripts/common.sh@365 -- # decimal 1 00:05:54.498 01:16:39 env -- scripts/common.sh@353 -- # local d=1 00:05:54.498 01:16:39 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.498 01:16:39 env -- scripts/common.sh@355 -- # echo 1 00:05:54.498 01:16:39 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:54.498 01:16:39 env -- scripts/common.sh@366 -- # decimal 2 00:05:54.498 01:16:39 env -- scripts/common.sh@353 -- # local d=2 00:05:54.498 01:16:39 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.498 01:16:39 env -- scripts/common.sh@355 -- # echo 2 00:05:54.498 01:16:39 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:54.498 01:16:39 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:54.498 01:16:39 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:54.498 01:16:39 env -- scripts/common.sh@368 -- # return 0 00:05:54.498 01:16:39 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.498 01:16:39 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:54.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.498 --rc genhtml_branch_coverage=1 00:05:54.498 --rc genhtml_function_coverage=1 00:05:54.498 --rc genhtml_legend=1 00:05:54.498 --rc geninfo_all_blocks=1 00:05:54.498 --rc geninfo_unexecuted_blocks=1 00:05:54.498 00:05:54.498 ' 00:05:54.498 01:16:39 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:54.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.498 --rc genhtml_branch_coverage=1 00:05:54.498 --rc genhtml_function_coverage=1 00:05:54.498 --rc genhtml_legend=1 00:05:54.498 --rc geninfo_all_blocks=1 00:05:54.498 --rc geninfo_unexecuted_blocks=1 00:05:54.498 00:05:54.498 ' 00:05:54.498 01:16:39 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:54.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.498 --rc genhtml_branch_coverage=1 00:05:54.498 --rc genhtml_function_coverage=1 00:05:54.498 --rc genhtml_legend=1 00:05:54.498 --rc geninfo_all_blocks=1 00:05:54.498 --rc geninfo_unexecuted_blocks=1 00:05:54.498 00:05:54.498 ' 00:05:54.498 01:16:39 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:54.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.498 --rc genhtml_branch_coverage=1 00:05:54.498 --rc genhtml_function_coverage=1 00:05:54.498 --rc genhtml_legend=1 00:05:54.498 --rc geninfo_all_blocks=1 00:05:54.498 --rc geninfo_unexecuted_blocks=1 00:05:54.498 00:05:54.498 ' 00:05:54.498 01:16:39 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:54.498 01:16:39 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:54.498 01:16:39 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.498 01:16:39 env -- common/autotest_common.sh@10 -- # set +x 00:05:54.498 ************************************ 00:05:54.498 START TEST env_memory 00:05:54.498 ************************************ 00:05:54.498 01:16:39 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:54.498 00:05:54.498 00:05:54.498 CUnit - A unit testing framework for C - Version 2.1-3 00:05:54.498 http://cunit.sourceforge.net/ 00:05:54.498 00:05:54.498 00:05:54.498 Suite: memory 00:05:54.498 Test: alloc and free memory map ...[2024-10-13 01:16:40.012676] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:54.498 passed 00:05:54.499 Test: mem map translation ...[2024-10-13 01:16:40.034712] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:54.499 [2024-10-13 01:16:40.034736] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:54.499 [2024-10-13 01:16:40.034796] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:54.499 [2024-10-13 01:16:40.034809] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:54.499 passed 00:05:54.757 Test: mem map registration ...[2024-10-13 01:16:40.081608] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:54.757 [2024-10-13 01:16:40.081636] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:54.758 passed 00:05:54.758 Test: mem map adjacent registrations ...passed 00:05:54.758 00:05:54.758 Run Summary: Type Total Ran Passed Failed Inactive 00:05:54.758 suites 1 1 n/a 0 0 00:05:54.758 tests 4 4 4 0 0 00:05:54.758 asserts 152 152 152 0 n/a 00:05:54.758 00:05:54.758 Elapsed time = 0.155 seconds 00:05:54.758 00:05:54.758 real 0m0.164s 00:05:54.758 user 0m0.147s 00:05:54.758 sys 0m0.016s 00:05:54.758 01:16:40 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:54.758 01:16:40 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:54.758 ************************************ 00:05:54.758 END TEST env_memory 00:05:54.758 ************************************ 00:05:54.758 01:16:40 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:54.758 01:16:40 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:54.758 01:16:40 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.758 01:16:40 env -- common/autotest_common.sh@10 -- # set +x 00:05:54.758 ************************************ 00:05:54.758 START TEST env_vtophys 00:05:54.758 ************************************ 00:05:54.758 01:16:40 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:54.758 EAL: lib.eal log level changed from notice to debug 00:05:54.758 EAL: Detected lcore 0 as core 0 on socket 0 00:05:54.758 EAL: Detected lcore 1 as core 1 on socket 0 00:05:54.758 EAL: Detected lcore 2 as core 2 on socket 0 00:05:54.758 EAL: Detected lcore 3 as core 3 on socket 0 00:05:54.758 EAL: Detected lcore 4 as core 4 on socket 0 00:05:54.758 EAL: Detected lcore 5 as core 5 on socket 0 00:05:54.758 EAL: Detected lcore 6 as core 8 on socket 0 00:05:54.758 EAL: Detected lcore 7 as core 9 on socket 0 00:05:54.758 EAL: Detected lcore 8 as core 10 on socket 0 00:05:54.758 EAL: Detected lcore 9 as core 11 on socket 0 00:05:54.758 EAL: Detected lcore 10 
as core 12 on socket 0 00:05:54.758 EAL: Detected lcore 11 as core 13 on socket 0 00:05:54.758 EAL: Detected lcore 12 as core 0 on socket 1 00:05:54.758 EAL: Detected lcore 13 as core 1 on socket 1 00:05:54.758 EAL: Detected lcore 14 as core 2 on socket 1 00:05:54.758 EAL: Detected lcore 15 as core 3 on socket 1 00:05:54.758 EAL: Detected lcore 16 as core 4 on socket 1 00:05:54.758 EAL: Detected lcore 17 as core 5 on socket 1 00:05:54.758 EAL: Detected lcore 18 as core 8 on socket 1 00:05:54.758 EAL: Detected lcore 19 as core 9 on socket 1 00:05:54.758 EAL: Detected lcore 20 as core 10 on socket 1 00:05:54.758 EAL: Detected lcore 21 as core 11 on socket 1 00:05:54.758 EAL: Detected lcore 22 as core 12 on socket 1 00:05:54.758 EAL: Detected lcore 23 as core 13 on socket 1 00:05:54.758 EAL: Detected lcore 24 as core 0 on socket 0 00:05:54.758 EAL: Detected lcore 25 as core 1 on socket 0 00:05:54.758 EAL: Detected lcore 26 as core 2 on socket 0 00:05:54.758 EAL: Detected lcore 27 as core 3 on socket 0 00:05:54.758 EAL: Detected lcore 28 as core 4 on socket 0 00:05:54.758 EAL: Detected lcore 29 as core 5 on socket 0 00:05:54.758 EAL: Detected lcore 30 as core 8 on socket 0 00:05:54.758 EAL: Detected lcore 31 as core 9 on socket 0 00:05:54.758 EAL: Detected lcore 32 as core 10 on socket 0 00:05:54.758 EAL: Detected lcore 33 as core 11 on socket 0 00:05:54.758 EAL: Detected lcore 34 as core 12 on socket 0 00:05:54.758 EAL: Detected lcore 35 as core 13 on socket 0 00:05:54.758 EAL: Detected lcore 36 as core 0 on socket 1 00:05:54.758 EAL: Detected lcore 37 as core 1 on socket 1 00:05:54.758 EAL: Detected lcore 38 as core 2 on socket 1 00:05:54.758 EAL: Detected lcore 39 as core 3 on socket 1 00:05:54.758 EAL: Detected lcore 40 as core 4 on socket 1 00:05:54.758 EAL: Detected lcore 41 as core 5 on socket 1 00:05:54.758 EAL: Detected lcore 42 as core 8 on socket 1 00:05:54.758 EAL: Detected lcore 43 as core 9 on socket 1 00:05:54.758 EAL: Detected lcore 44 as core 10 on socket 1 00:05:54.758 EAL: Detected lcore 45 as core 11 on socket 1 00:05:54.758 EAL: Detected lcore 46 as core 12 on socket 1 00:05:54.758 EAL: Detected lcore 47 as core 13 on socket 1 00:05:54.758 EAL: Maximum logical cores by configuration: 128 00:05:54.758 EAL: Detected CPU lcores: 48 00:05:54.758 EAL: Detected NUMA nodes: 2 00:05:54.758 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:54.758 EAL: Detected shared linkage of DPDK 00:05:54.758 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:54.758 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:54.758 EAL: Registered [vdev] bus. 
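The "Detected lcore N as core M on socket S" lines above are EAL reading the kernel's CPU topology; the same mapping can be dumped without DPDK from standard sysfs paths, roughly as sketched here.
  # Sketch: print the lcore -> core/socket mapping EAL reports above, straight from sysfs
  for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
      lcore=${cpu##*cpu}
      core=$(cat "$cpu/topology/core_id")
      socket=$(cat "$cpu/topology/physical_package_id")
      echo "lcore $lcore as core $core on socket $socket"
  done | sort -t' ' -k2,2n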
00:05:54.758 EAL: bus.vdev log level changed from disabled to notice 00:05:54.758 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:54.758 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:54.758 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:54.758 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:54.758 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:54.758 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:54.758 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:54.758 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:54.758 EAL: No shared files mode enabled, IPC will be disabled 00:05:54.758 EAL: No shared files mode enabled, IPC is disabled 00:05:54.758 EAL: Bus pci wants IOVA as 'DC' 00:05:54.758 EAL: Bus vdev wants IOVA as 'DC' 00:05:54.758 EAL: Buses did not request a specific IOVA mode. 00:05:54.758 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:54.758 EAL: Selected IOVA mode 'VA' 00:05:54.758 EAL: Probing VFIO support... 00:05:54.758 EAL: IOMMU type 1 (Type 1) is supported 00:05:54.758 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:54.758 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:54.758 EAL: VFIO support initialized 00:05:54.758 EAL: Ask a virtual area of 0x2e000 bytes 00:05:54.758 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:54.758 EAL: Setting up physically contiguous memory... 
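The memseg lists that follow are carved out of 2 MiB hugepages (hugepage_sz:2097152) on both NUMA nodes; the pools EAL draws from can be checked before a run with plain sysfs reads, roughly like this (standard kernel paths, nothing SPDK-specific).
  # Sketch: free/total 2 MiB hugepages per NUMA node (the pools behind the memseg lists below)
  for node in /sys/devices/system/node/node[0-9]*; do
      hp=$node/hugepages/hugepages-2048kB
      printf '%s: total=%s free=%s\n' "${node##*/}" "$(cat "$hp/nr_hugepages")" "$(cat "$hp/free_hugepages")"
  done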
00:05:54.758 EAL: Setting maximum number of open files to 524288 00:05:54.758 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:54.758 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:54.758 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:54.758 EAL: Ask a virtual area of 0x61000 bytes 00:05:54.758 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:54.758 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:54.758 EAL: Ask a virtual area of 0x400000000 bytes 00:05:54.758 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:54.758 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:54.758 EAL: Ask a virtual area of 0x61000 bytes 00:05:54.758 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:54.758 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:54.758 EAL: Ask a virtual area of 0x400000000 bytes 00:05:54.758 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:54.758 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:54.758 EAL: Ask a virtual area of 0x61000 bytes 00:05:54.758 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:54.758 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:54.758 EAL: Ask a virtual area of 0x400000000 bytes 00:05:54.758 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:54.758 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:54.758 EAL: Ask a virtual area of 0x61000 bytes 00:05:54.758 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:54.758 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:54.758 EAL: Ask a virtual area of 0x400000000 bytes 00:05:54.758 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:54.758 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:54.758 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:54.758 EAL: Ask a virtual area of 0x61000 bytes 00:05:54.758 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:54.758 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:54.758 EAL: Ask a virtual area of 0x400000000 bytes 00:05:54.758 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:54.758 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:54.758 EAL: Ask a virtual area of 0x61000 bytes 00:05:54.758 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:54.758 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:54.758 EAL: Ask a virtual area of 0x400000000 bytes 00:05:54.758 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:54.758 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:54.758 EAL: Ask a virtual area of 0x61000 bytes 00:05:54.758 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:54.758 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:54.758 EAL: Ask a virtual area of 0x400000000 bytes 00:05:54.758 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:54.758 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:54.758 EAL: Ask a virtual area of 0x61000 bytes 00:05:54.758 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:54.758 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:54.758 EAL: Ask a virtual area of 0x400000000 bytes 00:05:54.758 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:54.758 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:54.758 EAL: Hugepages will be freed exactly as allocated. 00:05:54.758 EAL: No shared files mode enabled, IPC is disabled 00:05:54.758 EAL: No shared files mode enabled, IPC is disabled 00:05:54.758 EAL: TSC frequency is ~2700000 KHz 00:05:54.758 EAL: Main lcore 0 is ready (tid=7ff42d7f1a00;cpuset=[0]) 00:05:54.758 EAL: Trying to obtain current memory policy. 00:05:54.758 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:54.758 EAL: Restoring previous memory policy: 0 00:05:54.758 EAL: request: mp_malloc_sync 00:05:54.758 EAL: No shared files mode enabled, IPC is disabled 00:05:54.758 EAL: Heap on socket 0 was expanded by 2MB 00:05:54.758 EAL: No shared files mode enabled, IPC is disabled 00:05:54.758 EAL: No shared files mode enabled, IPC is disabled 00:05:54.758 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:54.758 EAL: Mem event callback 'spdk:(nil)' registered 00:05:54.758 00:05:54.758 00:05:54.758 CUnit - A unit testing framework for C - Version 2.1-3 00:05:54.758 http://cunit.sourceforge.net/ 00:05:54.758 00:05:54.758 00:05:54.758 Suite: components_suite 00:05:54.758 Test: vtophys_malloc_test ...passed 00:05:54.759 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:54.759 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:54.759 EAL: Restoring previous memory policy: 4 00:05:54.759 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.759 EAL: request: mp_malloc_sync 00:05:54.759 EAL: No shared files mode enabled, IPC is disabled 00:05:54.759 EAL: Heap on socket 0 was expanded by 4MB 00:05:54.759 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.759 EAL: request: mp_malloc_sync 00:05:54.759 EAL: No shared files mode enabled, IPC is disabled 00:05:54.759 EAL: Heap on socket 0 was shrunk by 4MB 00:05:54.759 EAL: Trying to obtain current memory policy. 00:05:54.759 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:54.759 EAL: Restoring previous memory policy: 4 00:05:54.759 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.759 EAL: request: mp_malloc_sync 00:05:54.759 EAL: No shared files mode enabled, IPC is disabled 00:05:54.759 EAL: Heap on socket 0 was expanded by 6MB 00:05:54.759 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.759 EAL: request: mp_malloc_sync 00:05:54.759 EAL: No shared files mode enabled, IPC is disabled 00:05:54.759 EAL: Heap on socket 0 was shrunk by 6MB 00:05:54.759 EAL: Trying to obtain current memory policy. 00:05:54.759 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:54.759 EAL: Restoring previous memory policy: 4 00:05:54.759 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.759 EAL: request: mp_malloc_sync 00:05:54.759 EAL: No shared files mode enabled, IPC is disabled 00:05:54.759 EAL: Heap on socket 0 was expanded by 10MB 00:05:54.759 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.759 EAL: request: mp_malloc_sync 00:05:54.759 EAL: No shared files mode enabled, IPC is disabled 00:05:54.759 EAL: Heap on socket 0 was shrunk by 10MB 00:05:54.759 EAL: Trying to obtain current memory policy. 
00:05:54.759 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:54.759 EAL: Restoring previous memory policy: 4 00:05:54.759 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.759 EAL: request: mp_malloc_sync 00:05:54.759 EAL: No shared files mode enabled, IPC is disabled 00:05:54.759 EAL: Heap on socket 0 was expanded by 18MB 00:05:54.759 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.759 EAL: request: mp_malloc_sync 00:05:54.759 EAL: No shared files mode enabled, IPC is disabled 00:05:54.759 EAL: Heap on socket 0 was shrunk by 18MB 00:05:54.759 EAL: Trying to obtain current memory policy. 00:05:54.759 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:54.759 EAL: Restoring previous memory policy: 4 00:05:54.759 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.759 EAL: request: mp_malloc_sync 00:05:54.759 EAL: No shared files mode enabled, IPC is disabled 00:05:54.759 EAL: Heap on socket 0 was expanded by 34MB 00:05:54.759 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.759 EAL: request: mp_malloc_sync 00:05:54.759 EAL: No shared files mode enabled, IPC is disabled 00:05:54.759 EAL: Heap on socket 0 was shrunk by 34MB 00:05:54.759 EAL: Trying to obtain current memory policy. 00:05:54.759 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:54.759 EAL: Restoring previous memory policy: 4 00:05:54.759 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.759 EAL: request: mp_malloc_sync 00:05:54.759 EAL: No shared files mode enabled, IPC is disabled 00:05:54.759 EAL: Heap on socket 0 was expanded by 66MB 00:05:54.759 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.759 EAL: request: mp_malloc_sync 00:05:54.759 EAL: No shared files mode enabled, IPC is disabled 00:05:54.759 EAL: Heap on socket 0 was shrunk by 66MB 00:05:54.759 EAL: Trying to obtain current memory policy. 00:05:54.759 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:55.016 EAL: Restoring previous memory policy: 4 00:05:55.016 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.016 EAL: request: mp_malloc_sync 00:05:55.016 EAL: No shared files mode enabled, IPC is disabled 00:05:55.016 EAL: Heap on socket 0 was expanded by 130MB 00:05:55.016 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.016 EAL: request: mp_malloc_sync 00:05:55.016 EAL: No shared files mode enabled, IPC is disabled 00:05:55.016 EAL: Heap on socket 0 was shrunk by 130MB 00:05:55.016 EAL: Trying to obtain current memory policy. 00:05:55.016 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:55.016 EAL: Restoring previous memory policy: 4 00:05:55.016 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.016 EAL: request: mp_malloc_sync 00:05:55.016 EAL: No shared files mode enabled, IPC is disabled 00:05:55.016 EAL: Heap on socket 0 was expanded by 258MB 00:05:55.016 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.274 EAL: request: mp_malloc_sync 00:05:55.274 EAL: No shared files mode enabled, IPC is disabled 00:05:55.274 EAL: Heap on socket 0 was shrunk by 258MB 00:05:55.274 EAL: Trying to obtain current memory policy. 
00:05:55.274 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:55.274 EAL: Restoring previous memory policy: 4 00:05:55.274 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.274 EAL: request: mp_malloc_sync 00:05:55.274 EAL: No shared files mode enabled, IPC is disabled 00:05:55.274 EAL: Heap on socket 0 was expanded by 514MB 00:05:55.274 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.531 EAL: request: mp_malloc_sync 00:05:55.531 EAL: No shared files mode enabled, IPC is disabled 00:05:55.531 EAL: Heap on socket 0 was shrunk by 514MB 00:05:55.531 EAL: Trying to obtain current memory policy. 00:05:55.531 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:55.789 EAL: Restoring previous memory policy: 4 00:05:55.789 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.789 EAL: request: mp_malloc_sync 00:05:55.789 EAL: No shared files mode enabled, IPC is disabled 00:05:55.789 EAL: Heap on socket 0 was expanded by 1026MB 00:05:56.046 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.304 EAL: request: mp_malloc_sync 00:05:56.304 EAL: No shared files mode enabled, IPC is disabled 00:05:56.304 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:56.304 passed 00:05:56.304 00:05:56.304 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.304 suites 1 1 n/a 0 0 00:05:56.304 tests 2 2 2 0 0 00:05:56.304 asserts 497 497 497 0 n/a 00:05:56.304 00:05:56.304 Elapsed time = 1.397 seconds 00:05:56.304 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.304 EAL: request: mp_malloc_sync 00:05:56.304 EAL: No shared files mode enabled, IPC is disabled 00:05:56.304 EAL: Heap on socket 0 was shrunk by 2MB 00:05:56.304 EAL: No shared files mode enabled, IPC is disabled 00:05:56.304 EAL: No shared files mode enabled, IPC is disabled 00:05:56.304 EAL: No shared files mode enabled, IPC is disabled 00:05:56.304 00:05:56.304 real 0m1.520s 00:05:56.304 user 0m0.868s 00:05:56.304 sys 0m0.614s 00:05:56.304 01:16:41 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.304 01:16:41 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:56.304 ************************************ 00:05:56.304 END TEST env_vtophys 00:05:56.304 ************************************ 00:05:56.304 01:16:41 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:56.304 01:16:41 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:56.304 01:16:41 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.304 01:16:41 env -- common/autotest_common.sh@10 -- # set +x 00:05:56.304 ************************************ 00:05:56.304 START TEST env_pci 00:05:56.304 ************************************ 00:05:56.304 01:16:41 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:56.304 00:05:56.304 00:05:56.304 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.304 http://cunit.sourceforge.net/ 00:05:56.304 00:05:56.304 00:05:56.304 Suite: pci 00:05:56.304 Test: pci_hook ...[2024-10-13 01:16:41.760052] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1461442 has claimed it 00:05:56.304 EAL: Cannot find device (10000:00:01.0) 00:05:56.304 EAL: Failed to attach device on primary process 00:05:56.304 passed 00:05:56.304 00:05:56.304 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:56.304 suites 1 1 n/a 0 0 00:05:56.304 tests 1 1 1 0 0 00:05:56.304 asserts 25 25 25 0 n/a 00:05:56.304 00:05:56.304 Elapsed time = 0.021 seconds 00:05:56.304 00:05:56.304 real 0m0.033s 00:05:56.304 user 0m0.011s 00:05:56.304 sys 0m0.022s 00:05:56.304 01:16:41 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.304 01:16:41 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:56.304 ************************************ 00:05:56.304 END TEST env_pci 00:05:56.304 ************************************ 00:05:56.304 01:16:41 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:56.304 01:16:41 env -- env/env.sh@15 -- # uname 00:05:56.304 01:16:41 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:56.304 01:16:41 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:56.304 01:16:41 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:56.304 01:16:41 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:56.304 01:16:41 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.304 01:16:41 env -- common/autotest_common.sh@10 -- # set +x 00:05:56.304 ************************************ 00:05:56.305 START TEST env_dpdk_post_init 00:05:56.305 ************************************ 00:05:56.305 01:16:41 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:56.305 EAL: Detected CPU lcores: 48 00:05:56.305 EAL: Detected NUMA nodes: 2 00:05:56.305 EAL: Detected shared linkage of DPDK 00:05:56.305 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:56.305 EAL: Selected IOVA mode 'VA' 00:05:56.305 EAL: VFIO support initialized 00:05:56.305 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:56.564 EAL: Using IOMMU type 1 (Type 1) 00:05:56.564 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:56.564 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:56.564 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:56.564 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:56.564 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:56.564 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:56.564 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:56.564 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:56.564 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:56.564 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:56.564 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:56.564 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:56.564 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:56.564 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:56.564 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:05:56.564 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:57.500 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 
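Every "nvme -> vfio-pci" and "vfio-pci -> nvme" line in this log is scripts/setup.sh rebinding a PCI function; the binding for the controller just probed can be inspected and, if needed, handed back to the kernel driver by hand. The sysfs mechanics are sketched below for illustration only; setup.sh reset remains the supported route.
  # Sketch: check and revert the driver binding of the NVMe controller at 0000:88:00.0
  bdf=0000:88:00.0
  readlink -f "/sys/bus/pci/devices/$bdf/driver"                    # vfio-pci while SPDK owns it
  echo "$bdf" | sudo tee "/sys/bus/pci/devices/$bdf/driver/unbind" > /dev/null
  echo nvme   | sudo tee "/sys/bus/pci/devices/$bdf/driver_override" > /dev/null
  echo "$bdf" | sudo tee /sys/bus/pci/drivers_probe > /dev/null
  readlink -f "/sys/bus/pci/devices/$bdf/driver"                    # should now point at .../nvme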
00:06:00.779 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:06:00.779 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:06:00.779 Starting DPDK initialization... 00:06:00.779 Starting SPDK post initialization... 00:06:00.779 SPDK NVMe probe 00:06:00.779 Attaching to 0000:88:00.0 00:06:00.779 Attached to 0000:88:00.0 00:06:00.779 Cleaning up... 00:06:00.779 00:06:00.779 real 0m4.399s 00:06:00.779 user 0m3.296s 00:06:00.779 sys 0m0.158s 00:06:00.779 01:16:46 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.779 01:16:46 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:00.779 ************************************ 00:06:00.779 END TEST env_dpdk_post_init 00:06:00.779 ************************************ 00:06:00.779 01:16:46 env -- env/env.sh@26 -- # uname 00:06:00.779 01:16:46 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:00.779 01:16:46 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:00.779 01:16:46 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:00.779 01:16:46 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.779 01:16:46 env -- common/autotest_common.sh@10 -- # set +x 00:06:00.779 ************************************ 00:06:00.779 START TEST env_mem_callbacks 00:06:00.779 ************************************ 00:06:00.779 01:16:46 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:00.779 EAL: Detected CPU lcores: 48 00:06:00.779 EAL: Detected NUMA nodes: 2 00:06:00.779 EAL: Detected shared linkage of DPDK 00:06:00.779 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:00.779 EAL: Selected IOVA mode 'VA' 00:06:00.779 EAL: VFIO support initialized 00:06:00.779 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:00.779 00:06:00.779 00:06:00.779 CUnit - A unit testing framework for C - Version 2.1-3 00:06:00.779 http://cunit.sourceforge.net/ 00:06:00.779 00:06:00.779 00:06:00.779 Suite: memory 00:06:00.779 Test: test ... 
00:06:00.779 register 0x200000200000 2097152 00:06:00.779 malloc 3145728 00:06:00.779 register 0x200000400000 4194304 00:06:00.779 buf 0x200000500000 len 3145728 PASSED 00:06:00.779 malloc 64 00:06:00.779 buf 0x2000004fff40 len 64 PASSED 00:06:00.779 malloc 4194304 00:06:00.779 register 0x200000800000 6291456 00:06:00.779 buf 0x200000a00000 len 4194304 PASSED 00:06:00.779 free 0x200000500000 3145728 00:06:00.779 free 0x2000004fff40 64 00:06:00.779 unregister 0x200000400000 4194304 PASSED 00:06:00.779 free 0x200000a00000 4194304 00:06:00.779 unregister 0x200000800000 6291456 PASSED 00:06:00.779 malloc 8388608 00:06:00.779 register 0x200000400000 10485760 00:06:00.779 buf 0x200000600000 len 8388608 PASSED 00:06:00.779 free 0x200000600000 8388608 00:06:00.779 unregister 0x200000400000 10485760 PASSED 00:06:00.779 passed 00:06:00.779 00:06:00.779 Run Summary: Type Total Ran Passed Failed Inactive 00:06:00.779 suites 1 1 n/a 0 0 00:06:00.779 tests 1 1 1 0 0 00:06:00.779 asserts 15 15 15 0 n/a 00:06:00.779 00:06:00.779 Elapsed time = 0.005 seconds 00:06:00.779 00:06:00.779 real 0m0.048s 00:06:00.779 user 0m0.016s 00:06:00.779 sys 0m0.032s 00:06:00.779 01:16:46 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.780 01:16:46 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:00.780 ************************************ 00:06:00.780 END TEST env_mem_callbacks 00:06:00.780 ************************************ 00:06:00.780 00:06:00.780 real 0m6.526s 00:06:00.780 user 0m4.520s 00:06:00.780 sys 0m1.042s 00:06:00.780 01:16:46 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.780 01:16:46 env -- common/autotest_common.sh@10 -- # set +x 00:06:00.780 ************************************ 00:06:00.780 END TEST env 00:06:00.780 ************************************ 00:06:01.038 01:16:46 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:01.038 01:16:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.038 01:16:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.038 01:16:46 -- common/autotest_common.sh@10 -- # set +x 00:06:01.038 ************************************ 00:06:01.038 START TEST rpc 00:06:01.038 ************************************ 00:06:01.038 01:16:46 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:01.038 * Looking for test storage... 
00:06:01.038 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:01.038 01:16:46 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:01.038 01:16:46 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:06:01.038 01:16:46 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:01.038 01:16:46 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:01.038 01:16:46 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:01.038 01:16:46 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:01.038 01:16:46 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:01.038 01:16:46 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.038 01:16:46 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:01.038 01:16:46 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:01.038 01:16:46 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:01.038 01:16:46 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:01.038 01:16:46 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:01.038 01:16:46 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:01.038 01:16:46 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:01.038 01:16:46 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:01.038 01:16:46 rpc -- scripts/common.sh@345 -- # : 1 00:06:01.038 01:16:46 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:01.038 01:16:46 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:01.038 01:16:46 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:01.038 01:16:46 rpc -- scripts/common.sh@353 -- # local d=1 00:06:01.038 01:16:46 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.038 01:16:46 rpc -- scripts/common.sh@355 -- # echo 1 00:06:01.038 01:16:46 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:01.038 01:16:46 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:01.038 01:16:46 rpc -- scripts/common.sh@353 -- # local d=2 00:06:01.038 01:16:46 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.038 01:16:46 rpc -- scripts/common.sh@355 -- # echo 2 00:06:01.038 01:16:46 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:01.038 01:16:46 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:01.038 01:16:46 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:01.038 01:16:46 rpc -- scripts/common.sh@368 -- # return 0 00:06:01.038 01:16:46 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.038 01:16:46 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:01.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.038 --rc genhtml_branch_coverage=1 00:06:01.038 --rc genhtml_function_coverage=1 00:06:01.038 --rc genhtml_legend=1 00:06:01.038 --rc geninfo_all_blocks=1 00:06:01.038 --rc geninfo_unexecuted_blocks=1 00:06:01.038 00:06:01.038 ' 00:06:01.038 01:16:46 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:01.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.038 --rc genhtml_branch_coverage=1 00:06:01.038 --rc genhtml_function_coverage=1 00:06:01.038 --rc genhtml_legend=1 00:06:01.038 --rc geninfo_all_blocks=1 00:06:01.038 --rc geninfo_unexecuted_blocks=1 00:06:01.038 00:06:01.038 ' 00:06:01.038 01:16:46 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:01.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.038 --rc genhtml_branch_coverage=1 00:06:01.038 --rc genhtml_function_coverage=1 
00:06:01.038 --rc genhtml_legend=1 00:06:01.038 --rc geninfo_all_blocks=1 00:06:01.038 --rc geninfo_unexecuted_blocks=1 00:06:01.038 00:06:01.038 ' 00:06:01.038 01:16:46 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:01.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.038 --rc genhtml_branch_coverage=1 00:06:01.038 --rc genhtml_function_coverage=1 00:06:01.038 --rc genhtml_legend=1 00:06:01.038 --rc geninfo_all_blocks=1 00:06:01.038 --rc geninfo_unexecuted_blocks=1 00:06:01.038 00:06:01.038 ' 00:06:01.038 01:16:46 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1462228 00:06:01.038 01:16:46 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:01.038 01:16:46 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:01.038 01:16:46 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1462228 00:06:01.038 01:16:46 rpc -- common/autotest_common.sh@831 -- # '[' -z 1462228 ']' 00:06:01.038 01:16:46 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.038 01:16:46 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:01.038 01:16:46 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.038 01:16:46 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:01.038 01:16:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.038 [2024-10-13 01:16:46.580981] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:06:01.038 [2024-10-13 01:16:46.581064] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1462228 ] 00:06:01.296 [2024-10-13 01:16:46.638642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.296 [2024-10-13 01:16:46.683153] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:01.296 [2024-10-13 01:16:46.683213] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1462228' to capture a snapshot of events at runtime. 00:06:01.296 [2024-10-13 01:16:46.683236] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:01.296 [2024-10-13 01:16:46.683247] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:01.296 [2024-10-13 01:16:46.683256] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1462228 for offline analysis/debug. 
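The app_setup_trace notices above (printed because spdk_tgt was started with '-e bdev') describe how a tracepoint snapshot for this run could be captured. As a rough sketch only — the spdk_trace binary location and the copy destination below are assumptions, not taken from this log — the two options the notices mention would look roughly like:

    # Snapshot the live tracepoints of pid 1462228 (group mask 'bdev'); path assumes spdk_trace was built under build/bin
    ./build/bin/spdk_trace -s spdk_tgt -p 1462228
    # Or keep the raw shared-memory trace file for offline analysis (destination path is illustrative)
    cp /dev/shm/spdk_tgt_trace.pid1462228 /tmp/spdk_tgt_trace.pid1462228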
00:06:01.297 [2024-10-13 01:16:46.683808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.555 01:16:46 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:01.555 01:16:46 rpc -- common/autotest_common.sh@864 -- # return 0 00:06:01.555 01:16:46 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:01.555 01:16:46 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:01.555 01:16:46 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:01.555 01:16:46 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:01.555 01:16:46 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.555 01:16:46 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.555 01:16:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.555 ************************************ 00:06:01.555 START TEST rpc_integrity 00:06:01.555 ************************************ 00:06:01.555 01:16:46 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:01.555 01:16:46 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:01.555 01:16:46 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.555 01:16:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.555 01:16:46 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.555 01:16:46 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:01.555 01:16:46 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:01.555 01:16:47 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:01.555 01:16:47 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:01.555 01:16:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.555 01:16:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.555 01:16:47 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.555 01:16:47 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:01.555 01:16:47 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:01.555 01:16:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.555 01:16:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.555 01:16:47 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.555 01:16:47 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:01.555 { 00:06:01.555 "name": "Malloc0", 00:06:01.555 "aliases": [ 00:06:01.555 "0bc2d56d-1a6b-4ed6-b302-8992ab6145ef" 00:06:01.555 ], 00:06:01.555 "product_name": "Malloc disk", 00:06:01.555 "block_size": 512, 00:06:01.555 "num_blocks": 16384, 00:06:01.555 "uuid": "0bc2d56d-1a6b-4ed6-b302-8992ab6145ef", 00:06:01.555 "assigned_rate_limits": { 00:06:01.555 "rw_ios_per_sec": 0, 00:06:01.555 "rw_mbytes_per_sec": 0, 00:06:01.555 "r_mbytes_per_sec": 0, 00:06:01.555 "w_mbytes_per_sec": 0 00:06:01.555 }, 
00:06:01.555 "claimed": false, 00:06:01.555 "zoned": false, 00:06:01.555 "supported_io_types": { 00:06:01.555 "read": true, 00:06:01.555 "write": true, 00:06:01.555 "unmap": true, 00:06:01.555 "flush": true, 00:06:01.555 "reset": true, 00:06:01.555 "nvme_admin": false, 00:06:01.555 "nvme_io": false, 00:06:01.555 "nvme_io_md": false, 00:06:01.555 "write_zeroes": true, 00:06:01.555 "zcopy": true, 00:06:01.555 "get_zone_info": false, 00:06:01.555 "zone_management": false, 00:06:01.555 "zone_append": false, 00:06:01.555 "compare": false, 00:06:01.555 "compare_and_write": false, 00:06:01.555 "abort": true, 00:06:01.555 "seek_hole": false, 00:06:01.555 "seek_data": false, 00:06:01.555 "copy": true, 00:06:01.555 "nvme_iov_md": false 00:06:01.555 }, 00:06:01.555 "memory_domains": [ 00:06:01.555 { 00:06:01.555 "dma_device_id": "system", 00:06:01.555 "dma_device_type": 1 00:06:01.555 }, 00:06:01.555 { 00:06:01.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:01.555 "dma_device_type": 2 00:06:01.555 } 00:06:01.555 ], 00:06:01.555 "driver_specific": {} 00:06:01.555 } 00:06:01.555 ]' 00:06:01.555 01:16:47 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:01.555 01:16:47 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:01.555 01:16:47 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:01.555 01:16:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.555 01:16:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.555 [2024-10-13 01:16:47.075030] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:01.555 [2024-10-13 01:16:47.075074] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:01.555 [2024-10-13 01:16:47.075100] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1796990 00:06:01.555 [2024-10-13 01:16:47.075116] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:01.555 [2024-10-13 01:16:47.076665] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:01.555 [2024-10-13 01:16:47.076691] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:01.555 Passthru0 00:06:01.555 01:16:47 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.555 01:16:47 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:01.555 01:16:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.555 01:16:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.555 01:16:47 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.555 01:16:47 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:01.555 { 00:06:01.555 "name": "Malloc0", 00:06:01.555 "aliases": [ 00:06:01.555 "0bc2d56d-1a6b-4ed6-b302-8992ab6145ef" 00:06:01.555 ], 00:06:01.555 "product_name": "Malloc disk", 00:06:01.555 "block_size": 512, 00:06:01.555 "num_blocks": 16384, 00:06:01.555 "uuid": "0bc2d56d-1a6b-4ed6-b302-8992ab6145ef", 00:06:01.555 "assigned_rate_limits": { 00:06:01.555 "rw_ios_per_sec": 0, 00:06:01.555 "rw_mbytes_per_sec": 0, 00:06:01.555 "r_mbytes_per_sec": 0, 00:06:01.555 "w_mbytes_per_sec": 0 00:06:01.555 }, 00:06:01.555 "claimed": true, 00:06:01.555 "claim_type": "exclusive_write", 00:06:01.555 "zoned": false, 00:06:01.555 "supported_io_types": { 00:06:01.555 "read": true, 00:06:01.555 "write": true, 00:06:01.555 "unmap": true, 00:06:01.555 "flush": 
true, 00:06:01.555 "reset": true, 00:06:01.555 "nvme_admin": false, 00:06:01.555 "nvme_io": false, 00:06:01.555 "nvme_io_md": false, 00:06:01.555 "write_zeroes": true, 00:06:01.555 "zcopy": true, 00:06:01.555 "get_zone_info": false, 00:06:01.555 "zone_management": false, 00:06:01.555 "zone_append": false, 00:06:01.555 "compare": false, 00:06:01.555 "compare_and_write": false, 00:06:01.555 "abort": true, 00:06:01.555 "seek_hole": false, 00:06:01.555 "seek_data": false, 00:06:01.555 "copy": true, 00:06:01.555 "nvme_iov_md": false 00:06:01.555 }, 00:06:01.555 "memory_domains": [ 00:06:01.555 { 00:06:01.555 "dma_device_id": "system", 00:06:01.555 "dma_device_type": 1 00:06:01.555 }, 00:06:01.555 { 00:06:01.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:01.555 "dma_device_type": 2 00:06:01.555 } 00:06:01.555 ], 00:06:01.555 "driver_specific": {} 00:06:01.555 }, 00:06:01.555 { 00:06:01.555 "name": "Passthru0", 00:06:01.555 "aliases": [ 00:06:01.556 "b75e4cf7-6fbc-5921-bd6f-0bf8e15adacf" 00:06:01.556 ], 00:06:01.556 "product_name": "passthru", 00:06:01.556 "block_size": 512, 00:06:01.556 "num_blocks": 16384, 00:06:01.556 "uuid": "b75e4cf7-6fbc-5921-bd6f-0bf8e15adacf", 00:06:01.556 "assigned_rate_limits": { 00:06:01.556 "rw_ios_per_sec": 0, 00:06:01.556 "rw_mbytes_per_sec": 0, 00:06:01.556 "r_mbytes_per_sec": 0, 00:06:01.556 "w_mbytes_per_sec": 0 00:06:01.556 }, 00:06:01.556 "claimed": false, 00:06:01.556 "zoned": false, 00:06:01.556 "supported_io_types": { 00:06:01.556 "read": true, 00:06:01.556 "write": true, 00:06:01.556 "unmap": true, 00:06:01.556 "flush": true, 00:06:01.556 "reset": true, 00:06:01.556 "nvme_admin": false, 00:06:01.556 "nvme_io": false, 00:06:01.556 "nvme_io_md": false, 00:06:01.556 "write_zeroes": true, 00:06:01.556 "zcopy": true, 00:06:01.556 "get_zone_info": false, 00:06:01.556 "zone_management": false, 00:06:01.556 "zone_append": false, 00:06:01.556 "compare": false, 00:06:01.556 "compare_and_write": false, 00:06:01.556 "abort": true, 00:06:01.556 "seek_hole": false, 00:06:01.556 "seek_data": false, 00:06:01.556 "copy": true, 00:06:01.556 "nvme_iov_md": false 00:06:01.556 }, 00:06:01.556 "memory_domains": [ 00:06:01.556 { 00:06:01.556 "dma_device_id": "system", 00:06:01.556 "dma_device_type": 1 00:06:01.556 }, 00:06:01.556 { 00:06:01.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:01.556 "dma_device_type": 2 00:06:01.556 } 00:06:01.556 ], 00:06:01.556 "driver_specific": { 00:06:01.556 "passthru": { 00:06:01.556 "name": "Passthru0", 00:06:01.556 "base_bdev_name": "Malloc0" 00:06:01.556 } 00:06:01.556 } 00:06:01.556 } 00:06:01.556 ]' 00:06:01.556 01:16:47 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:01.814 01:16:47 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:01.814 01:16:47 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:01.814 01:16:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.814 01:16:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.814 01:16:47 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.814 01:16:47 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:01.814 01:16:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.814 01:16:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.814 01:16:47 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.814 01:16:47 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:06:01.814 01:16:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.814 01:16:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.814 01:16:47 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.814 01:16:47 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:01.814 01:16:47 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:01.814 01:16:47 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:01.814 00:06:01.814 real 0m0.229s 00:06:01.814 user 0m0.160s 00:06:01.814 sys 0m0.015s 00:06:01.814 01:16:47 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.814 01:16:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.814 ************************************ 00:06:01.814 END TEST rpc_integrity 00:06:01.814 ************************************ 00:06:01.814 01:16:47 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:01.814 01:16:47 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.814 01:16:47 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.814 01:16:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.814 ************************************ 00:06:01.814 START TEST rpc_plugins 00:06:01.814 ************************************ 00:06:01.814 01:16:47 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:06:01.814 01:16:47 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:01.814 01:16:47 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.814 01:16:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:01.814 01:16:47 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.814 01:16:47 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:01.814 01:16:47 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:01.814 01:16:47 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.814 01:16:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:01.814 01:16:47 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.814 01:16:47 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:01.814 { 00:06:01.814 "name": "Malloc1", 00:06:01.814 "aliases": [ 00:06:01.814 "22bed490-b525-48dd-a5ed-5e6dfa29bbb5" 00:06:01.814 ], 00:06:01.814 "product_name": "Malloc disk", 00:06:01.814 "block_size": 4096, 00:06:01.814 "num_blocks": 256, 00:06:01.814 "uuid": "22bed490-b525-48dd-a5ed-5e6dfa29bbb5", 00:06:01.814 "assigned_rate_limits": { 00:06:01.814 "rw_ios_per_sec": 0, 00:06:01.814 "rw_mbytes_per_sec": 0, 00:06:01.814 "r_mbytes_per_sec": 0, 00:06:01.814 "w_mbytes_per_sec": 0 00:06:01.814 }, 00:06:01.814 "claimed": false, 00:06:01.814 "zoned": false, 00:06:01.814 "supported_io_types": { 00:06:01.814 "read": true, 00:06:01.814 "write": true, 00:06:01.814 "unmap": true, 00:06:01.814 "flush": true, 00:06:01.814 "reset": true, 00:06:01.814 "nvme_admin": false, 00:06:01.814 "nvme_io": false, 00:06:01.814 "nvme_io_md": false, 00:06:01.814 "write_zeroes": true, 00:06:01.814 "zcopy": true, 00:06:01.814 "get_zone_info": false, 00:06:01.814 "zone_management": false, 00:06:01.814 "zone_append": false, 00:06:01.814 "compare": false, 00:06:01.814 "compare_and_write": false, 00:06:01.814 "abort": true, 00:06:01.814 "seek_hole": false, 00:06:01.814 "seek_data": false, 00:06:01.814 "copy": true, 00:06:01.814 "nvme_iov_md": false 
00:06:01.814 }, 00:06:01.814 "memory_domains": [ 00:06:01.814 { 00:06:01.814 "dma_device_id": "system", 00:06:01.814 "dma_device_type": 1 00:06:01.814 }, 00:06:01.814 { 00:06:01.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:01.814 "dma_device_type": 2 00:06:01.814 } 00:06:01.814 ], 00:06:01.814 "driver_specific": {} 00:06:01.814 } 00:06:01.814 ]' 00:06:01.814 01:16:47 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:01.814 01:16:47 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:01.814 01:16:47 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:01.814 01:16:47 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.814 01:16:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:01.814 01:16:47 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.814 01:16:47 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:01.814 01:16:47 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.814 01:16:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:01.814 01:16:47 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.814 01:16:47 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:01.814 01:16:47 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:01.814 01:16:47 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:01.814 00:06:01.814 real 0m0.112s 00:06:01.814 user 0m0.078s 00:06:01.814 sys 0m0.007s 00:06:01.814 01:16:47 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.814 01:16:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:01.814 ************************************ 00:06:01.814 END TEST rpc_plugins 00:06:01.814 ************************************ 00:06:01.814 01:16:47 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:01.814 01:16:47 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.814 01:16:47 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.814 01:16:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.072 ************************************ 00:06:02.072 START TEST rpc_trace_cmd_test 00:06:02.072 ************************************ 00:06:02.072 01:16:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:06:02.072 01:16:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:02.072 01:16:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:02.072 01:16:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.072 01:16:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:02.072 01:16:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.072 01:16:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:02.072 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1462228", 00:06:02.072 "tpoint_group_mask": "0x8", 00:06:02.072 "iscsi_conn": { 00:06:02.072 "mask": "0x2", 00:06:02.072 "tpoint_mask": "0x0" 00:06:02.072 }, 00:06:02.072 "scsi": { 00:06:02.072 "mask": "0x4", 00:06:02.072 "tpoint_mask": "0x0" 00:06:02.072 }, 00:06:02.072 "bdev": { 00:06:02.072 "mask": "0x8", 00:06:02.072 "tpoint_mask": "0xffffffffffffffff" 00:06:02.072 }, 00:06:02.072 "nvmf_rdma": { 00:06:02.072 "mask": "0x10", 00:06:02.072 "tpoint_mask": "0x0" 00:06:02.072 }, 00:06:02.072 "nvmf_tcp": { 00:06:02.072 "mask": "0x20", 00:06:02.072 
"tpoint_mask": "0x0" 00:06:02.072 }, 00:06:02.072 "ftl": { 00:06:02.072 "mask": "0x40", 00:06:02.072 "tpoint_mask": "0x0" 00:06:02.072 }, 00:06:02.072 "blobfs": { 00:06:02.072 "mask": "0x80", 00:06:02.072 "tpoint_mask": "0x0" 00:06:02.072 }, 00:06:02.072 "dsa": { 00:06:02.072 "mask": "0x200", 00:06:02.072 "tpoint_mask": "0x0" 00:06:02.072 }, 00:06:02.072 "thread": { 00:06:02.072 "mask": "0x400", 00:06:02.072 "tpoint_mask": "0x0" 00:06:02.072 }, 00:06:02.072 "nvme_pcie": { 00:06:02.072 "mask": "0x800", 00:06:02.072 "tpoint_mask": "0x0" 00:06:02.072 }, 00:06:02.072 "iaa": { 00:06:02.072 "mask": "0x1000", 00:06:02.072 "tpoint_mask": "0x0" 00:06:02.072 }, 00:06:02.072 "nvme_tcp": { 00:06:02.072 "mask": "0x2000", 00:06:02.072 "tpoint_mask": "0x0" 00:06:02.072 }, 00:06:02.072 "bdev_nvme": { 00:06:02.072 "mask": "0x4000", 00:06:02.072 "tpoint_mask": "0x0" 00:06:02.072 }, 00:06:02.072 "sock": { 00:06:02.072 "mask": "0x8000", 00:06:02.072 "tpoint_mask": "0x0" 00:06:02.072 }, 00:06:02.072 "blob": { 00:06:02.072 "mask": "0x10000", 00:06:02.072 "tpoint_mask": "0x0" 00:06:02.072 }, 00:06:02.072 "bdev_raid": { 00:06:02.072 "mask": "0x20000", 00:06:02.072 "tpoint_mask": "0x0" 00:06:02.072 }, 00:06:02.072 "scheduler": { 00:06:02.072 "mask": "0x40000", 00:06:02.072 "tpoint_mask": "0x0" 00:06:02.072 } 00:06:02.073 }' 00:06:02.073 01:16:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:02.073 01:16:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:02.073 01:16:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:02.073 01:16:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:02.073 01:16:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:02.073 01:16:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:02.073 01:16:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:02.073 01:16:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:02.073 01:16:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:02.073 01:16:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:02.073 00:06:02.073 real 0m0.200s 00:06:02.073 user 0m0.174s 00:06:02.073 sys 0m0.016s 00:06:02.073 01:16:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.073 01:16:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:02.073 ************************************ 00:06:02.073 END TEST rpc_trace_cmd_test 00:06:02.073 ************************************ 00:06:02.073 01:16:47 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:02.073 01:16:47 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:02.073 01:16:47 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:02.073 01:16:47 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.073 01:16:47 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.073 01:16:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.332 ************************************ 00:06:02.332 START TEST rpc_daemon_integrity 00:06:02.332 ************************************ 00:06:02.332 01:16:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:02.332 01:16:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:02.332 01:16:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.332 01:16:47 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.332 01:16:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.332 01:16:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:02.332 01:16:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:02.332 01:16:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:02.332 01:16:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:02.332 01:16:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.332 01:16:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.332 01:16:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.332 01:16:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:02.332 01:16:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:02.332 01:16:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.332 01:16:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.332 01:16:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.332 01:16:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:02.332 { 00:06:02.332 "name": "Malloc2", 00:06:02.332 "aliases": [ 00:06:02.332 "6e036c9b-1ab6-4830-a1c3-f2a802b03fe4" 00:06:02.332 ], 00:06:02.332 "product_name": "Malloc disk", 00:06:02.332 "block_size": 512, 00:06:02.332 "num_blocks": 16384, 00:06:02.332 "uuid": "6e036c9b-1ab6-4830-a1c3-f2a802b03fe4", 00:06:02.332 "assigned_rate_limits": { 00:06:02.332 "rw_ios_per_sec": 0, 00:06:02.332 "rw_mbytes_per_sec": 0, 00:06:02.332 "r_mbytes_per_sec": 0, 00:06:02.332 "w_mbytes_per_sec": 0 00:06:02.332 }, 00:06:02.332 "claimed": false, 00:06:02.332 "zoned": false, 00:06:02.332 "supported_io_types": { 00:06:02.332 "read": true, 00:06:02.332 "write": true, 00:06:02.332 "unmap": true, 00:06:02.332 "flush": true, 00:06:02.332 "reset": true, 00:06:02.332 "nvme_admin": false, 00:06:02.332 "nvme_io": false, 00:06:02.332 "nvme_io_md": false, 00:06:02.332 "write_zeroes": true, 00:06:02.332 "zcopy": true, 00:06:02.332 "get_zone_info": false, 00:06:02.332 "zone_management": false, 00:06:02.332 "zone_append": false, 00:06:02.332 "compare": false, 00:06:02.332 "compare_and_write": false, 00:06:02.332 "abort": true, 00:06:02.332 "seek_hole": false, 00:06:02.332 "seek_data": false, 00:06:02.332 "copy": true, 00:06:02.332 "nvme_iov_md": false 00:06:02.332 }, 00:06:02.332 "memory_domains": [ 00:06:02.332 { 00:06:02.332 "dma_device_id": "system", 00:06:02.332 "dma_device_type": 1 00:06:02.332 }, 00:06:02.332 { 00:06:02.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:02.332 "dma_device_type": 2 00:06:02.332 } 00:06:02.332 ], 00:06:02.332 "driver_specific": {} 00:06:02.332 } 00:06:02.332 ]' 00:06:02.332 01:16:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:02.332 01:16:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:02.332 01:16:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:02.332 01:16:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.332 01:16:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.332 [2024-10-13 01:16:47.765141] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:02.332 
[2024-10-13 01:16:47.765186] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:02.332 [2024-10-13 01:16:47.765211] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18c6c70 00:06:02.332 [2024-10-13 01:16:47.765227] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:02.332 [2024-10-13 01:16:47.766596] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:02.332 [2024-10-13 01:16:47.766624] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:02.332 Passthru0 00:06:02.332 01:16:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.332 01:16:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:02.332 01:16:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.332 01:16:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.332 01:16:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.332 01:16:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:02.332 { 00:06:02.332 "name": "Malloc2", 00:06:02.332 "aliases": [ 00:06:02.332 "6e036c9b-1ab6-4830-a1c3-f2a802b03fe4" 00:06:02.332 ], 00:06:02.332 "product_name": "Malloc disk", 00:06:02.332 "block_size": 512, 00:06:02.332 "num_blocks": 16384, 00:06:02.332 "uuid": "6e036c9b-1ab6-4830-a1c3-f2a802b03fe4", 00:06:02.332 "assigned_rate_limits": { 00:06:02.332 "rw_ios_per_sec": 0, 00:06:02.332 "rw_mbytes_per_sec": 0, 00:06:02.332 "r_mbytes_per_sec": 0, 00:06:02.332 "w_mbytes_per_sec": 0 00:06:02.332 }, 00:06:02.332 "claimed": true, 00:06:02.332 "claim_type": "exclusive_write", 00:06:02.332 "zoned": false, 00:06:02.332 "supported_io_types": { 00:06:02.332 "read": true, 00:06:02.332 "write": true, 00:06:02.332 "unmap": true, 00:06:02.332 "flush": true, 00:06:02.332 "reset": true, 00:06:02.332 "nvme_admin": false, 00:06:02.332 "nvme_io": false, 00:06:02.332 "nvme_io_md": false, 00:06:02.332 "write_zeroes": true, 00:06:02.332 "zcopy": true, 00:06:02.332 "get_zone_info": false, 00:06:02.332 "zone_management": false, 00:06:02.332 "zone_append": false, 00:06:02.332 "compare": false, 00:06:02.332 "compare_and_write": false, 00:06:02.332 "abort": true, 00:06:02.332 "seek_hole": false, 00:06:02.332 "seek_data": false, 00:06:02.332 "copy": true, 00:06:02.332 "nvme_iov_md": false 00:06:02.332 }, 00:06:02.332 "memory_domains": [ 00:06:02.332 { 00:06:02.332 "dma_device_id": "system", 00:06:02.332 "dma_device_type": 1 00:06:02.332 }, 00:06:02.332 { 00:06:02.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:02.332 "dma_device_type": 2 00:06:02.332 } 00:06:02.332 ], 00:06:02.332 "driver_specific": {} 00:06:02.332 }, 00:06:02.332 { 00:06:02.332 "name": "Passthru0", 00:06:02.332 "aliases": [ 00:06:02.332 "461cbacb-98ec-50b1-845f-71bf39529e61" 00:06:02.332 ], 00:06:02.332 "product_name": "passthru", 00:06:02.332 "block_size": 512, 00:06:02.332 "num_blocks": 16384, 00:06:02.332 "uuid": "461cbacb-98ec-50b1-845f-71bf39529e61", 00:06:02.332 "assigned_rate_limits": { 00:06:02.332 "rw_ios_per_sec": 0, 00:06:02.332 "rw_mbytes_per_sec": 0, 00:06:02.332 "r_mbytes_per_sec": 0, 00:06:02.332 "w_mbytes_per_sec": 0 00:06:02.332 }, 00:06:02.332 "claimed": false, 00:06:02.332 "zoned": false, 00:06:02.332 "supported_io_types": { 00:06:02.332 "read": true, 00:06:02.332 "write": true, 00:06:02.332 "unmap": true, 00:06:02.332 "flush": true, 00:06:02.332 "reset": true, 
00:06:02.332 "nvme_admin": false, 00:06:02.332 "nvme_io": false, 00:06:02.332 "nvme_io_md": false, 00:06:02.332 "write_zeroes": true, 00:06:02.332 "zcopy": true, 00:06:02.332 "get_zone_info": false, 00:06:02.332 "zone_management": false, 00:06:02.332 "zone_append": false, 00:06:02.332 "compare": false, 00:06:02.332 "compare_and_write": false, 00:06:02.332 "abort": true, 00:06:02.332 "seek_hole": false, 00:06:02.332 "seek_data": false, 00:06:02.332 "copy": true, 00:06:02.332 "nvme_iov_md": false 00:06:02.332 }, 00:06:02.332 "memory_domains": [ 00:06:02.332 { 00:06:02.332 "dma_device_id": "system", 00:06:02.332 "dma_device_type": 1 00:06:02.332 }, 00:06:02.332 { 00:06:02.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:02.332 "dma_device_type": 2 00:06:02.332 } 00:06:02.332 ], 00:06:02.332 "driver_specific": { 00:06:02.332 "passthru": { 00:06:02.332 "name": "Passthru0", 00:06:02.332 "base_bdev_name": "Malloc2" 00:06:02.332 } 00:06:02.332 } 00:06:02.332 } 00:06:02.332 ]' 00:06:02.332 01:16:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:02.332 01:16:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:02.332 01:16:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:02.332 01:16:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.332 01:16:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.332 01:16:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.332 01:16:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:02.332 01:16:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.332 01:16:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.332 01:16:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.332 01:16:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:02.332 01:16:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.332 01:16:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.332 01:16:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.332 01:16:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:02.332 01:16:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:02.332 01:16:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:02.332 00:06:02.332 real 0m0.226s 00:06:02.332 user 0m0.152s 00:06:02.332 sys 0m0.021s 00:06:02.332 01:16:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.332 01:16:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.332 ************************************ 00:06:02.332 END TEST rpc_daemon_integrity 00:06:02.332 ************************************ 00:06:02.332 01:16:47 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:02.332 01:16:47 rpc -- rpc/rpc.sh@84 -- # killprocess 1462228 00:06:02.332 01:16:47 rpc -- common/autotest_common.sh@950 -- # '[' -z 1462228 ']' 00:06:02.332 01:16:47 rpc -- common/autotest_common.sh@954 -- # kill -0 1462228 00:06:02.332 01:16:47 rpc -- common/autotest_common.sh@955 -- # uname 00:06:02.589 01:16:47 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:02.589 01:16:47 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1462228 
00:06:02.589 01:16:47 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:02.589 01:16:47 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:02.589 01:16:47 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1462228' 00:06:02.589 killing process with pid 1462228 00:06:02.589 01:16:47 rpc -- common/autotest_common.sh@969 -- # kill 1462228 00:06:02.589 01:16:47 rpc -- common/autotest_common.sh@974 -- # wait 1462228 00:06:02.846 00:06:02.846 real 0m1.946s 00:06:02.846 user 0m2.446s 00:06:02.846 sys 0m0.612s 00:06:02.846 01:16:48 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.846 01:16:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.846 ************************************ 00:06:02.846 END TEST rpc 00:06:02.846 ************************************ 00:06:02.846 01:16:48 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:02.846 01:16:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.846 01:16:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.846 01:16:48 -- common/autotest_common.sh@10 -- # set +x 00:06:02.846 ************************************ 00:06:02.846 START TEST skip_rpc 00:06:02.846 ************************************ 00:06:02.846 01:16:48 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:03.105 * Looking for test storage... 00:06:03.105 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:03.105 01:16:48 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:03.105 01:16:48 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:06:03.105 01:16:48 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:03.105 01:16:48 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:03.105 01:16:48 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.105 01:16:48 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.105 01:16:48 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.105 01:16:48 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.105 01:16:48 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.105 01:16:48 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.105 01:16:48 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.105 01:16:48 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.105 01:16:48 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.105 01:16:48 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.105 01:16:48 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.105 01:16:48 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:03.105 01:16:48 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:03.105 01:16:48 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.105 01:16:48 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:03.105 01:16:48 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:03.105 01:16:48 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:03.105 01:16:48 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.105 01:16:48 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:03.105 01:16:48 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.105 01:16:48 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:03.105 01:16:48 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:03.105 01:16:48 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.105 01:16:48 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:03.105 01:16:48 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.105 01:16:48 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.105 01:16:48 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.105 01:16:48 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:03.105 01:16:48 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.105 01:16:48 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:03.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.105 --rc genhtml_branch_coverage=1 00:06:03.105 --rc genhtml_function_coverage=1 00:06:03.105 --rc genhtml_legend=1 00:06:03.105 --rc geninfo_all_blocks=1 00:06:03.105 --rc geninfo_unexecuted_blocks=1 00:06:03.105 00:06:03.105 ' 00:06:03.105 01:16:48 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:03.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.105 --rc genhtml_branch_coverage=1 00:06:03.105 --rc genhtml_function_coverage=1 00:06:03.105 --rc genhtml_legend=1 00:06:03.105 --rc geninfo_all_blocks=1 00:06:03.105 --rc geninfo_unexecuted_blocks=1 00:06:03.105 00:06:03.105 ' 00:06:03.105 01:16:48 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:03.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.105 --rc genhtml_branch_coverage=1 00:06:03.105 --rc genhtml_function_coverage=1 00:06:03.105 --rc genhtml_legend=1 00:06:03.105 --rc geninfo_all_blocks=1 00:06:03.105 --rc geninfo_unexecuted_blocks=1 00:06:03.105 00:06:03.105 ' 00:06:03.105 01:16:48 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:03.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.105 --rc genhtml_branch_coverage=1 00:06:03.105 --rc genhtml_function_coverage=1 00:06:03.105 --rc genhtml_legend=1 00:06:03.105 --rc geninfo_all_blocks=1 00:06:03.105 --rc geninfo_unexecuted_blocks=1 00:06:03.105 00:06:03.105 ' 00:06:03.105 01:16:48 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:03.105 01:16:48 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:03.105 01:16:48 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:03.105 01:16:48 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:03.105 01:16:48 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.105 01:16:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.105 ************************************ 00:06:03.105 START TEST skip_rpc 00:06:03.105 ************************************ 00:06:03.105 01:16:48 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:06:03.105 
01:16:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1462558 00:06:03.105 01:16:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:03.105 01:16:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:03.105 01:16:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:03.105 [2024-10-13 01:16:48.601386] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:06:03.106 [2024-10-13 01:16:48.601468] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1462558 ] 00:06:03.106 [2024-10-13 01:16:48.662434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.364 [2024-10-13 01:16:48.713890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.627 01:16:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:08.627 01:16:53 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:08.627 01:16:53 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:08.627 01:16:53 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:08.627 01:16:53 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.627 01:16:53 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:08.627 01:16:53 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.627 01:16:53 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:08.627 01:16:53 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.627 01:16:53 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.627 01:16:53 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:08.627 01:16:53 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:08.627 01:16:53 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:08.627 01:16:53 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:08.627 01:16:53 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:08.627 01:16:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:08.627 01:16:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1462558 00:06:08.627 01:16:53 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 1462558 ']' 00:06:08.627 01:16:53 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 1462558 00:06:08.627 01:16:53 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:08.627 01:16:53 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:08.627 01:16:53 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1462558 00:06:08.627 01:16:53 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:08.627 01:16:53 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:08.627 01:16:53 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1462558' 00:06:08.627 killing process with pid 1462558 00:06:08.627 01:16:53 
skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 1462558 00:06:08.627 01:16:53 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 1462558 00:06:08.627 00:06:08.627 real 0m5.427s 00:06:08.627 user 0m5.120s 00:06:08.627 sys 0m0.321s 00:06:08.627 01:16:53 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.627 01:16:53 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.627 ************************************ 00:06:08.627 END TEST skip_rpc 00:06:08.627 ************************************ 00:06:08.627 01:16:54 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:08.627 01:16:54 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.627 01:16:54 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.627 01:16:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.627 ************************************ 00:06:08.627 START TEST skip_rpc_with_json 00:06:08.627 ************************************ 00:06:08.627 01:16:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:08.627 01:16:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:08.627 01:16:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1463245 00:06:08.627 01:16:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:08.627 01:16:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:08.627 01:16:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1463245 00:06:08.627 01:16:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 1463245 ']' 00:06:08.627 01:16:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.627 01:16:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:08.627 01:16:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.627 01:16:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:08.627 01:16:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:08.627 [2024-10-13 01:16:54.080132] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
00:06:08.627 [2024-10-13 01:16:54.080199] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1463245 ] 00:06:08.627 [2024-10-13 01:16:54.139014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.628 [2024-10-13 01:16:54.189201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.886 01:16:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:08.886 01:16:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:08.886 01:16:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:08.886 01:16:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.886 01:16:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:09.145 [2024-10-13 01:16:54.467971] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:09.145 request: 00:06:09.145 { 00:06:09.145 "trtype": "tcp", 00:06:09.145 "method": "nvmf_get_transports", 00:06:09.145 "req_id": 1 00:06:09.145 } 00:06:09.145 Got JSON-RPC error response 00:06:09.145 response: 00:06:09.145 { 00:06:09.145 "code": -19, 00:06:09.145 "message": "No such device" 00:06:09.145 } 00:06:09.145 01:16:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:09.145 01:16:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:09.145 01:16:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.145 01:16:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:09.145 [2024-10-13 01:16:54.476100] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:09.145 01:16:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.145 01:16:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:09.145 01:16:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.145 01:16:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:09.145 01:16:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.145 01:16:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:09.145 { 00:06:09.145 "subsystems": [ 00:06:09.145 { 00:06:09.145 "subsystem": "fsdev", 00:06:09.145 "config": [ 00:06:09.145 { 00:06:09.145 "method": "fsdev_set_opts", 00:06:09.145 "params": { 00:06:09.145 "fsdev_io_pool_size": 65535, 00:06:09.145 "fsdev_io_cache_size": 256 00:06:09.145 } 00:06:09.145 } 00:06:09.145 ] 00:06:09.145 }, 00:06:09.145 { 00:06:09.145 "subsystem": "vfio_user_target", 00:06:09.145 "config": null 00:06:09.145 }, 00:06:09.145 { 00:06:09.145 "subsystem": "keyring", 00:06:09.145 "config": [] 00:06:09.145 }, 00:06:09.145 { 00:06:09.145 "subsystem": "iobuf", 00:06:09.145 "config": [ 00:06:09.145 { 00:06:09.145 "method": "iobuf_set_options", 00:06:09.145 "params": { 00:06:09.145 "small_pool_count": 8192, 00:06:09.145 "large_pool_count": 1024, 00:06:09.145 "small_bufsize": 8192, 00:06:09.145 "large_bufsize": 135168 00:06:09.145 } 00:06:09.145 } 00:06:09.145 ] 00:06:09.145 }, 00:06:09.145 { 
00:06:09.145 "subsystem": "sock", 00:06:09.145 "config": [ 00:06:09.145 { 00:06:09.145 "method": "sock_set_default_impl", 00:06:09.145 "params": { 00:06:09.145 "impl_name": "posix" 00:06:09.145 } 00:06:09.145 }, 00:06:09.145 { 00:06:09.145 "method": "sock_impl_set_options", 00:06:09.145 "params": { 00:06:09.145 "impl_name": "ssl", 00:06:09.145 "recv_buf_size": 4096, 00:06:09.145 "send_buf_size": 4096, 00:06:09.145 "enable_recv_pipe": true, 00:06:09.145 "enable_quickack": false, 00:06:09.145 "enable_placement_id": 0, 00:06:09.145 "enable_zerocopy_send_server": true, 00:06:09.145 "enable_zerocopy_send_client": false, 00:06:09.145 "zerocopy_threshold": 0, 00:06:09.145 "tls_version": 0, 00:06:09.145 "enable_ktls": false 00:06:09.145 } 00:06:09.145 }, 00:06:09.145 { 00:06:09.145 "method": "sock_impl_set_options", 00:06:09.145 "params": { 00:06:09.145 "impl_name": "posix", 00:06:09.145 "recv_buf_size": 2097152, 00:06:09.145 "send_buf_size": 2097152, 00:06:09.145 "enable_recv_pipe": true, 00:06:09.145 "enable_quickack": false, 00:06:09.145 "enable_placement_id": 0, 00:06:09.145 "enable_zerocopy_send_server": true, 00:06:09.145 "enable_zerocopy_send_client": false, 00:06:09.145 "zerocopy_threshold": 0, 00:06:09.145 "tls_version": 0, 00:06:09.145 "enable_ktls": false 00:06:09.145 } 00:06:09.145 } 00:06:09.145 ] 00:06:09.145 }, 00:06:09.145 { 00:06:09.145 "subsystem": "vmd", 00:06:09.145 "config": [] 00:06:09.145 }, 00:06:09.145 { 00:06:09.145 "subsystem": "accel", 00:06:09.145 "config": [ 00:06:09.145 { 00:06:09.145 "method": "accel_set_options", 00:06:09.145 "params": { 00:06:09.145 "small_cache_size": 128, 00:06:09.145 "large_cache_size": 16, 00:06:09.145 "task_count": 2048, 00:06:09.145 "sequence_count": 2048, 00:06:09.145 "buf_count": 2048 00:06:09.145 } 00:06:09.145 } 00:06:09.145 ] 00:06:09.145 }, 00:06:09.145 { 00:06:09.145 "subsystem": "bdev", 00:06:09.145 "config": [ 00:06:09.145 { 00:06:09.145 "method": "bdev_set_options", 00:06:09.145 "params": { 00:06:09.145 "bdev_io_pool_size": 65535, 00:06:09.145 "bdev_io_cache_size": 256, 00:06:09.145 "bdev_auto_examine": true, 00:06:09.145 "iobuf_small_cache_size": 128, 00:06:09.145 "iobuf_large_cache_size": 16 00:06:09.145 } 00:06:09.145 }, 00:06:09.145 { 00:06:09.145 "method": "bdev_raid_set_options", 00:06:09.145 "params": { 00:06:09.145 "process_window_size_kb": 1024, 00:06:09.145 "process_max_bandwidth_mb_sec": 0 00:06:09.145 } 00:06:09.145 }, 00:06:09.145 { 00:06:09.145 "method": "bdev_iscsi_set_options", 00:06:09.145 "params": { 00:06:09.145 "timeout_sec": 30 00:06:09.145 } 00:06:09.145 }, 00:06:09.145 { 00:06:09.145 "method": "bdev_nvme_set_options", 00:06:09.145 "params": { 00:06:09.145 "action_on_timeout": "none", 00:06:09.145 "timeout_us": 0, 00:06:09.145 "timeout_admin_us": 0, 00:06:09.145 "keep_alive_timeout_ms": 10000, 00:06:09.145 "arbitration_burst": 0, 00:06:09.145 "low_priority_weight": 0, 00:06:09.145 "medium_priority_weight": 0, 00:06:09.145 "high_priority_weight": 0, 00:06:09.145 "nvme_adminq_poll_period_us": 10000, 00:06:09.145 "nvme_ioq_poll_period_us": 0, 00:06:09.145 "io_queue_requests": 0, 00:06:09.145 "delay_cmd_submit": true, 00:06:09.145 "transport_retry_count": 4, 00:06:09.145 "bdev_retry_count": 3, 00:06:09.145 "transport_ack_timeout": 0, 00:06:09.145 "ctrlr_loss_timeout_sec": 0, 00:06:09.145 "reconnect_delay_sec": 0, 00:06:09.145 "fast_io_fail_timeout_sec": 0, 00:06:09.145 "disable_auto_failback": false, 00:06:09.145 "generate_uuids": false, 00:06:09.145 "transport_tos": 0, 00:06:09.145 "nvme_error_stat": false, 
00:06:09.145 "rdma_srq_size": 0, 00:06:09.145 "io_path_stat": false, 00:06:09.145 "allow_accel_sequence": false, 00:06:09.145 "rdma_max_cq_size": 0, 00:06:09.145 "rdma_cm_event_timeout_ms": 0, 00:06:09.145 "dhchap_digests": [ 00:06:09.145 "sha256", 00:06:09.145 "sha384", 00:06:09.145 "sha512" 00:06:09.145 ], 00:06:09.145 "dhchap_dhgroups": [ 00:06:09.145 "null", 00:06:09.145 "ffdhe2048", 00:06:09.145 "ffdhe3072", 00:06:09.145 "ffdhe4096", 00:06:09.145 "ffdhe6144", 00:06:09.145 "ffdhe8192" 00:06:09.145 ] 00:06:09.145 } 00:06:09.145 }, 00:06:09.145 { 00:06:09.145 "method": "bdev_nvme_set_hotplug", 00:06:09.145 "params": { 00:06:09.145 "period_us": 100000, 00:06:09.145 "enable": false 00:06:09.145 } 00:06:09.145 }, 00:06:09.145 { 00:06:09.145 "method": "bdev_wait_for_examine" 00:06:09.145 } 00:06:09.145 ] 00:06:09.145 }, 00:06:09.145 { 00:06:09.146 "subsystem": "scsi", 00:06:09.146 "config": null 00:06:09.146 }, 00:06:09.146 { 00:06:09.146 "subsystem": "scheduler", 00:06:09.146 "config": [ 00:06:09.146 { 00:06:09.146 "method": "framework_set_scheduler", 00:06:09.146 "params": { 00:06:09.146 "name": "static" 00:06:09.146 } 00:06:09.146 } 00:06:09.146 ] 00:06:09.146 }, 00:06:09.146 { 00:06:09.146 "subsystem": "vhost_scsi", 00:06:09.146 "config": [] 00:06:09.146 }, 00:06:09.146 { 00:06:09.146 "subsystem": "vhost_blk", 00:06:09.146 "config": [] 00:06:09.146 }, 00:06:09.146 { 00:06:09.146 "subsystem": "ublk", 00:06:09.146 "config": [] 00:06:09.146 }, 00:06:09.146 { 00:06:09.146 "subsystem": "nbd", 00:06:09.146 "config": [] 00:06:09.146 }, 00:06:09.146 { 00:06:09.146 "subsystem": "nvmf", 00:06:09.146 "config": [ 00:06:09.146 { 00:06:09.146 "method": "nvmf_set_config", 00:06:09.146 "params": { 00:06:09.146 "discovery_filter": "match_any", 00:06:09.146 "admin_cmd_passthru": { 00:06:09.146 "identify_ctrlr": false 00:06:09.146 }, 00:06:09.146 "dhchap_digests": [ 00:06:09.146 "sha256", 00:06:09.146 "sha384", 00:06:09.146 "sha512" 00:06:09.146 ], 00:06:09.146 "dhchap_dhgroups": [ 00:06:09.146 "null", 00:06:09.146 "ffdhe2048", 00:06:09.146 "ffdhe3072", 00:06:09.146 "ffdhe4096", 00:06:09.146 "ffdhe6144", 00:06:09.146 "ffdhe8192" 00:06:09.146 ] 00:06:09.146 } 00:06:09.146 }, 00:06:09.146 { 00:06:09.146 "method": "nvmf_set_max_subsystems", 00:06:09.146 "params": { 00:06:09.146 "max_subsystems": 1024 00:06:09.146 } 00:06:09.146 }, 00:06:09.146 { 00:06:09.146 "method": "nvmf_set_crdt", 00:06:09.146 "params": { 00:06:09.146 "crdt1": 0, 00:06:09.146 "crdt2": 0, 00:06:09.146 "crdt3": 0 00:06:09.146 } 00:06:09.146 }, 00:06:09.146 { 00:06:09.146 "method": "nvmf_create_transport", 00:06:09.146 "params": { 00:06:09.146 "trtype": "TCP", 00:06:09.146 "max_queue_depth": 128, 00:06:09.146 "max_io_qpairs_per_ctrlr": 127, 00:06:09.146 "in_capsule_data_size": 4096, 00:06:09.146 "max_io_size": 131072, 00:06:09.146 "io_unit_size": 131072, 00:06:09.146 "max_aq_depth": 128, 00:06:09.146 "num_shared_buffers": 511, 00:06:09.146 "buf_cache_size": 4294967295, 00:06:09.146 "dif_insert_or_strip": false, 00:06:09.146 "zcopy": false, 00:06:09.146 "c2h_success": true, 00:06:09.146 "sock_priority": 0, 00:06:09.146 "abort_timeout_sec": 1, 00:06:09.146 "ack_timeout": 0, 00:06:09.146 "data_wr_pool_size": 0 00:06:09.146 } 00:06:09.146 } 00:06:09.146 ] 00:06:09.146 }, 00:06:09.146 { 00:06:09.146 "subsystem": "iscsi", 00:06:09.146 "config": [ 00:06:09.146 { 00:06:09.146 "method": "iscsi_set_options", 00:06:09.146 "params": { 00:06:09.146 "node_base": "iqn.2016-06.io.spdk", 00:06:09.146 "max_sessions": 128, 00:06:09.146 
"max_connections_per_session": 2, 00:06:09.146 "max_queue_depth": 64, 00:06:09.146 "default_time2wait": 2, 00:06:09.146 "default_time2retain": 20, 00:06:09.146 "first_burst_length": 8192, 00:06:09.146 "immediate_data": true, 00:06:09.146 "allow_duplicated_isid": false, 00:06:09.146 "error_recovery_level": 0, 00:06:09.146 "nop_timeout": 60, 00:06:09.146 "nop_in_interval": 30, 00:06:09.146 "disable_chap": false, 00:06:09.146 "require_chap": false, 00:06:09.146 "mutual_chap": false, 00:06:09.146 "chap_group": 0, 00:06:09.146 "max_large_datain_per_connection": 64, 00:06:09.146 "max_r2t_per_connection": 4, 00:06:09.146 "pdu_pool_size": 36864, 00:06:09.146 "immediate_data_pool_size": 16384, 00:06:09.146 "data_out_pool_size": 2048 00:06:09.146 } 00:06:09.146 } 00:06:09.146 ] 00:06:09.146 } 00:06:09.146 ] 00:06:09.146 } 00:06:09.146 01:16:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:09.146 01:16:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1463245 00:06:09.146 01:16:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1463245 ']' 00:06:09.146 01:16:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1463245 00:06:09.146 01:16:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:09.146 01:16:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:09.146 01:16:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1463245 00:06:09.146 01:16:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:09.146 01:16:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:09.146 01:16:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1463245' 00:06:09.146 killing process with pid 1463245 00:06:09.146 01:16:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1463245 00:06:09.146 01:16:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1463245 00:06:09.711 01:16:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1463385 00:06:09.711 01:16:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:09.711 01:16:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:14.977 01:17:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1463385 00:06:14.977 01:17:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1463385 ']' 00:06:14.977 01:17:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1463385 00:06:14.977 01:17:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:14.977 01:17:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:14.977 01:17:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1463385 00:06:14.977 01:17:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:14.977 01:17:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:14.977 01:17:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 1463385' 00:06:14.977 killing process with pid 1463385 00:06:14.977 01:17:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1463385 00:06:14.977 01:17:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1463385 00:06:14.977 01:17:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:14.977 01:17:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:14.977 00:06:14.977 real 0m6.476s 00:06:14.977 user 0m6.085s 00:06:14.977 sys 0m0.708s 00:06:14.977 01:17:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.977 01:17:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:14.977 ************************************ 00:06:14.977 END TEST skip_rpc_with_json 00:06:14.977 ************************************ 00:06:14.977 01:17:00 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:14.977 01:17:00 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:14.977 01:17:00 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.977 01:17:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.977 ************************************ 00:06:14.977 START TEST skip_rpc_with_delay 00:06:14.977 ************************************ 00:06:14.977 01:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:14.977 01:17:00 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:14.977 01:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:15.236 01:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:15.236 01:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:15.236 01:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.236 01:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:15.236 01:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.236 01:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:15.236 01:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.236 01:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:15.236 01:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:15.236 01:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:15.236 [2024-10-13 
01:17:00.614736] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:06:15.236 01:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:15.236 01:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:15.236 01:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:15.236 01:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:15.236 00:06:15.236 real 0m0.076s 00:06:15.236 user 0m0.047s 00:06:15.236 sys 0m0.029s 00:06:15.236 01:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:15.236 01:17:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:15.236 ************************************ 00:06:15.236 END TEST skip_rpc_with_delay 00:06:15.236 ************************************ 00:06:15.236 01:17:00 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:15.236 01:17:00 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:15.236 01:17:00 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:15.236 01:17:00 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:15.236 01:17:00 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.236 01:17:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.236 ************************************ 00:06:15.236 START TEST exit_on_failed_rpc_init 00:06:15.236 ************************************ 00:06:15.236 01:17:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:15.236 01:17:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1464102 00:06:15.236 01:17:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:15.236 01:17:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1464102 00:06:15.236 01:17:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 1464102 ']' 00:06:15.236 01:17:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.236 01:17:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:15.236 01:17:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.236 01:17:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:15.236 01:17:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:15.236 [2024-10-13 01:17:00.729705] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
00:06:15.236 [2024-10-13 01:17:00.729796] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1464102 ] 00:06:15.236 [2024-10-13 01:17:00.790795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.495 [2024-10-13 01:17:00.844672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.752 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.752 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:15.752 01:17:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:15.752 01:17:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:15.752 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:15.752 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:15.753 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:15.753 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.753 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:15.753 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.753 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:15.753 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.753 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:15.753 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:15.753 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:15.753 [2024-10-13 01:17:01.186295] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:06:15.753 [2024-10-13 01:17:01.186395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1464149 ] 00:06:15.753 [2024-10-13 01:17:01.248695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.753 [2024-10-13 01:17:01.299099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.753 [2024-10-13 01:17:01.299234] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
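The RPC listen failure above is the intended outcome of the exit_on_failed_rpc_init test: a second spdk_tgt is launched while the first still owns the default RPC Unix socket, so rpc_listen refuses the address and the app stops itself (the remaining error lines and the non-zero exit status below). Outside of this negative test, two targets can run side by side on one host by giving each instance its own RPC socket with -r; a minimal sketch, with socket paths chosen here for illustration rather than taken from the log:

    # first target on core 0, second on core 1, each with a private RPC socket
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_a.sock &
    build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_b.sock &
    # address each instance explicitly through its own socket
    scripts/rpc.py -s /var/tmp/spdk_a.sock rpc_get_methods > /dev/null
    scripts/rpc.py -s /var/tmp/spdk_b.sock rpc_get_methods > /dev/null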
00:06:15.753 [2024-10-13 01:17:01.299257] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:15.753 [2024-10-13 01:17:01.299271] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:16.011 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:16.011 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:16.011 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:16.011 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:16.011 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:16.011 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:16.011 01:17:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:16.011 01:17:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1464102 00:06:16.011 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 1464102 ']' 00:06:16.011 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 1464102 00:06:16.011 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:16.011 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:16.011 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1464102 00:06:16.011 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:16.011 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:16.011 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1464102' 00:06:16.011 killing process with pid 1464102 00:06:16.011 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 1464102 00:06:16.011 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 1464102 00:06:16.269 00:06:16.269 real 0m1.122s 00:06:16.269 user 0m1.229s 00:06:16.269 sys 0m0.461s 00:06:16.269 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:16.269 01:17:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:16.269 ************************************ 00:06:16.269 END TEST exit_on_failed_rpc_init 00:06:16.269 ************************************ 00:06:16.269 01:17:01 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:16.269 00:06:16.269 real 0m13.440s 00:06:16.269 user 0m12.642s 00:06:16.269 sys 0m1.718s 00:06:16.269 01:17:01 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:16.269 01:17:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.269 ************************************ 00:06:16.269 END TEST skip_rpc 00:06:16.269 ************************************ 00:06:16.528 01:17:01 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:16.528 01:17:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:16.528 01:17:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:16.528 01:17:01 -- 
common/autotest_common.sh@10 -- # set +x 00:06:16.528 ************************************ 00:06:16.528 START TEST rpc_client 00:06:16.528 ************************************ 00:06:16.528 01:17:01 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:16.528 * Looking for test storage... 00:06:16.528 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:16.528 01:17:01 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:16.528 01:17:01 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:06:16.528 01:17:01 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:16.528 01:17:02 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:16.528 01:17:02 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.528 01:17:02 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.528 01:17:02 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.528 01:17:02 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.528 01:17:02 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.528 01:17:02 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.528 01:17:02 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.528 01:17:02 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.528 01:17:02 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.528 01:17:02 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.528 01:17:02 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.528 01:17:02 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:16.528 01:17:02 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:16.528 01:17:02 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.528 01:17:02 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:16.528 01:17:02 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:16.528 01:17:02 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:16.528 01:17:02 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.528 01:17:02 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:16.528 01:17:02 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.528 01:17:02 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:16.528 01:17:02 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:16.528 01:17:02 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.528 01:17:02 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:16.528 01:17:02 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.528 01:17:02 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.528 01:17:02 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.528 01:17:02 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:16.528 01:17:02 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.528 01:17:02 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:16.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.528 --rc genhtml_branch_coverage=1 00:06:16.528 --rc genhtml_function_coverage=1 00:06:16.528 --rc genhtml_legend=1 00:06:16.528 --rc geninfo_all_blocks=1 00:06:16.528 --rc geninfo_unexecuted_blocks=1 00:06:16.528 00:06:16.528 ' 00:06:16.528 01:17:02 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:16.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.528 --rc genhtml_branch_coverage=1 00:06:16.528 --rc genhtml_function_coverage=1 00:06:16.528 --rc genhtml_legend=1 00:06:16.529 --rc geninfo_all_blocks=1 00:06:16.529 --rc geninfo_unexecuted_blocks=1 00:06:16.529 00:06:16.529 ' 00:06:16.529 01:17:02 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:16.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.529 --rc genhtml_branch_coverage=1 00:06:16.529 --rc genhtml_function_coverage=1 00:06:16.529 --rc genhtml_legend=1 00:06:16.529 --rc geninfo_all_blocks=1 00:06:16.529 --rc geninfo_unexecuted_blocks=1 00:06:16.529 00:06:16.529 ' 00:06:16.529 01:17:02 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:16.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.529 --rc genhtml_branch_coverage=1 00:06:16.529 --rc genhtml_function_coverage=1 00:06:16.529 --rc genhtml_legend=1 00:06:16.529 --rc geninfo_all_blocks=1 00:06:16.529 --rc geninfo_unexecuted_blocks=1 00:06:16.529 00:06:16.529 ' 00:06:16.529 01:17:02 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:16.529 OK 00:06:16.529 01:17:02 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:16.529 00:06:16.529 real 0m0.155s 00:06:16.529 user 0m0.102s 00:06:16.529 sys 0m0.061s 00:06:16.529 01:17:02 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:16.529 01:17:02 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:16.529 ************************************ 00:06:16.529 END TEST rpc_client 00:06:16.529 ************************************ 00:06:16.529 01:17:02 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
00:06:16.529 01:17:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:16.529 01:17:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:16.529 01:17:02 -- common/autotest_common.sh@10 -- # set +x 00:06:16.529 ************************************ 00:06:16.529 START TEST json_config 00:06:16.529 ************************************ 00:06:16.529 01:17:02 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:16.788 01:17:02 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:16.788 01:17:02 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:06:16.788 01:17:02 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:16.788 01:17:02 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:16.788 01:17:02 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.788 01:17:02 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.788 01:17:02 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.788 01:17:02 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.788 01:17:02 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.788 01:17:02 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.788 01:17:02 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.788 01:17:02 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.788 01:17:02 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.788 01:17:02 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.788 01:17:02 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.788 01:17:02 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:16.788 01:17:02 json_config -- scripts/common.sh@345 -- # : 1 00:06:16.788 01:17:02 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.788 01:17:02 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:16.788 01:17:02 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:16.788 01:17:02 json_config -- scripts/common.sh@353 -- # local d=1 00:06:16.788 01:17:02 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.788 01:17:02 json_config -- scripts/common.sh@355 -- # echo 1 00:06:16.788 01:17:02 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.788 01:17:02 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:16.788 01:17:02 json_config -- scripts/common.sh@353 -- # local d=2 00:06:16.788 01:17:02 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.788 01:17:02 json_config -- scripts/common.sh@355 -- # echo 2 00:06:16.788 01:17:02 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.788 01:17:02 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.788 01:17:02 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.788 01:17:02 json_config -- scripts/common.sh@368 -- # return 0 00:06:16.788 01:17:02 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.788 01:17:02 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:16.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.788 --rc genhtml_branch_coverage=1 00:06:16.788 --rc genhtml_function_coverage=1 00:06:16.788 --rc genhtml_legend=1 00:06:16.788 --rc geninfo_all_blocks=1 00:06:16.788 --rc geninfo_unexecuted_blocks=1 00:06:16.788 00:06:16.788 ' 00:06:16.788 01:17:02 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:16.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.788 --rc genhtml_branch_coverage=1 00:06:16.788 --rc genhtml_function_coverage=1 00:06:16.788 --rc genhtml_legend=1 00:06:16.788 --rc geninfo_all_blocks=1 00:06:16.788 --rc geninfo_unexecuted_blocks=1 00:06:16.788 00:06:16.788 ' 00:06:16.788 01:17:02 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:16.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.788 --rc genhtml_branch_coverage=1 00:06:16.788 --rc genhtml_function_coverage=1 00:06:16.788 --rc genhtml_legend=1 00:06:16.788 --rc geninfo_all_blocks=1 00:06:16.788 --rc geninfo_unexecuted_blocks=1 00:06:16.788 00:06:16.788 ' 00:06:16.788 01:17:02 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:16.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.788 --rc genhtml_branch_coverage=1 00:06:16.788 --rc genhtml_function_coverage=1 00:06:16.788 --rc genhtml_legend=1 00:06:16.788 --rc geninfo_all_blocks=1 00:06:16.788 --rc geninfo_unexecuted_blocks=1 00:06:16.788 00:06:16.788 ' 00:06:16.788 01:17:02 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:16.788 01:17:02 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:16.788 01:17:02 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:16.788 01:17:02 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:16.788 01:17:02 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:16.788 01:17:02 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:16.788 01:17:02 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:16.788 01:17:02 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:16.788 01:17:02 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:06:16.788 01:17:02 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:16.788 01:17:02 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:16.788 01:17:02 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:16.788 01:17:02 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:16.788 01:17:02 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:16.788 01:17:02 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:16.788 01:17:02 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:16.788 01:17:02 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:16.788 01:17:02 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:16.788 01:17:02 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:16.788 01:17:02 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:16.788 01:17:02 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:16.788 01:17:02 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:16.788 01:17:02 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:16.788 01:17:02 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.788 01:17:02 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.788 01:17:02 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.788 01:17:02 json_config -- paths/export.sh@5 -- # export PATH 00:06:16.788 01:17:02 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.788 01:17:02 json_config -- nvmf/common.sh@51 -- # : 0 00:06:16.788 01:17:02 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:16.788 01:17:02 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
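The NVME_* variables sourced just above (the host NQN and host ID produced by nvme gen-hostnqn, the NVME_HOST argument array, NVME_CONNECT and NVME_SUBNQN) are not exercised by the json_config test itself; they exist so that initiator-side nvmf tests can attach to a target with a stable host identity. A rough sketch of how they are typically consumed, assuming test/nvmf/common.sh has been sourced and a target is listening on 127.0.0.1:$NVMF_PORT:

    # attach the kernel initiator using the generated host NQN/ID
    $NVME_CONNECT -t tcp -a 127.0.0.1 -s "$NVMF_PORT" -n "$NVME_SUBNQN" "${NVME_HOST[@]}"
    # detach again when the test is done
    nvme disconnect -n "$NVME_SUBNQN"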
00:06:16.788 01:17:02 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:16.788 01:17:02 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:16.788 01:17:02 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:16.788 01:17:02 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:16.788 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:16.788 01:17:02 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:16.788 01:17:02 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:16.788 01:17:02 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:16.788 01:17:02 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:16.788 01:17:02 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:16.788 01:17:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:16.788 01:17:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:16.789 01:17:02 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:16.789 01:17:02 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:16.789 01:17:02 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:16.789 01:17:02 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:16.789 01:17:02 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:16.789 01:17:02 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:16.789 01:17:02 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:16.789 01:17:02 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:16.789 01:17:02 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:16.789 01:17:02 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:16.789 01:17:02 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:16.789 01:17:02 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:16.789 INFO: JSON configuration test init 00:06:16.789 01:17:02 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:16.789 01:17:02 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:16.789 01:17:02 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:16.789 01:17:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.789 01:17:02 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:16.789 01:17:02 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:16.789 01:17:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.789 01:17:02 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:16.789 01:17:02 json_config -- 
json_config/common.sh@9 -- # local app=target 00:06:16.789 01:17:02 json_config -- json_config/common.sh@10 -- # shift 00:06:16.789 01:17:02 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:16.789 01:17:02 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:16.789 01:17:02 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:16.789 01:17:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:16.789 01:17:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:16.789 01:17:02 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1464391 00:06:16.789 01:17:02 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:16.789 01:17:02 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:16.789 Waiting for target to run... 00:06:16.789 01:17:02 json_config -- json_config/common.sh@25 -- # waitforlisten 1464391 /var/tmp/spdk_tgt.sock 00:06:16.789 01:17:02 json_config -- common/autotest_common.sh@831 -- # '[' -z 1464391 ']' 00:06:16.789 01:17:02 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:16.789 01:17:02 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:16.789 01:17:02 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:16.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:16.789 01:17:02 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:16.789 01:17:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.789 [2024-10-13 01:17:02.281397] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
00:06:16.789 [2024-10-13 01:17:02.281520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1464391 ] 00:06:17.356 [2024-10-13 01:17:02.794390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.356 [2024-10-13 01:17:02.840222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.921 01:17:03 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.921 01:17:03 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:17.921 01:17:03 json_config -- json_config/common.sh@26 -- # echo '' 00:06:17.921 00:06:17.921 01:17:03 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:17.921 01:17:03 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:17.922 01:17:03 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:17.922 01:17:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:17.922 01:17:03 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:17.922 01:17:03 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:17.922 01:17:03 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:17.922 01:17:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:17.922 01:17:03 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:17.922 01:17:03 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:17.922 01:17:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:21.203 01:17:06 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:06:21.203 01:17:06 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:21.203 01:17:06 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:21.203 01:17:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:21.203 01:17:06 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:21.203 01:17:06 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:21.203 01:17:06 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:21.203 01:17:06 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:21.203 01:17:06 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:21.203 01:17:06 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:21.203 01:17:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:21.203 01:17:06 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:21.203 01:17:06 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:21.203 01:17:06 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:21.203 01:17:06 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:06:21.203 01:17:06 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:21.203 01:17:06 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:21.203 01:17:06 json_config -- json_config/json_config.sh@54 -- # sort 00:06:21.203 01:17:06 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:21.203 01:17:06 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:06:21.203 01:17:06 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:21.203 01:17:06 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:21.203 01:17:06 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:21.203 01:17:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:21.460 01:17:06 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:21.460 01:17:06 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:21.460 01:17:06 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:21.460 01:17:06 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:21.460 01:17:06 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:21.460 01:17:06 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:21.460 01:17:06 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:21.460 01:17:06 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:21.460 01:17:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:21.460 01:17:06 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:21.460 01:17:06 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:21.460 01:17:06 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:21.460 01:17:06 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:21.460 01:17:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:21.718 MallocForNvmf0 00:06:21.718 01:17:07 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:21.718 01:17:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:21.976 MallocForNvmf1 00:06:21.976 01:17:07 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:21.976 01:17:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:22.234 [2024-10-13 01:17:07.612119] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:22.234 01:17:07 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:22.234 01:17:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:22.493 01:17:07 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:22.493 01:17:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:22.752 01:17:08 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:22.752 01:17:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:23.010 01:17:08 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:23.010 01:17:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:23.268 [2024-10-13 01:17:08.715668] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:23.268 01:17:08 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:23.268 01:17:08 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:23.268 01:17:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:23.268 01:17:08 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:23.268 01:17:08 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:23.268 01:17:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:23.268 01:17:08 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:23.268 01:17:08 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:23.268 01:17:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:23.526 MallocBdevForConfigChangeCheck 00:06:23.526 01:17:09 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:23.526 01:17:09 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:23.526 01:17:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:23.526 01:17:09 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:23.526 01:17:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:24.090 01:17:09 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:06:24.090 INFO: shutting down applications... 
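Everything the target is about to tear down was configured through the RPC socket in the steps above and captured by the final save_config call. Replayed by hand against a freshly started target, the same NVMe-oF setup reduces to a short rpc.py sequence (the rpc.py and output paths are shortened here for readability; the calls themselves are the ones traced in this log):

    scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
    # serialize the resulting runtime configuration for the relaunch that follows
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json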
00:06:24.090 01:17:09 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:24.090 01:17:09 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:24.090 01:17:09 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:24.090 01:17:09 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:25.990 Calling clear_iscsi_subsystem 00:06:25.990 Calling clear_nvmf_subsystem 00:06:25.990 Calling clear_nbd_subsystem 00:06:25.990 Calling clear_ublk_subsystem 00:06:25.990 Calling clear_vhost_blk_subsystem 00:06:25.990 Calling clear_vhost_scsi_subsystem 00:06:25.990 Calling clear_bdev_subsystem 00:06:25.990 01:17:11 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:25.990 01:17:11 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:25.990 01:17:11 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:25.990 01:17:11 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:25.990 01:17:11 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:25.990 01:17:11 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:25.990 01:17:11 json_config -- json_config/json_config.sh@352 -- # break 00:06:25.990 01:17:11 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:25.990 01:17:11 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:25.990 01:17:11 json_config -- json_config/common.sh@31 -- # local app=target 00:06:25.990 01:17:11 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:25.990 01:17:11 json_config -- json_config/common.sh@35 -- # [[ -n 1464391 ]] 00:06:25.990 01:17:11 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1464391 00:06:25.990 01:17:11 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:25.990 01:17:11 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:25.990 01:17:11 json_config -- json_config/common.sh@41 -- # kill -0 1464391 00:06:25.990 01:17:11 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:26.633 01:17:12 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:26.633 01:17:12 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:26.633 01:17:12 json_config -- json_config/common.sh@41 -- # kill -0 1464391 00:06:26.633 01:17:12 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:26.633 01:17:12 json_config -- json_config/common.sh@43 -- # break 00:06:26.633 01:17:12 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:26.633 01:17:12 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:26.633 SPDK target shutdown done 00:06:26.633 01:17:12 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:06:26.633 INFO: relaunching applications... 
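The shutdown just traced follows a fixed protocol: clear_config.py walks the clear_*_subsystem helpers over the RPC socket, then the target receives SIGINT and the common helper polls kill -0 for up to 30 half-second intervals before printing 'SPDK target shutdown done'. The relaunch announced above then feeds the saved file straight back to a new target. In outline (tgt_pid is an illustrative variable name, not taken from the log):

    # drop the runtime configuration, then stop the target gracefully
    test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
    kill -SIGINT "$tgt_pid"    # common.sh then waits: up to 30 tries x 0.5 s of kill -0
    # restart from the serialized configuration instead of issuing individual RPCs
    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json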
00:06:26.633 01:17:12 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:26.633 01:17:12 json_config -- json_config/common.sh@9 -- # local app=target 00:06:26.633 01:17:12 json_config -- json_config/common.sh@10 -- # shift 00:06:26.633 01:17:12 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:26.633 01:17:12 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:26.633 01:17:12 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:26.633 01:17:12 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:26.633 01:17:12 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:26.633 01:17:12 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1465689 00:06:26.633 01:17:12 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:26.633 Waiting for target to run... 00:06:26.633 01:17:12 json_config -- json_config/common.sh@25 -- # waitforlisten 1465689 /var/tmp/spdk_tgt.sock 00:06:26.633 01:17:12 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:26.633 01:17:12 json_config -- common/autotest_common.sh@831 -- # '[' -z 1465689 ']' 00:06:26.633 01:17:12 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:26.633 01:17:12 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:26.633 01:17:12 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:26.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:26.633 01:17:12 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:26.633 01:17:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:26.633 [2024-10-13 01:17:12.129438] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:06:26.633 [2024-10-13 01:17:12.129553] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1465689 ] 00:06:26.914 [2024-10-13 01:17:12.469897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.173 [2024-10-13 01:17:12.505697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.454 [2024-10-13 01:17:15.553439] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:30.454 [2024-10-13 01:17:15.585897] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:30.454 01:17:15 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:30.454 01:17:15 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:30.454 01:17:15 json_config -- json_config/common.sh@26 -- # echo '' 00:06:30.454 00:06:30.454 01:17:15 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:30.454 01:17:15 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:30.454 INFO: Checking if target configuration is the same... 
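The check that follows does not compare the raw files byte for byte: json_diff.sh runs the live configuration (tgt_rpc save_config) and the on-disk spdk_tgt_config.json through config_filter.py -method sort before diffing, so ordering differences are ignored and only real content changes fail the test. Roughly equivalent, with temporary file names chosen here for illustration:

    # normalize both sides, then compare
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | test/json_config/config_filter.py -method sort > /tmp/live_sorted.json
    test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/file_sorted.json
    diff -u /tmp/live_sorted.json /tmp/file_sorted.json && echo 'INFO: JSON config files are the same'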
00:06:30.454 01:17:15 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:30.454 01:17:15 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:30.454 01:17:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:30.454 + '[' 2 -ne 2 ']' 00:06:30.454 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:30.454 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:30.454 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:30.454 +++ basename /dev/fd/62 00:06:30.454 ++ mktemp /tmp/62.XXX 00:06:30.454 + tmp_file_1=/tmp/62.flN 00:06:30.454 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:30.454 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:30.454 + tmp_file_2=/tmp/spdk_tgt_config.json.lS7 00:06:30.454 + ret=0 00:06:30.454 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:30.712 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:30.712 + diff -u /tmp/62.flN /tmp/spdk_tgt_config.json.lS7 00:06:30.712 + echo 'INFO: JSON config files are the same' 00:06:30.712 INFO: JSON config files are the same 00:06:30.712 + rm /tmp/62.flN /tmp/spdk_tgt_config.json.lS7 00:06:30.712 + exit 0 00:06:30.712 01:17:16 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:30.712 01:17:16 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:30.712 INFO: changing configuration and checking if this can be detected... 00:06:30.712 01:17:16 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:30.712 01:17:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:30.970 01:17:16 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:30.970 01:17:16 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:30.970 01:17:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:30.970 + '[' 2 -ne 2 ']' 00:06:30.970 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:30.970 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
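The check that just completed dumps the live configuration over RPC (save_config), normalizes both JSON documents with config_filter.py -method sort, and then runs a plain diff; identical output means no drift. A rough standalone equivalent, using generic json.dumps sorting instead of the repo's filter script, looks like:

    # sketch: compare the running target's config against a saved file
    normalize() { python3 -c 'import json,sys; print(json.dumps(json.load(sys.stdin), sort_keys=True, indent=2))'; }
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | normalize > /tmp/live.json
    normalize < ./spdk_tgt_config.json > /tmp/saved.json
    if diff -u /tmp/saved.json /tmp/live.json; then
      echo 'INFO: JSON config files are the same'
    else
      echo 'INFO: configuration change detected.'
    fi
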
00:06:30.970 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:30.970 +++ basename /dev/fd/62 00:06:30.970 ++ mktemp /tmp/62.XXX 00:06:30.970 + tmp_file_1=/tmp/62.CWj 00:06:30.970 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:30.970 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:30.970 + tmp_file_2=/tmp/spdk_tgt_config.json.PNE 00:06:30.970 + ret=0 00:06:30.970 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:31.228 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:31.487 + diff -u /tmp/62.CWj /tmp/spdk_tgt_config.json.PNE 00:06:31.487 + ret=1 00:06:31.487 + echo '=== Start of file: /tmp/62.CWj ===' 00:06:31.487 + cat /tmp/62.CWj 00:06:31.487 + echo '=== End of file: /tmp/62.CWj ===' 00:06:31.487 + echo '' 00:06:31.487 + echo '=== Start of file: /tmp/spdk_tgt_config.json.PNE ===' 00:06:31.487 + cat /tmp/spdk_tgt_config.json.PNE 00:06:31.487 + echo '=== End of file: /tmp/spdk_tgt_config.json.PNE ===' 00:06:31.487 + echo '' 00:06:31.487 + rm /tmp/62.CWj /tmp/spdk_tgt_config.json.PNE 00:06:31.487 + exit 1 00:06:31.487 01:17:16 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:06:31.487 INFO: configuration change detected. 00:06:31.487 01:17:16 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:31.487 01:17:16 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:31.487 01:17:16 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:31.487 01:17:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:31.487 01:17:16 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:31.487 01:17:16 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:31.487 01:17:16 json_config -- json_config/json_config.sh@324 -- # [[ -n 1465689 ]] 00:06:31.487 01:17:16 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:31.487 01:17:16 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:31.487 01:17:16 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:31.487 01:17:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:31.487 01:17:16 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:31.487 01:17:16 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:31.487 01:17:16 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:31.487 01:17:16 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:31.487 01:17:16 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:31.487 01:17:16 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:31.487 01:17:16 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:31.487 01:17:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:31.487 01:17:16 json_config -- json_config/json_config.sh@330 -- # killprocess 1465689 00:06:31.487 01:17:16 json_config -- common/autotest_common.sh@950 -- # '[' -z 1465689 ']' 00:06:31.487 01:17:16 json_config -- common/autotest_common.sh@954 -- # kill -0 1465689 00:06:31.487 01:17:16 json_config -- common/autotest_common.sh@955 -- # uname 00:06:31.487 01:17:16 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:31.487 01:17:16 
json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1465689 00:06:31.487 01:17:16 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:31.487 01:17:16 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:31.487 01:17:16 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1465689' 00:06:31.487 killing process with pid 1465689 00:06:31.487 01:17:16 json_config -- common/autotest_common.sh@969 -- # kill 1465689 00:06:31.487 01:17:16 json_config -- common/autotest_common.sh@974 -- # wait 1465689 00:06:33.388 01:17:18 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:33.388 01:17:18 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:33.388 01:17:18 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:33.388 01:17:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:33.388 01:17:18 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:33.388 01:17:18 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:33.388 INFO: Success 00:06:33.388 00:06:33.388 real 0m16.416s 00:06:33.388 user 0m18.589s 00:06:33.388 sys 0m2.055s 00:06:33.388 01:17:18 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:33.388 01:17:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:33.388 ************************************ 00:06:33.388 END TEST json_config 00:06:33.388 ************************************ 00:06:33.388 01:17:18 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:33.388 01:17:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:33.388 01:17:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:33.388 01:17:18 -- common/autotest_common.sh@10 -- # set +x 00:06:33.388 ************************************ 00:06:33.388 START TEST json_config_extra_key 00:06:33.388 ************************************ 00:06:33.388 01:17:18 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:33.388 01:17:18 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:33.388 01:17:18 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:06:33.388 01:17:18 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:33.388 01:17:18 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:33.388 01:17:18 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:33.388 01:17:18 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:33.388 01:17:18 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:33.388 01:17:18 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:33.388 01:17:18 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:33.388 01:17:18 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:33.388 01:17:18 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:33.388 01:17:18 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:33.388 01:17:18 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:06:33.388 01:17:18 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:33.388 01:17:18 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:33.388 01:17:18 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:33.388 01:17:18 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:33.388 01:17:18 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:33.388 01:17:18 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:33.388 01:17:18 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:33.388 01:17:18 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:33.388 01:17:18 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:33.388 01:17:18 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:33.388 01:17:18 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:33.388 01:17:18 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:33.388 01:17:18 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:33.388 01:17:18 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:33.388 01:17:18 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:33.388 01:17:18 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:33.388 01:17:18 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:33.388 01:17:18 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:33.388 01:17:18 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:33.388 01:17:18 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:33.388 01:17:18 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:33.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.388 --rc genhtml_branch_coverage=1 00:06:33.388 --rc genhtml_function_coverage=1 00:06:33.388 --rc genhtml_legend=1 00:06:33.388 --rc geninfo_all_blocks=1 00:06:33.388 --rc geninfo_unexecuted_blocks=1 00:06:33.388 00:06:33.388 ' 00:06:33.388 01:17:18 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:33.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.388 --rc genhtml_branch_coverage=1 00:06:33.388 --rc genhtml_function_coverage=1 00:06:33.388 --rc genhtml_legend=1 00:06:33.388 --rc geninfo_all_blocks=1 00:06:33.388 --rc geninfo_unexecuted_blocks=1 00:06:33.388 00:06:33.388 ' 00:06:33.388 01:17:18 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:33.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.388 --rc genhtml_branch_coverage=1 00:06:33.388 --rc genhtml_function_coverage=1 00:06:33.388 --rc genhtml_legend=1 00:06:33.388 --rc geninfo_all_blocks=1 00:06:33.388 --rc geninfo_unexecuted_blocks=1 00:06:33.388 00:06:33.388 ' 00:06:33.388 01:17:18 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:33.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.388 --rc genhtml_branch_coverage=1 00:06:33.388 --rc genhtml_function_coverage=1 00:06:33.388 --rc genhtml_legend=1 00:06:33.388 --rc geninfo_all_blocks=1 00:06:33.388 --rc geninfo_unexecuted_blocks=1 00:06:33.388 00:06:33.388 ' 00:06:33.388 01:17:18 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:33.388 01:17:18 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:33.388 01:17:18 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:33.388 01:17:18 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:33.388 01:17:18 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:33.388 01:17:18 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:33.388 01:17:18 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:33.388 01:17:18 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:33.388 01:17:18 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:33.388 01:17:18 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:33.388 01:17:18 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:33.388 01:17:18 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:33.388 01:17:18 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:33.388 01:17:18 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:33.388 01:17:18 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:33.388 01:17:18 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:33.388 01:17:18 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:33.388 01:17:18 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:33.388 01:17:18 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:33.388 01:17:18 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:33.388 01:17:18 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:33.388 01:17:18 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:33.388 01:17:18 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:33.389 01:17:18 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.389 01:17:18 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.389 01:17:18 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.389 01:17:18 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:33.389 01:17:18 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.389 01:17:18 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:33.389 01:17:18 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:33.389 01:17:18 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:33.389 01:17:18 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:33.389 01:17:18 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:33.389 01:17:18 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:33.389 01:17:18 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:33.389 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:33.389 01:17:18 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:33.389 01:17:18 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:33.389 01:17:18 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:33.389 01:17:18 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:33.389 01:17:18 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:33.389 01:17:18 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:33.389 01:17:18 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:33.389 01:17:18 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:33.389 01:17:18 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:33.389 01:17:18 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:33.389 01:17:18 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:33.389 01:17:18 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:33.389 01:17:18 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:33.389 01:17:18 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:33.389 INFO: launching applications... 
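The "[: : integer expression expected" message captured above is the usual bash failure mode when an empty string reaches a numeric test, as in nvmf/common.sh line 33 ('[' '' -eq 1 ']'). A minimal reproduction and the usual guard (the variable name here is purely illustrative) are:

    # reproduces the error seen above
    flag=""
    [ "$flag" -eq 1 ] && echo yes        # -> "[: : integer expression expected"

    # common guard: expand a numeric default before comparing
    [ "${flag:-0}" -eq 1 ] && echo yes   # quietly evaluates to false
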
00:06:33.389 01:17:18 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:33.389 01:17:18 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:33.389 01:17:18 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:33.389 01:17:18 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:33.389 01:17:18 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:33.389 01:17:18 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:33.389 01:17:18 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:33.389 01:17:18 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:33.389 01:17:18 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1466612 00:06:33.389 01:17:18 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:33.389 01:17:18 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:33.389 Waiting for target to run... 00:06:33.389 01:17:18 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1466612 /var/tmp/spdk_tgt.sock 00:06:33.389 01:17:18 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 1466612 ']' 00:06:33.389 01:17:18 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:33.389 01:17:18 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:33.389 01:17:18 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:33.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:33.389 01:17:18 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:33.389 01:17:18 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:33.389 [2024-10-13 01:17:18.738236] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:06:33.389 [2024-10-13 01:17:18.738315] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1466612 ] 00:06:33.647 [2024-10-13 01:17:19.074875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.647 [2024-10-13 01:17:19.109820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.213 01:17:19 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:34.213 01:17:19 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:34.213 01:17:19 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:34.213 00:06:34.213 01:17:19 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:34.213 INFO: shutting down applications... 
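The traces above show json_config/common.sh tracking each launched app through associative arrays (app_pid, app_socket, app_params, configs_path), keyed by the app name "target". A condensed sketch of that bookkeeping pattern, simplified and not the full helper, is:

    # sketch: per-app bookkeeping with bash associative arrays
    declare -A app_pid app_socket app_params
    app_socket[target]=/var/tmp/spdk_tgt.sock
    app_params[target]="-m 0x1 -s 1024"

    start_app() {
      local app=$1 json=$2
      # params are intentionally left unquoted so they word-split into flags
      ./build/bin/spdk_tgt ${app_params[$app]} -r "${app_socket[$app]}" --json "$json" &
      app_pid[$app]=$!
    }

    stop_app() {
      local app=$1
      [ -n "${app_pid[$app]:-}" ] && kill -SIGINT "${app_pid[$app]}"
      app_pid[$app]=""
    }
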
00:06:34.213 01:17:19 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:34.213 01:17:19 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:34.213 01:17:19 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:34.213 01:17:19 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1466612 ]] 00:06:34.213 01:17:19 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1466612 00:06:34.213 01:17:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:34.213 01:17:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:34.213 01:17:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1466612 00:06:34.213 01:17:19 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:34.780 01:17:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:34.780 01:17:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:34.780 01:17:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1466612 00:06:34.780 01:17:20 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:34.780 01:17:20 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:34.780 01:17:20 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:34.780 01:17:20 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:34.780 SPDK target shutdown done 00:06:34.780 01:17:20 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:34.780 Success 00:06:34.780 00:06:34.780 real 0m1.679s 00:06:34.780 user 0m1.650s 00:06:34.780 sys 0m0.456s 00:06:34.780 01:17:20 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.780 01:17:20 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:34.780 ************************************ 00:06:34.780 END TEST json_config_extra_key 00:06:34.780 ************************************ 00:06:34.780 01:17:20 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:34.780 01:17:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:34.780 01:17:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.780 01:17:20 -- common/autotest_common.sh@10 -- # set +x 00:06:34.780 ************************************ 00:06:34.780 START TEST alias_rpc 00:06:34.780 ************************************ 00:06:34.780 01:17:20 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:34.780 * Looking for test storage... 
00:06:34.780 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:34.780 01:17:20 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:34.780 01:17:20 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:06:34.780 01:17:20 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:35.038 01:17:20 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:35.038 01:17:20 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:35.038 01:17:20 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:35.038 01:17:20 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:35.038 01:17:20 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:35.038 01:17:20 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:35.038 01:17:20 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:35.038 01:17:20 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:35.038 01:17:20 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:35.038 01:17:20 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:35.038 01:17:20 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:35.038 01:17:20 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:35.038 01:17:20 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:35.038 01:17:20 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:35.038 01:17:20 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:35.038 01:17:20 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:35.038 01:17:20 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:35.038 01:17:20 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:35.038 01:17:20 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:35.038 01:17:20 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:35.038 01:17:20 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:35.038 01:17:20 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:35.038 01:17:20 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:35.038 01:17:20 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:35.038 01:17:20 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:35.038 01:17:20 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:35.039 01:17:20 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:35.039 01:17:20 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:35.039 01:17:20 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:35.039 01:17:20 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:35.039 01:17:20 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:35.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.039 --rc genhtml_branch_coverage=1 00:06:35.039 --rc genhtml_function_coverage=1 00:06:35.039 --rc genhtml_legend=1 00:06:35.039 --rc geninfo_all_blocks=1 00:06:35.039 --rc geninfo_unexecuted_blocks=1 00:06:35.039 00:06:35.039 ' 00:06:35.039 01:17:20 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:35.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.039 --rc genhtml_branch_coverage=1 00:06:35.039 --rc genhtml_function_coverage=1 00:06:35.039 --rc genhtml_legend=1 00:06:35.039 --rc geninfo_all_blocks=1 00:06:35.039 --rc geninfo_unexecuted_blocks=1 00:06:35.039 00:06:35.039 ' 00:06:35.039 01:17:20 
alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:35.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.039 --rc genhtml_branch_coverage=1 00:06:35.039 --rc genhtml_function_coverage=1 00:06:35.039 --rc genhtml_legend=1 00:06:35.039 --rc geninfo_all_blocks=1 00:06:35.039 --rc geninfo_unexecuted_blocks=1 00:06:35.039 00:06:35.039 ' 00:06:35.039 01:17:20 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:35.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.039 --rc genhtml_branch_coverage=1 00:06:35.039 --rc genhtml_function_coverage=1 00:06:35.039 --rc genhtml_legend=1 00:06:35.039 --rc geninfo_all_blocks=1 00:06:35.039 --rc geninfo_unexecuted_blocks=1 00:06:35.039 00:06:35.039 ' 00:06:35.039 01:17:20 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:35.039 01:17:20 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1466928 00:06:35.039 01:17:20 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1466928 00:06:35.039 01:17:20 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 1466928 ']' 00:06:35.039 01:17:20 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.039 01:17:20 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:35.039 01:17:20 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:35.039 01:17:20 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.039 01:17:20 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:35.039 01:17:20 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.039 [2024-10-13 01:17:20.464193] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
00:06:35.039 [2024-10-13 01:17:20.464278] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1466928 ] 00:06:35.039 [2024-10-13 01:17:20.525311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.039 [2024-10-13 01:17:20.574783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.297 01:17:20 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:35.297 01:17:20 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:35.297 01:17:20 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:35.863 01:17:21 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1466928 00:06:35.863 01:17:21 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 1466928 ']' 00:06:35.863 01:17:21 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 1466928 00:06:35.863 01:17:21 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:35.863 01:17:21 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:35.863 01:17:21 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1466928 00:06:35.863 01:17:21 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:35.863 01:17:21 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:35.863 01:17:21 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1466928' 00:06:35.864 killing process with pid 1466928 00:06:35.864 01:17:21 alias_rpc -- common/autotest_common.sh@969 -- # kill 1466928 00:06:35.864 01:17:21 alias_rpc -- common/autotest_common.sh@974 -- # wait 1466928 00:06:36.122 00:06:36.122 real 0m1.301s 00:06:36.122 user 0m1.406s 00:06:36.122 sys 0m0.466s 00:06:36.122 01:17:21 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:36.122 01:17:21 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.122 ************************************ 00:06:36.122 END TEST alias_rpc 00:06:36.122 ************************************ 00:06:36.122 01:17:21 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:36.122 01:17:21 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:36.122 01:17:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:36.122 01:17:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:36.122 01:17:21 -- common/autotest_common.sh@10 -- # set +x 00:06:36.122 ************************************ 00:06:36.122 START TEST spdkcli_tcp 00:06:36.122 ************************************ 00:06:36.122 01:17:21 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:36.122 * Looking for test storage... 
00:06:36.122 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:36.122 01:17:21 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:36.122 01:17:21 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:06:36.122 01:17:21 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:36.381 01:17:21 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:36.381 01:17:21 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:36.381 01:17:21 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:36.381 01:17:21 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:36.381 01:17:21 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:36.381 01:17:21 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:36.381 01:17:21 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:36.381 01:17:21 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:36.381 01:17:21 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:36.381 01:17:21 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:36.381 01:17:21 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:36.381 01:17:21 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:36.381 01:17:21 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:36.381 01:17:21 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:36.381 01:17:21 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:36.381 01:17:21 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:36.381 01:17:21 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:36.381 01:17:21 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:36.381 01:17:21 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:36.381 01:17:21 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:36.381 01:17:21 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:36.381 01:17:21 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:36.381 01:17:21 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:36.381 01:17:21 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:36.381 01:17:21 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:36.381 01:17:21 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:36.381 01:17:21 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:36.381 01:17:21 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:36.381 01:17:21 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:36.381 01:17:21 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:36.381 01:17:21 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:36.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.381 --rc genhtml_branch_coverage=1 00:06:36.381 --rc genhtml_function_coverage=1 00:06:36.381 --rc genhtml_legend=1 00:06:36.381 --rc geninfo_all_blocks=1 00:06:36.381 --rc geninfo_unexecuted_blocks=1 00:06:36.381 00:06:36.381 ' 00:06:36.381 01:17:21 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:36.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.381 --rc genhtml_branch_coverage=1 00:06:36.381 --rc genhtml_function_coverage=1 00:06:36.381 --rc genhtml_legend=1 00:06:36.381 --rc geninfo_all_blocks=1 00:06:36.381 --rc 
geninfo_unexecuted_blocks=1 00:06:36.381 00:06:36.381 ' 00:06:36.381 01:17:21 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:36.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.381 --rc genhtml_branch_coverage=1 00:06:36.381 --rc genhtml_function_coverage=1 00:06:36.381 --rc genhtml_legend=1 00:06:36.381 --rc geninfo_all_blocks=1 00:06:36.381 --rc geninfo_unexecuted_blocks=1 00:06:36.381 00:06:36.381 ' 00:06:36.381 01:17:21 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:36.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.381 --rc genhtml_branch_coverage=1 00:06:36.381 --rc genhtml_function_coverage=1 00:06:36.381 --rc genhtml_legend=1 00:06:36.381 --rc geninfo_all_blocks=1 00:06:36.381 --rc geninfo_unexecuted_blocks=1 00:06:36.381 00:06:36.381 ' 00:06:36.381 01:17:21 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:36.381 01:17:21 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:36.381 01:17:21 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:36.381 01:17:21 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:36.381 01:17:21 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:36.381 01:17:21 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:36.381 01:17:21 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:36.381 01:17:21 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:36.381 01:17:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:36.381 01:17:21 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1467128 00:06:36.381 01:17:21 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:36.381 01:17:21 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1467128 00:06:36.381 01:17:21 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 1467128 ']' 00:06:36.381 01:17:21 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.381 01:17:21 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:36.381 01:17:21 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.381 01:17:21 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:36.381 01:17:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:36.381 [2024-10-13 01:17:21.809376] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
00:06:36.381 [2024-10-13 01:17:21.809482] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1467128 ] 00:06:36.381 [2024-10-13 01:17:21.865835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:36.381 [2024-10-13 01:17:21.914592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.381 [2024-10-13 01:17:21.914599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.639 01:17:22 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:36.639 01:17:22 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:36.639 01:17:22 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1467136 00:06:36.639 01:17:22 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:36.639 01:17:22 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:36.898 [ 00:06:36.898 "bdev_malloc_delete", 00:06:36.898 "bdev_malloc_create", 00:06:36.898 "bdev_null_resize", 00:06:36.898 "bdev_null_delete", 00:06:36.898 "bdev_null_create", 00:06:36.898 "bdev_nvme_cuse_unregister", 00:06:36.898 "bdev_nvme_cuse_register", 00:06:36.898 "bdev_opal_new_user", 00:06:36.898 "bdev_opal_set_lock_state", 00:06:36.898 "bdev_opal_delete", 00:06:36.898 "bdev_opal_get_info", 00:06:36.898 "bdev_opal_create", 00:06:36.898 "bdev_nvme_opal_revert", 00:06:36.898 "bdev_nvme_opal_init", 00:06:36.898 "bdev_nvme_send_cmd", 00:06:36.898 "bdev_nvme_set_keys", 00:06:36.898 "bdev_nvme_get_path_iostat", 00:06:36.898 "bdev_nvme_get_mdns_discovery_info", 00:06:36.898 "bdev_nvme_stop_mdns_discovery", 00:06:36.898 "bdev_nvme_start_mdns_discovery", 00:06:36.898 "bdev_nvme_set_multipath_policy", 00:06:36.898 "bdev_nvme_set_preferred_path", 00:06:36.898 "bdev_nvme_get_io_paths", 00:06:36.898 "bdev_nvme_remove_error_injection", 00:06:36.898 "bdev_nvme_add_error_injection", 00:06:36.898 "bdev_nvme_get_discovery_info", 00:06:36.898 "bdev_nvme_stop_discovery", 00:06:36.898 "bdev_nvme_start_discovery", 00:06:36.898 "bdev_nvme_get_controller_health_info", 00:06:36.898 "bdev_nvme_disable_controller", 00:06:36.898 "bdev_nvme_enable_controller", 00:06:36.898 "bdev_nvme_reset_controller", 00:06:36.898 "bdev_nvme_get_transport_statistics", 00:06:36.898 "bdev_nvme_apply_firmware", 00:06:36.898 "bdev_nvme_detach_controller", 00:06:36.898 "bdev_nvme_get_controllers", 00:06:36.898 "bdev_nvme_attach_controller", 00:06:36.898 "bdev_nvme_set_hotplug", 00:06:36.898 "bdev_nvme_set_options", 00:06:36.898 "bdev_passthru_delete", 00:06:36.898 "bdev_passthru_create", 00:06:36.898 "bdev_lvol_set_parent_bdev", 00:06:36.898 "bdev_lvol_set_parent", 00:06:36.898 "bdev_lvol_check_shallow_copy", 00:06:36.898 "bdev_lvol_start_shallow_copy", 00:06:36.898 "bdev_lvol_grow_lvstore", 00:06:36.898 "bdev_lvol_get_lvols", 00:06:36.898 "bdev_lvol_get_lvstores", 00:06:36.898 "bdev_lvol_delete", 00:06:36.898 "bdev_lvol_set_read_only", 00:06:36.898 "bdev_lvol_resize", 00:06:36.898 "bdev_lvol_decouple_parent", 00:06:36.898 "bdev_lvol_inflate", 00:06:36.898 "bdev_lvol_rename", 00:06:36.898 "bdev_lvol_clone_bdev", 00:06:36.898 "bdev_lvol_clone", 00:06:36.898 "bdev_lvol_snapshot", 00:06:36.898 "bdev_lvol_create", 00:06:36.898 "bdev_lvol_delete_lvstore", 00:06:36.898 "bdev_lvol_rename_lvstore", 
00:06:36.898 "bdev_lvol_create_lvstore", 00:06:36.898 "bdev_raid_set_options", 00:06:36.898 "bdev_raid_remove_base_bdev", 00:06:36.898 "bdev_raid_add_base_bdev", 00:06:36.898 "bdev_raid_delete", 00:06:36.898 "bdev_raid_create", 00:06:36.898 "bdev_raid_get_bdevs", 00:06:36.898 "bdev_error_inject_error", 00:06:36.898 "bdev_error_delete", 00:06:36.898 "bdev_error_create", 00:06:36.898 "bdev_split_delete", 00:06:36.898 "bdev_split_create", 00:06:36.898 "bdev_delay_delete", 00:06:36.898 "bdev_delay_create", 00:06:36.898 "bdev_delay_update_latency", 00:06:36.898 "bdev_zone_block_delete", 00:06:36.898 "bdev_zone_block_create", 00:06:36.898 "blobfs_create", 00:06:36.898 "blobfs_detect", 00:06:36.898 "blobfs_set_cache_size", 00:06:36.898 "bdev_aio_delete", 00:06:36.898 "bdev_aio_rescan", 00:06:36.898 "bdev_aio_create", 00:06:36.898 "bdev_ftl_set_property", 00:06:36.898 "bdev_ftl_get_properties", 00:06:36.898 "bdev_ftl_get_stats", 00:06:36.898 "bdev_ftl_unmap", 00:06:36.898 "bdev_ftl_unload", 00:06:36.898 "bdev_ftl_delete", 00:06:36.898 "bdev_ftl_load", 00:06:36.898 "bdev_ftl_create", 00:06:36.898 "bdev_virtio_attach_controller", 00:06:36.898 "bdev_virtio_scsi_get_devices", 00:06:36.898 "bdev_virtio_detach_controller", 00:06:36.898 "bdev_virtio_blk_set_hotplug", 00:06:36.898 "bdev_iscsi_delete", 00:06:36.898 "bdev_iscsi_create", 00:06:36.898 "bdev_iscsi_set_options", 00:06:36.898 "accel_error_inject_error", 00:06:36.898 "ioat_scan_accel_module", 00:06:36.898 "dsa_scan_accel_module", 00:06:36.898 "iaa_scan_accel_module", 00:06:36.898 "vfu_virtio_create_fs_endpoint", 00:06:36.898 "vfu_virtio_create_scsi_endpoint", 00:06:36.898 "vfu_virtio_scsi_remove_target", 00:06:36.898 "vfu_virtio_scsi_add_target", 00:06:36.898 "vfu_virtio_create_blk_endpoint", 00:06:36.898 "vfu_virtio_delete_endpoint", 00:06:36.898 "keyring_file_remove_key", 00:06:36.898 "keyring_file_add_key", 00:06:36.898 "keyring_linux_set_options", 00:06:36.898 "fsdev_aio_delete", 00:06:36.898 "fsdev_aio_create", 00:06:36.898 "iscsi_get_histogram", 00:06:36.898 "iscsi_enable_histogram", 00:06:36.898 "iscsi_set_options", 00:06:36.898 "iscsi_get_auth_groups", 00:06:36.898 "iscsi_auth_group_remove_secret", 00:06:36.898 "iscsi_auth_group_add_secret", 00:06:36.898 "iscsi_delete_auth_group", 00:06:36.898 "iscsi_create_auth_group", 00:06:36.898 "iscsi_set_discovery_auth", 00:06:36.898 "iscsi_get_options", 00:06:36.898 "iscsi_target_node_request_logout", 00:06:36.898 "iscsi_target_node_set_redirect", 00:06:36.898 "iscsi_target_node_set_auth", 00:06:36.898 "iscsi_target_node_add_lun", 00:06:36.898 "iscsi_get_stats", 00:06:36.898 "iscsi_get_connections", 00:06:36.898 "iscsi_portal_group_set_auth", 00:06:36.898 "iscsi_start_portal_group", 00:06:36.898 "iscsi_delete_portal_group", 00:06:36.898 "iscsi_create_portal_group", 00:06:36.898 "iscsi_get_portal_groups", 00:06:36.898 "iscsi_delete_target_node", 00:06:36.898 "iscsi_target_node_remove_pg_ig_maps", 00:06:36.898 "iscsi_target_node_add_pg_ig_maps", 00:06:36.898 "iscsi_create_target_node", 00:06:36.898 "iscsi_get_target_nodes", 00:06:36.898 "iscsi_delete_initiator_group", 00:06:36.898 "iscsi_initiator_group_remove_initiators", 00:06:36.898 "iscsi_initiator_group_add_initiators", 00:06:36.898 "iscsi_create_initiator_group", 00:06:36.898 "iscsi_get_initiator_groups", 00:06:36.898 "nvmf_set_crdt", 00:06:36.898 "nvmf_set_config", 00:06:36.898 "nvmf_set_max_subsystems", 00:06:36.898 "nvmf_stop_mdns_prr", 00:06:36.898 "nvmf_publish_mdns_prr", 00:06:36.898 "nvmf_subsystem_get_listeners", 00:06:36.898 
"nvmf_subsystem_get_qpairs", 00:06:36.898 "nvmf_subsystem_get_controllers", 00:06:36.898 "nvmf_get_stats", 00:06:36.898 "nvmf_get_transports", 00:06:36.898 "nvmf_create_transport", 00:06:36.898 "nvmf_get_targets", 00:06:36.898 "nvmf_delete_target", 00:06:36.898 "nvmf_create_target", 00:06:36.898 "nvmf_subsystem_allow_any_host", 00:06:36.898 "nvmf_subsystem_set_keys", 00:06:36.898 "nvmf_subsystem_remove_host", 00:06:36.898 "nvmf_subsystem_add_host", 00:06:36.898 "nvmf_ns_remove_host", 00:06:36.898 "nvmf_ns_add_host", 00:06:36.898 "nvmf_subsystem_remove_ns", 00:06:36.898 "nvmf_subsystem_set_ns_ana_group", 00:06:36.898 "nvmf_subsystem_add_ns", 00:06:36.898 "nvmf_subsystem_listener_set_ana_state", 00:06:36.898 "nvmf_discovery_get_referrals", 00:06:36.898 "nvmf_discovery_remove_referral", 00:06:36.898 "nvmf_discovery_add_referral", 00:06:36.898 "nvmf_subsystem_remove_listener", 00:06:36.898 "nvmf_subsystem_add_listener", 00:06:36.898 "nvmf_delete_subsystem", 00:06:36.898 "nvmf_create_subsystem", 00:06:36.898 "nvmf_get_subsystems", 00:06:36.898 "env_dpdk_get_mem_stats", 00:06:36.898 "nbd_get_disks", 00:06:36.898 "nbd_stop_disk", 00:06:36.898 "nbd_start_disk", 00:06:36.898 "ublk_recover_disk", 00:06:36.898 "ublk_get_disks", 00:06:36.898 "ublk_stop_disk", 00:06:36.898 "ublk_start_disk", 00:06:36.898 "ublk_destroy_target", 00:06:36.898 "ublk_create_target", 00:06:36.898 "virtio_blk_create_transport", 00:06:36.898 "virtio_blk_get_transports", 00:06:36.898 "vhost_controller_set_coalescing", 00:06:36.898 "vhost_get_controllers", 00:06:36.898 "vhost_delete_controller", 00:06:36.898 "vhost_create_blk_controller", 00:06:36.898 "vhost_scsi_controller_remove_target", 00:06:36.898 "vhost_scsi_controller_add_target", 00:06:36.898 "vhost_start_scsi_controller", 00:06:36.898 "vhost_create_scsi_controller", 00:06:36.898 "thread_set_cpumask", 00:06:36.898 "scheduler_set_options", 00:06:36.898 "framework_get_governor", 00:06:36.898 "framework_get_scheduler", 00:06:36.898 "framework_set_scheduler", 00:06:36.898 "framework_get_reactors", 00:06:36.898 "thread_get_io_channels", 00:06:36.898 "thread_get_pollers", 00:06:36.898 "thread_get_stats", 00:06:36.898 "framework_monitor_context_switch", 00:06:36.898 "spdk_kill_instance", 00:06:36.898 "log_enable_timestamps", 00:06:36.898 "log_get_flags", 00:06:36.898 "log_clear_flag", 00:06:36.898 "log_set_flag", 00:06:36.898 "log_get_level", 00:06:36.898 "log_set_level", 00:06:36.898 "log_get_print_level", 00:06:36.898 "log_set_print_level", 00:06:36.898 "framework_enable_cpumask_locks", 00:06:36.898 "framework_disable_cpumask_locks", 00:06:36.898 "framework_wait_init", 00:06:36.898 "framework_start_init", 00:06:36.898 "scsi_get_devices", 00:06:36.898 "bdev_get_histogram", 00:06:36.898 "bdev_enable_histogram", 00:06:36.898 "bdev_set_qos_limit", 00:06:36.898 "bdev_set_qd_sampling_period", 00:06:36.898 "bdev_get_bdevs", 00:06:36.898 "bdev_reset_iostat", 00:06:36.898 "bdev_get_iostat", 00:06:36.898 "bdev_examine", 00:06:36.898 "bdev_wait_for_examine", 00:06:36.898 "bdev_set_options", 00:06:36.898 "accel_get_stats", 00:06:36.898 "accel_set_options", 00:06:36.898 "accel_set_driver", 00:06:36.898 "accel_crypto_key_destroy", 00:06:36.898 "accel_crypto_keys_get", 00:06:36.898 "accel_crypto_key_create", 00:06:36.898 "accel_assign_opc", 00:06:36.898 "accel_get_module_info", 00:06:36.898 "accel_get_opc_assignments", 00:06:36.898 "vmd_rescan", 00:06:36.898 "vmd_remove_device", 00:06:36.898 "vmd_enable", 00:06:36.898 "sock_get_default_impl", 00:06:36.898 "sock_set_default_impl", 
00:06:36.898 "sock_impl_set_options", 00:06:36.898 "sock_impl_get_options", 00:06:36.899 "iobuf_get_stats", 00:06:36.899 "iobuf_set_options", 00:06:36.899 "keyring_get_keys", 00:06:36.899 "vfu_tgt_set_base_path", 00:06:36.899 "framework_get_pci_devices", 00:06:36.899 "framework_get_config", 00:06:36.899 "framework_get_subsystems", 00:06:36.899 "fsdev_set_opts", 00:06:36.899 "fsdev_get_opts", 00:06:36.899 "trace_get_info", 00:06:36.899 "trace_get_tpoint_group_mask", 00:06:36.899 "trace_disable_tpoint_group", 00:06:36.899 "trace_enable_tpoint_group", 00:06:36.899 "trace_clear_tpoint_mask", 00:06:36.899 "trace_set_tpoint_mask", 00:06:36.899 "notify_get_notifications", 00:06:36.899 "notify_get_types", 00:06:36.899 "spdk_get_version", 00:06:36.899 "rpc_get_methods" 00:06:36.899 ] 00:06:36.899 01:17:22 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:36.899 01:17:22 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:36.899 01:17:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:37.157 01:17:22 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:37.157 01:17:22 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1467128 00:06:37.157 01:17:22 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 1467128 ']' 00:06:37.157 01:17:22 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 1467128 00:06:37.157 01:17:22 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:37.157 01:17:22 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:37.157 01:17:22 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1467128 00:06:37.157 01:17:22 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:37.157 01:17:22 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:37.157 01:17:22 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1467128' 00:06:37.157 killing process with pid 1467128 00:06:37.157 01:17:22 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 1467128 00:06:37.157 01:17:22 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 1467128 00:06:37.415 00:06:37.415 real 0m1.312s 00:06:37.415 user 0m2.372s 00:06:37.415 sys 0m0.465s 00:06:37.415 01:17:22 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.415 01:17:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:37.416 ************************************ 00:06:37.416 END TEST spdkcli_tcp 00:06:37.416 ************************************ 00:06:37.416 01:17:22 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:37.416 01:17:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:37.416 01:17:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.416 01:17:22 -- common/autotest_common.sh@10 -- # set +x 00:06:37.416 ************************************ 00:06:37.416 START TEST dpdk_mem_utility 00:06:37.416 ************************************ 00:06:37.416 01:17:22 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:37.674 * Looking for test storage... 
00:06:37.674 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:37.674 01:17:23 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:37.674 01:17:23 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:06:37.674 01:17:23 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:37.674 01:17:23 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:37.674 01:17:23 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:37.674 01:17:23 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:37.674 01:17:23 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:37.674 01:17:23 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:37.674 01:17:23 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:37.674 01:17:23 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:37.674 01:17:23 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:37.674 01:17:23 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:37.674 01:17:23 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:37.674 01:17:23 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:37.674 01:17:23 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:37.674 01:17:23 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:37.674 01:17:23 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:37.674 01:17:23 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:37.674 01:17:23 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:37.674 01:17:23 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:37.674 01:17:23 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:37.674 01:17:23 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.674 01:17:23 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:37.674 01:17:23 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:37.674 01:17:23 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:37.674 01:17:23 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:37.674 01:17:23 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:37.674 01:17:23 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:37.674 01:17:23 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:37.674 01:17:23 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:37.674 01:17:23 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:37.674 01:17:23 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:37.674 01:17:23 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:37.674 01:17:23 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:37.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.674 --rc genhtml_branch_coverage=1 00:06:37.674 --rc genhtml_function_coverage=1 00:06:37.674 --rc genhtml_legend=1 00:06:37.674 --rc geninfo_all_blocks=1 00:06:37.674 --rc geninfo_unexecuted_blocks=1 00:06:37.674 00:06:37.674 ' 00:06:37.674 01:17:23 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:37.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.674 --rc 
genhtml_branch_coverage=1 00:06:37.674 --rc genhtml_function_coverage=1 00:06:37.674 --rc genhtml_legend=1 00:06:37.674 --rc geninfo_all_blocks=1 00:06:37.674 --rc geninfo_unexecuted_blocks=1 00:06:37.674 00:06:37.674 ' 00:06:37.674 01:17:23 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:37.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.674 --rc genhtml_branch_coverage=1 00:06:37.674 --rc genhtml_function_coverage=1 00:06:37.674 --rc genhtml_legend=1 00:06:37.674 --rc geninfo_all_blocks=1 00:06:37.674 --rc geninfo_unexecuted_blocks=1 00:06:37.674 00:06:37.674 ' 00:06:37.674 01:17:23 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:37.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.674 --rc genhtml_branch_coverage=1 00:06:37.674 --rc genhtml_function_coverage=1 00:06:37.674 --rc genhtml_legend=1 00:06:37.674 --rc geninfo_all_blocks=1 00:06:37.674 --rc geninfo_unexecuted_blocks=1 00:06:37.674 00:06:37.674 ' 00:06:37.674 01:17:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:37.674 01:17:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1467338 00:06:37.674 01:17:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:37.675 01:17:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1467338 00:06:37.675 01:17:23 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 1467338 ']' 00:06:37.675 01:17:23 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.675 01:17:23 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:37.675 01:17:23 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.675 01:17:23 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:37.675 01:17:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:37.675 [2024-10-13 01:17:23.172742] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
00:06:37.675 [2024-10-13 01:17:23.172844] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1467338 ] 00:06:37.675 [2024-10-13 01:17:23.229326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.933 [2024-10-13 01:17:23.278550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.191 01:17:23 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:38.191 01:17:23 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:38.191 01:17:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:38.191 01:17:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:38.191 01:17:23 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.191 01:17:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:38.191 { 00:06:38.191 "filename": "/tmp/spdk_mem_dump.txt" 00:06:38.191 } 00:06:38.191 01:17:23 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.191 01:17:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:38.191 DPDK memory size 810.000000 MiB in 1 heap(s) 00:06:38.191 1 heaps totaling size 810.000000 MiB 00:06:38.191 size: 810.000000 MiB heap id: 0 00:06:38.191 end heaps---------- 00:06:38.191 9 mempools totaling size 595.772034 MiB 00:06:38.191 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:38.191 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:38.191 size: 92.545471 MiB name: bdev_io_1467338 00:06:38.191 size: 50.003479 MiB name: msgpool_1467338 00:06:38.191 size: 36.509338 MiB name: fsdev_io_1467338 00:06:38.191 size: 21.763794 MiB name: PDU_Pool 00:06:38.191 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:38.191 size: 4.133484 MiB name: evtpool_1467338 00:06:38.191 size: 0.026123 MiB name: Session_Pool 00:06:38.191 end mempools------- 00:06:38.191 6 memzones totaling size 4.142822 MiB 00:06:38.191 size: 1.000366 MiB name: RG_ring_0_1467338 00:06:38.191 size: 1.000366 MiB name: RG_ring_1_1467338 00:06:38.191 size: 1.000366 MiB name: RG_ring_4_1467338 00:06:38.191 size: 1.000366 MiB name: RG_ring_5_1467338 00:06:38.191 size: 0.125366 MiB name: RG_ring_2_1467338 00:06:38.191 size: 0.015991 MiB name: RG_ring_3_1467338 00:06:38.191 end memzones------- 00:06:38.191 01:17:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:38.191 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:06:38.191 list of free elements. 
size: 10.862488 MiB 00:06:38.191 element at address: 0x200018a00000 with size: 0.999878 MiB 00:06:38.191 element at address: 0x200018c00000 with size: 0.999878 MiB 00:06:38.191 element at address: 0x200000400000 with size: 0.998535 MiB 00:06:38.191 element at address: 0x200031800000 with size: 0.994446 MiB 00:06:38.191 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:38.191 element at address: 0x200012c00000 with size: 0.954285 MiB 00:06:38.192 element at address: 0x200018e00000 with size: 0.936584 MiB 00:06:38.192 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:38.192 element at address: 0x20001a600000 with size: 0.582886 MiB 00:06:38.192 element at address: 0x200000c00000 with size: 0.495422 MiB 00:06:38.192 element at address: 0x20000a600000 with size: 0.490723 MiB 00:06:38.192 element at address: 0x200019000000 with size: 0.485657 MiB 00:06:38.192 element at address: 0x200003e00000 with size: 0.481934 MiB 00:06:38.192 element at address: 0x200027a00000 with size: 0.410034 MiB 00:06:38.192 element at address: 0x200000800000 with size: 0.355042 MiB 00:06:38.192 list of standard malloc elements. size: 199.218628 MiB 00:06:38.192 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:38.192 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:38.192 element at address: 0x200018afff80 with size: 1.000122 MiB 00:06:38.192 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:06:38.192 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:38.192 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:38.192 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:06:38.192 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:38.192 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:06:38.192 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:38.192 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:38.192 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:38.192 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:38.192 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:06:38.192 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:38.192 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:38.192 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:06:38.192 element at address: 0x20000085b040 with size: 0.000183 MiB 00:06:38.192 element at address: 0x20000085f300 with size: 0.000183 MiB 00:06:38.192 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:38.192 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:38.192 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:38.192 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:38.192 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:38.192 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:38.192 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:38.192 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:38.192 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:38.192 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:38.192 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:38.192 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:38.192 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:38.192 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:06:38.192 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:06:38.192 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:06:38.192 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:06:38.192 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:06:38.192 element at address: 0x20001a695380 with size: 0.000183 MiB 00:06:38.192 element at address: 0x20001a695440 with size: 0.000183 MiB 00:06:38.192 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:06:38.192 element at address: 0x200027a69040 with size: 0.000183 MiB 00:06:38.192 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:06:38.192 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:06:38.192 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:06:38.192 list of memzone associated elements. size: 599.918884 MiB 00:06:38.192 element at address: 0x20001a695500 with size: 211.416748 MiB 00:06:38.192 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:38.192 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:06:38.192 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:38.192 element at address: 0x200012df4780 with size: 92.045044 MiB 00:06:38.192 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_1467338_0 00:06:38.192 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:38.192 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1467338_0 00:06:38.192 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:38.192 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1467338_0 00:06:38.192 element at address: 0x2000191be940 with size: 20.255554 MiB 00:06:38.192 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:38.192 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:06:38.192 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:38.192 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:38.192 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1467338_0 00:06:38.192 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:38.192 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1467338 00:06:38.192 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:38.192 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1467338 00:06:38.192 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:38.192 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:38.192 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:06:38.192 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:38.192 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:38.192 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:38.192 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:38.192 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:38.192 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:38.192 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1467338 00:06:38.192 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:38.192 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1467338 00:06:38.192 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:06:38.192 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1467338 00:06:38.192 element at address: 
0x2000318fe940 with size: 1.000488 MiB 00:06:38.192 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1467338 00:06:38.192 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:38.192 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1467338 00:06:38.192 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:38.192 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1467338 00:06:38.192 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:38.192 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:38.192 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:38.192 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:38.192 element at address: 0x20001907c540 with size: 0.250488 MiB 00:06:38.192 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:38.192 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:38.192 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1467338 00:06:38.192 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:06:38.192 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1467338 00:06:38.192 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:38.192 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:38.192 element at address: 0x200027a69100 with size: 0.023743 MiB 00:06:38.192 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:38.192 element at address: 0x20000085b100 with size: 0.016113 MiB 00:06:38.192 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1467338 00:06:38.192 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:06:38.192 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:38.192 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:06:38.192 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1467338 00:06:38.192 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:38.192 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1467338 00:06:38.192 element at address: 0x20000085af00 with size: 0.000305 MiB 00:06:38.192 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1467338 00:06:38.192 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:06:38.192 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:38.192 01:17:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:38.192 01:17:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1467338 00:06:38.192 01:17:23 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 1467338 ']' 00:06:38.192 01:17:23 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 1467338 00:06:38.192 01:17:23 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:38.192 01:17:23 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:38.192 01:17:23 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1467338 00:06:38.192 01:17:23 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:38.192 01:17:23 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:38.192 01:17:23 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1467338' 00:06:38.192 killing process with pid 1467338 00:06:38.192 01:17:23 
dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 1467338 00:06:38.192 01:17:23 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 1467338 00:06:38.758 00:06:38.758 real 0m1.141s 00:06:38.758 user 0m1.118s 00:06:38.758 sys 0m0.443s 00:06:38.758 01:17:24 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.758 01:17:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:38.758 ************************************ 00:06:38.758 END TEST dpdk_mem_utility 00:06:38.758 ************************************ 00:06:38.758 01:17:24 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:38.758 01:17:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:38.758 01:17:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.758 01:17:24 -- common/autotest_common.sh@10 -- # set +x 00:06:38.758 ************************************ 00:06:38.758 START TEST event 00:06:38.758 ************************************ 00:06:38.758 01:17:24 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:38.758 * Looking for test storage... 00:06:38.758 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:38.758 01:17:24 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:38.758 01:17:24 event -- common/autotest_common.sh@1691 -- # lcov --version 00:06:38.758 01:17:24 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:38.758 01:17:24 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:38.758 01:17:24 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.758 01:17:24 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.758 01:17:24 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.758 01:17:24 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.758 01:17:24 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.758 01:17:24 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.758 01:17:24 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.758 01:17:24 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.758 01:17:24 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.758 01:17:24 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.758 01:17:24 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.758 01:17:24 event -- scripts/common.sh@344 -- # case "$op" in 00:06:38.758 01:17:24 event -- scripts/common.sh@345 -- # : 1 00:06:38.758 01:17:24 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.758 01:17:24 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:38.758 01:17:24 event -- scripts/common.sh@365 -- # decimal 1 00:06:38.758 01:17:24 event -- scripts/common.sh@353 -- # local d=1 00:06:38.758 01:17:24 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.758 01:17:24 event -- scripts/common.sh@355 -- # echo 1 00:06:38.758 01:17:24 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.758 01:17:24 event -- scripts/common.sh@366 -- # decimal 2 00:06:38.758 01:17:24 event -- scripts/common.sh@353 -- # local d=2 00:06:38.758 01:17:24 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.758 01:17:24 event -- scripts/common.sh@355 -- # echo 2 00:06:38.758 01:17:24 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.758 01:17:24 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.758 01:17:24 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.758 01:17:24 event -- scripts/common.sh@368 -- # return 0 00:06:38.758 01:17:24 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.758 01:17:24 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:38.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.758 --rc genhtml_branch_coverage=1 00:06:38.758 --rc genhtml_function_coverage=1 00:06:38.758 --rc genhtml_legend=1 00:06:38.758 --rc geninfo_all_blocks=1 00:06:38.758 --rc geninfo_unexecuted_blocks=1 00:06:38.758 00:06:38.758 ' 00:06:38.758 01:17:24 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:38.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.758 --rc genhtml_branch_coverage=1 00:06:38.758 --rc genhtml_function_coverage=1 00:06:38.758 --rc genhtml_legend=1 00:06:38.758 --rc geninfo_all_blocks=1 00:06:38.758 --rc geninfo_unexecuted_blocks=1 00:06:38.758 00:06:38.758 ' 00:06:38.758 01:17:24 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:38.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.758 --rc genhtml_branch_coverage=1 00:06:38.758 --rc genhtml_function_coverage=1 00:06:38.758 --rc genhtml_legend=1 00:06:38.758 --rc geninfo_all_blocks=1 00:06:38.758 --rc geninfo_unexecuted_blocks=1 00:06:38.758 00:06:38.758 ' 00:06:38.758 01:17:24 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:38.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.758 --rc genhtml_branch_coverage=1 00:06:38.758 --rc genhtml_function_coverage=1 00:06:38.758 --rc genhtml_legend=1 00:06:38.758 --rc geninfo_all_blocks=1 00:06:38.758 --rc geninfo_unexecuted_blocks=1 00:06:38.758 00:06:38.758 ' 00:06:38.758 01:17:24 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:38.758 01:17:24 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:38.758 01:17:24 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:38.758 01:17:24 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:38.758 01:17:24 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.758 01:17:24 event -- common/autotest_common.sh@10 -- # set +x 00:06:38.758 ************************************ 00:06:38.758 START TEST event_perf 00:06:38.758 ************************************ 00:06:38.758 01:17:24 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:06:39.016 Running I/O for 1 seconds...[2024-10-13 01:17:24.343725] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:06:39.016 [2024-10-13 01:17:24.343839] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1467535 ] 00:06:39.016 [2024-10-13 01:17:24.406757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:39.016 [2024-10-13 01:17:24.460175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.016 [2024-10-13 01:17:24.460243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.016 [2024-10-13 01:17:24.460341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:39.016 [2024-10-13 01:17:24.460344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.949 Running I/O for 1 seconds... 00:06:39.949 lcore 0: 233217 00:06:39.949 lcore 1: 233215 00:06:39.949 lcore 2: 233216 00:06:39.949 lcore 3: 233216 00:06:39.949 done. 00:06:39.949 00:06:39.950 real 0m1.176s 00:06:39.950 user 0m4.100s 00:06:39.950 sys 0m0.071s 00:06:39.950 01:17:25 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.950 01:17:25 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:39.950 ************************************ 00:06:39.950 END TEST event_perf 00:06:39.950 ************************************ 00:06:40.208 01:17:25 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:40.208 01:17:25 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:40.208 01:17:25 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:40.208 01:17:25 event -- common/autotest_common.sh@10 -- # set +x 00:06:40.208 ************************************ 00:06:40.208 START TEST event_reactor 00:06:40.208 ************************************ 00:06:40.208 01:17:25 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:40.208 [2024-10-13 01:17:25.573556] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
00:06:40.208 [2024-10-13 01:17:25.573622] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1467696 ] 00:06:40.208 [2024-10-13 01:17:25.640483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.208 [2024-10-13 01:17:25.689566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.581 test_start 00:06:41.582 oneshot 00:06:41.582 tick 100 00:06:41.582 tick 100 00:06:41.582 tick 250 00:06:41.582 tick 100 00:06:41.582 tick 100 00:06:41.582 tick 100 00:06:41.582 tick 250 00:06:41.582 tick 500 00:06:41.582 tick 100 00:06:41.582 tick 100 00:06:41.582 tick 250 00:06:41.582 tick 100 00:06:41.582 tick 100 00:06:41.582 test_end 00:06:41.582 00:06:41.582 real 0m1.178s 00:06:41.582 user 0m1.099s 00:06:41.582 sys 0m0.072s 00:06:41.582 01:17:26 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.582 01:17:26 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:41.582 ************************************ 00:06:41.582 END TEST event_reactor 00:06:41.582 ************************************ 00:06:41.582 01:17:26 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:41.582 01:17:26 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:41.582 01:17:26 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.582 01:17:26 event -- common/autotest_common.sh@10 -- # set +x 00:06:41.582 ************************************ 00:06:41.582 START TEST event_reactor_perf 00:06:41.582 ************************************ 00:06:41.582 01:17:26 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:41.582 [2024-10-13 01:17:26.795960] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
00:06:41.582 [2024-10-13 01:17:26.796012] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1467854 ] 00:06:41.582 [2024-10-13 01:17:26.856031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.582 [2024-10-13 01:17:26.906238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.515 test_start 00:06:42.515 test_end 00:06:42.515 Performance: 356434 events per second 00:06:42.515 00:06:42.515 real 0m1.169s 00:06:42.515 user 0m1.093s 00:06:42.515 sys 0m0.072s 00:06:42.515 01:17:27 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:42.515 01:17:27 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:42.515 ************************************ 00:06:42.515 END TEST event_reactor_perf 00:06:42.515 ************************************ 00:06:42.515 01:17:27 event -- event/event.sh@49 -- # uname -s 00:06:42.515 01:17:27 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:42.515 01:17:27 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:42.515 01:17:27 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:42.515 01:17:27 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.515 01:17:27 event -- common/autotest_common.sh@10 -- # set +x 00:06:42.515 ************************************ 00:06:42.515 START TEST event_scheduler 00:06:42.515 ************************************ 00:06:42.515 01:17:28 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:42.515 * Looking for test storage... 
00:06:42.515 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:42.515 01:17:28 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:42.515 01:17:28 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:06:42.515 01:17:28 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:42.772 01:17:28 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:42.772 01:17:28 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:42.772 01:17:28 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:42.772 01:17:28 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:42.772 01:17:28 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:42.772 01:17:28 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:42.772 01:17:28 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:42.772 01:17:28 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:42.772 01:17:28 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:42.772 01:17:28 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:42.772 01:17:28 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:42.772 01:17:28 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:42.772 01:17:28 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:42.773 01:17:28 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:42.773 01:17:28 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:42.773 01:17:28 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:42.773 01:17:28 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:42.773 01:17:28 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:42.773 01:17:28 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:42.773 01:17:28 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:42.773 01:17:28 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:42.773 01:17:28 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:42.773 01:17:28 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:42.773 01:17:28 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:42.773 01:17:28 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:42.773 01:17:28 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:42.773 01:17:28 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:42.773 01:17:28 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:42.773 01:17:28 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:42.773 01:17:28 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:42.773 01:17:28 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:42.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.773 --rc genhtml_branch_coverage=1 00:06:42.773 --rc genhtml_function_coverage=1 00:06:42.773 --rc genhtml_legend=1 00:06:42.773 --rc geninfo_all_blocks=1 00:06:42.773 --rc geninfo_unexecuted_blocks=1 00:06:42.773 00:06:42.773 ' 00:06:42.773 01:17:28 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:42.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.773 --rc genhtml_branch_coverage=1 00:06:42.773 --rc genhtml_function_coverage=1 00:06:42.773 --rc genhtml_legend=1 00:06:42.773 --rc geninfo_all_blocks=1 00:06:42.773 --rc geninfo_unexecuted_blocks=1 00:06:42.773 00:06:42.773 ' 00:06:42.773 01:17:28 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:42.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.773 --rc genhtml_branch_coverage=1 00:06:42.773 --rc genhtml_function_coverage=1 00:06:42.773 --rc genhtml_legend=1 00:06:42.773 --rc geninfo_all_blocks=1 00:06:42.773 --rc geninfo_unexecuted_blocks=1 00:06:42.773 00:06:42.773 ' 00:06:42.773 01:17:28 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:42.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.773 --rc genhtml_branch_coverage=1 00:06:42.773 --rc genhtml_function_coverage=1 00:06:42.773 --rc genhtml_legend=1 00:06:42.773 --rc geninfo_all_blocks=1 00:06:42.773 --rc geninfo_unexecuted_blocks=1 00:06:42.773 00:06:42.773 ' 00:06:42.773 01:17:28 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:42.773 01:17:28 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1468157 00:06:42.773 01:17:28 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:42.773 01:17:28 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:42.773 01:17:28 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
1468157 00:06:42.773 01:17:28 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 1468157 ']' 00:06:42.773 01:17:28 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.773 01:17:28 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:42.773 01:17:28 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.773 01:17:28 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:42.773 01:17:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:42.773 [2024-10-13 01:17:28.194101] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:06:42.773 [2024-10-13 01:17:28.194185] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1468157 ] 00:06:42.773 [2024-10-13 01:17:28.251758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:42.773 [2024-10-13 01:17:28.302636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.773 [2024-10-13 01:17:28.302694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.773 [2024-10-13 01:17:28.302760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:42.773 [2024-10-13 01:17:28.302763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.031 01:17:28 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:43.031 01:17:28 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:43.031 01:17:28 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:43.031 01:17:28 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.031 01:17:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:43.031 [2024-10-13 01:17:28.447832] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:43.031 [2024-10-13 01:17:28.447873] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:43.031 [2024-10-13 01:17:28.447891] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:43.031 [2024-10-13 01:17:28.447901] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:43.031 [2024-10-13 01:17:28.447911] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:43.031 01:17:28 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.031 01:17:28 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:43.031 01:17:28 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.031 01:17:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:43.031 [2024-10-13 01:17:28.543768] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:06:43.031 01:17:28 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.031 01:17:28 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:43.031 01:17:28 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:43.031 01:17:28 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.031 01:17:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:43.031 ************************************ 00:06:43.031 START TEST scheduler_create_thread 00:06:43.031 ************************************ 00:06:43.031 01:17:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:43.031 01:17:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:43.031 01:17:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.031 01:17:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.031 2 00:06:43.031 01:17:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.031 01:17:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:43.031 01:17:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.031 01:17:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.031 3 00:06:43.031 01:17:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.031 01:17:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:43.031 01:17:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.031 01:17:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.031 4 00:06:43.031 01:17:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.031 01:17:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:43.031 01:17:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.031 01:17:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.031 5 00:06:43.031 01:17:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.031 01:17:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:43.031 01:17:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.031 01:17:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.289 6 00:06:43.289 01:17:28 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.289 01:17:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:43.289 01:17:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.289 01:17:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.289 7 00:06:43.289 01:17:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.289 01:17:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:43.289 01:17:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.289 01:17:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.289 8 00:06:43.289 01:17:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.289 01:17:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:43.289 01:17:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.290 01:17:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.290 9 00:06:43.290 01:17:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.290 01:17:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:43.290 01:17:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.290 01:17:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.290 10 00:06:43.290 01:17:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.290 01:17:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:43.290 01:17:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.290 01:17:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.290 01:17:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.290 01:17:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:43.290 01:17:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:43.290 01:17:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.290 01:17:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.290 01:17:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.290 01:17:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:43.290 01:17:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.290 01:17:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.290 01:17:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.290 01:17:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:43.290 01:17:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:43.290 01:17:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.290 01:17:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.856 01:17:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.856 00:06:43.856 real 0m0.589s 00:06:43.856 user 0m0.006s 00:06:43.856 sys 0m0.007s 00:06:43.856 01:17:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.856 01:17:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.856 ************************************ 00:06:43.856 END TEST scheduler_create_thread 00:06:43.856 ************************************ 00:06:43.856 01:17:29 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:43.856 01:17:29 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1468157 00:06:43.856 01:17:29 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 1468157 ']' 00:06:43.856 01:17:29 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 1468157 00:06:43.856 01:17:29 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:43.856 01:17:29 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:43.856 01:17:29 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1468157 00:06:43.856 01:17:29 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:43.856 01:17:29 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:43.856 01:17:29 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1468157' 00:06:43.856 killing process with pid 1468157 00:06:43.856 01:17:29 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 1468157 00:06:43.856 01:17:29 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 1468157 00:06:44.114 [2024-10-13 01:17:29.643898] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
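Aside: the killprocess calls traced above all follow the same pid-cleanup pattern. The sketch below is a condensed illustration of that pattern, not the real helper (which lives in test/common/autotest_common.sh and carries additional checks); the sudo guard and reactor process name mirror what the trace shows.

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1            # refuse an empty pid
    kill -0 "$pid" || return 0           # already gone, nothing to reap
    if [ "$(uname)" = Linux ]; then
        # only kill the reactor process directly, never a sudo wrapper
        local comm
        comm=$(ps --no-headers -o comm= "$pid")
        [ "$comm" = sudo ] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                          # collect the exit status
}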
00:06:44.372 00:06:44.372 real 0m1.827s 00:06:44.372 user 0m2.621s 00:06:44.372 sys 0m0.340s 00:06:44.372 01:17:29 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.372 01:17:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:44.372 ************************************ 00:06:44.372 END TEST event_scheduler 00:06:44.372 ************************************ 00:06:44.372 01:17:29 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:44.372 01:17:29 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:44.372 01:17:29 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:44.372 01:17:29 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.372 01:17:29 event -- common/autotest_common.sh@10 -- # set +x 00:06:44.372 ************************************ 00:06:44.372 START TEST app_repeat 00:06:44.372 ************************************ 00:06:44.372 01:17:29 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:44.372 01:17:29 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.372 01:17:29 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.372 01:17:29 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:44.372 01:17:29 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:44.372 01:17:29 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:44.372 01:17:29 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:44.372 01:17:29 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:44.372 01:17:29 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1468355 00:06:44.372 01:17:29 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:44.372 01:17:29 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:44.372 01:17:29 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1468355' 00:06:44.372 Process app_repeat pid: 1468355 00:06:44.372 01:17:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:44.372 01:17:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:44.372 spdk_app_start Round 0 00:06:44.372 01:17:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1468355 /var/tmp/spdk-nbd.sock 00:06:44.372 01:17:29 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1468355 ']' 00:06:44.372 01:17:29 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:44.372 01:17:29 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:44.372 01:17:29 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:44.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:44.372 01:17:29 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:44.372 01:17:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:44.372 [2024-10-13 01:17:29.900974] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
00:06:44.372 [2024-10-13 01:17:29.901040] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1468355 ] 00:06:44.630 [2024-10-13 01:17:29.959372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:44.630 [2024-10-13 01:17:30.010872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.630 [2024-10-13 01:17:30.010877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.630 01:17:30 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:44.630 01:17:30 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:44.630 01:17:30 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:44.889 Malloc0 00:06:45.146 01:17:30 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:45.405 Malloc1 00:06:45.405 01:17:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:45.405 01:17:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.405 01:17:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:45.405 01:17:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:45.405 01:17:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.405 01:17:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:45.405 01:17:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:45.405 01:17:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.405 01:17:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:45.405 01:17:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:45.405 01:17:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.405 01:17:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:45.405 01:17:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:45.405 01:17:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:45.405 01:17:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:45.405 01:17:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:45.663 /dev/nbd0 00:06:45.663 01:17:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:45.663 01:17:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:45.663 01:17:31 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:45.663 01:17:31 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:45.663 01:17:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:45.663 01:17:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:45.663 01:17:31 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 
/proc/partitions 00:06:45.663 01:17:31 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:45.663 01:17:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:45.663 01:17:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:45.663 01:17:31 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:45.663 1+0 records in 00:06:45.663 1+0 records out 00:06:45.663 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000227838 s, 18.0 MB/s 00:06:45.663 01:17:31 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:45.663 01:17:31 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:45.663 01:17:31 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:45.663 01:17:31 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:45.663 01:17:31 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:45.663 01:17:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:45.663 01:17:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:45.663 01:17:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:45.921 /dev/nbd1 00:06:45.921 01:17:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:45.921 01:17:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:45.921 01:17:31 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:45.921 01:17:31 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:45.921 01:17:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:45.921 01:17:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:45.921 01:17:31 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:45.921 01:17:31 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:45.921 01:17:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:45.921 01:17:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:45.921 01:17:31 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:45.921 1+0 records in 00:06:45.921 1+0 records out 00:06:45.921 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000195054 s, 21.0 MB/s 00:06:45.921 01:17:31 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:45.921 01:17:31 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:45.921 01:17:31 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:45.921 01:17:31 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:45.921 01:17:31 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:45.921 01:17:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:45.921 01:17:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:45.921 
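Aside: the nbd_start_disk trace above performs a readiness wait plus a one-block read check per device. The sketch below restates that check in isolation, assuming an nbd device has already been exported by spdk-nbd; the temporary file path and function name are illustrative.

waitfornbd_sketch() {
    local nbd_name=$1 tmp=/tmp/nbdtest i
    # wait up to ~2s for the device to appear in /proc/partitions
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    # prove the device serves I/O by reading one 4 KiB block with O_DIRECT
    dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct
    local size
    size=$(stat -c %s "$tmp")
    rm -f "$tmp"
    [ "$size" != 0 ]
}
# e.g.: waitfornbd_sketch nbd0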
01:17:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:45.921 01:17:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.921 01:17:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:46.179 01:17:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:46.179 { 00:06:46.179 "nbd_device": "/dev/nbd0", 00:06:46.179 "bdev_name": "Malloc0" 00:06:46.179 }, 00:06:46.179 { 00:06:46.179 "nbd_device": "/dev/nbd1", 00:06:46.179 "bdev_name": "Malloc1" 00:06:46.179 } 00:06:46.179 ]' 00:06:46.179 01:17:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:46.179 { 00:06:46.179 "nbd_device": "/dev/nbd0", 00:06:46.179 "bdev_name": "Malloc0" 00:06:46.179 }, 00:06:46.179 { 00:06:46.179 "nbd_device": "/dev/nbd1", 00:06:46.179 "bdev_name": "Malloc1" 00:06:46.179 } 00:06:46.179 ]' 00:06:46.179 01:17:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:46.179 01:17:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:46.179 /dev/nbd1' 00:06:46.179 01:17:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:46.179 /dev/nbd1' 00:06:46.179 01:17:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:46.179 01:17:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:46.179 01:17:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:46.179 01:17:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:46.179 01:17:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:46.179 01:17:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:46.179 01:17:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.179 01:17:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:46.179 01:17:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:46.179 01:17:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:46.179 01:17:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:46.180 01:17:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:46.180 256+0 records in 00:06:46.180 256+0 records out 00:06:46.180 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00543629 s, 193 MB/s 00:06:46.180 01:17:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:46.180 01:17:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:46.438 256+0 records in 00:06:46.438 256+0 records out 00:06:46.438 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0228093 s, 46.0 MB/s 00:06:46.438 01:17:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:46.438 01:17:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:46.438 256+0 records in 00:06:46.438 256+0 records out 00:06:46.438 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.024453 s, 42.9 MB/s 00:06:46.438 01:17:31 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:46.438 01:17:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.438 01:17:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:46.438 01:17:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:46.438 01:17:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:46.438 01:17:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:46.438 01:17:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:46.438 01:17:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:46.438 01:17:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:46.438 01:17:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:46.438 01:17:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:46.438 01:17:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:46.438 01:17:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:46.438 01:17:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.438 01:17:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.438 01:17:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:46.438 01:17:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:46.438 01:17:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:46.438 01:17:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:46.696 01:17:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:46.696 01:17:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:46.696 01:17:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:46.696 01:17:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:46.696 01:17:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:46.696 01:17:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:46.696 01:17:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:46.696 01:17:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:46.696 01:17:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:46.696 01:17:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:46.954 01:17:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:46.954 01:17:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:46.954 01:17:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:46.954 01:17:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:46.954 01:17:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:06:46.954 01:17:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:46.954 01:17:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:46.954 01:17:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:46.954 01:17:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:46.954 01:17:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.954 01:17:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:47.212 01:17:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:47.212 01:17:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:47.212 01:17:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:47.212 01:17:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:47.212 01:17:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:47.212 01:17:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:47.212 01:17:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:47.212 01:17:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:47.212 01:17:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:47.212 01:17:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:47.212 01:17:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:47.212 01:17:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:47.212 01:17:32 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:47.470 01:17:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:47.728 [2024-10-13 01:17:33.194356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:47.728 [2024-10-13 01:17:33.241199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.728 [2024-10-13 01:17:33.241200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.728 [2024-10-13 01:17:33.302544] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:47.728 [2024-10-13 01:17:33.302614] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:51.010 01:17:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:51.010 01:17:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:51.010 spdk_app_start Round 1 00:06:51.010 01:17:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1468355 /var/tmp/spdk-nbd.sock 00:06:51.010 01:17:36 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1468355 ']' 00:06:51.010 01:17:36 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:51.010 01:17:36 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:51.010 01:17:36 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:51.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
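The round traced above follows a fixed pattern: app_repeat comes up on two cores, the test creates two 64 MB malloc bdevs (4096-byte block size) over the app's RPC socket, exports them as /dev/nbd0 and /dev/nbd1, runs a write/verify pass against both, detaches them, and finally sends the app SIGTERM before the next round starts. A minimal sketch of that RPC sequence, using only the socket path and commands visible in the log (error handling and the waitfornbd/verify helpers are left out):

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock

  $rpc_py -s "$sock" bdev_malloc_create 64 4096       # -> Malloc0
  $rpc_py -s "$sock" bdev_malloc_create 64 4096       # -> Malloc1
  $rpc_py -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
  $rpc_py -s "$sock" nbd_start_disk Malloc1 /dev/nbd1
  # ... write a random 1 MiB file onto each device and compare it back ...
  $rpc_py -s "$sock" nbd_stop_disk /dev/nbd0
  $rpc_py -s "$sock" nbd_stop_disk /dev/nbd1
  $rpc_py -s "$sock" spdk_kill_instance SIGTERM       # ends this round; the test sleeps 3s and repeats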
00:06:51.010 01:17:36 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:51.010 01:17:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:51.010 01:17:36 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:51.010 01:17:36 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:51.010 01:17:36 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:51.010 Malloc0 00:06:51.010 01:17:36 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:51.269 Malloc1 00:06:51.269 01:17:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:51.269 01:17:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.269 01:17:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:51.269 01:17:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:51.269 01:17:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.269 01:17:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:51.269 01:17:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:51.269 01:17:36 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.269 01:17:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:51.269 01:17:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:51.269 01:17:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.269 01:17:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:51.269 01:17:36 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:51.269 01:17:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:51.269 01:17:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.269 01:17:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:51.835 /dev/nbd0 00:06:51.835 01:17:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:51.835 01:17:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:51.835 01:17:37 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:51.835 01:17:37 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:51.835 01:17:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:51.835 01:17:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:51.835 01:17:37 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:51.835 01:17:37 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:51.835 01:17:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:51.835 01:17:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:51.835 01:17:37 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:51.835 1+0 records in 00:06:51.835 1+0 records out 00:06:51.835 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253349 s, 16.2 MB/s 00:06:51.835 01:17:37 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:51.835 01:17:37 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:51.835 01:17:37 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:51.835 01:17:37 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:51.835 01:17:37 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:51.835 01:17:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.835 01:17:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.835 01:17:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:52.093 /dev/nbd1 00:06:52.093 01:17:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:52.093 01:17:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:52.093 01:17:37 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:52.093 01:17:37 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:52.093 01:17:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:52.093 01:17:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:52.093 01:17:37 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:52.093 01:17:37 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:52.093 01:17:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:52.093 01:17:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:52.093 01:17:37 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:52.093 1+0 records in 00:06:52.093 1+0 records out 00:06:52.093 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000165892 s, 24.7 MB/s 00:06:52.093 01:17:37 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:52.093 01:17:37 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:52.093 01:17:37 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:52.093 01:17:37 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:52.093 01:17:37 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:52.093 01:17:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:52.093 01:17:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:52.093 01:17:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:52.093 01:17:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.093 01:17:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:52.352 01:17:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:52.352 { 00:06:52.352 "nbd_device": "/dev/nbd0", 00:06:52.352 "bdev_name": "Malloc0" 00:06:52.352 }, 00:06:52.352 { 00:06:52.352 "nbd_device": "/dev/nbd1", 00:06:52.352 "bdev_name": "Malloc1" 00:06:52.352 } 00:06:52.352 ]' 00:06:52.352 01:17:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:52.352 { 00:06:52.352 "nbd_device": "/dev/nbd0", 00:06:52.352 "bdev_name": "Malloc0" 00:06:52.352 }, 00:06:52.352 { 00:06:52.352 "nbd_device": "/dev/nbd1", 00:06:52.352 "bdev_name": "Malloc1" 00:06:52.352 } 00:06:52.352 ]' 00:06:52.352 01:17:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:52.352 01:17:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:52.352 /dev/nbd1' 00:06:52.352 01:17:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:52.352 /dev/nbd1' 00:06:52.352 01:17:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:52.352 01:17:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:52.352 01:17:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:52.352 01:17:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:52.352 01:17:37 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:52.352 01:17:37 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:52.352 01:17:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.352 01:17:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:52.352 01:17:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:52.352 01:17:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:52.352 01:17:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:52.352 01:17:37 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:52.352 256+0 records in 00:06:52.352 256+0 records out 00:06:52.352 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00526541 s, 199 MB/s 00:06:52.352 01:17:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:52.352 01:17:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:52.352 256+0 records in 00:06:52.352 256+0 records out 00:06:52.352 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023352 s, 44.9 MB/s 00:06:52.352 01:17:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:52.352 01:17:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:52.352 256+0 records in 00:06:52.352 256+0 records out 00:06:52.352 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241166 s, 43.5 MB/s 00:06:52.352 01:17:37 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:52.352 01:17:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.352 01:17:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:52.352 01:17:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:52.352 01:17:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:52.352 01:17:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:52.352 01:17:37 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:52.352 01:17:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:52.352 01:17:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:52.352 01:17:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:52.352 01:17:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:52.352 01:17:37 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:52.352 01:17:37 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:52.352 01:17:37 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.352 01:17:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.352 01:17:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:52.352 01:17:37 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:52.352 01:17:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.352 01:17:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:52.611 01:17:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:52.611 01:17:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:52.611 01:17:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:52.611 01:17:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.611 01:17:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.611 01:17:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:52.611 01:17:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:52.611 01:17:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.611 01:17:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.611 01:17:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:53.177 01:17:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:53.177 01:17:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:53.177 01:17:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:53.177 01:17:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.177 01:17:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.177 01:17:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:53.177 01:17:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:53.177 01:17:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.177 01:17:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:53.177 01:17:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.177 01:17:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:53.177 01:17:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:53.177 01:17:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:53.177 01:17:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:53.435 01:17:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:53.435 01:17:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:53.435 01:17:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:53.435 01:17:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:53.435 01:17:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:53.435 01:17:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:53.435 01:17:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:53.435 01:17:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:53.435 01:17:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:53.435 01:17:38 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:53.693 01:17:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:53.951 [2024-10-13 01:17:39.272842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:53.951 [2024-10-13 01:17:39.320303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.951 [2024-10-13 01:17:39.320303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.951 [2024-10-13 01:17:39.379395] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:53.951 [2024-10-13 01:17:39.379530] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:57.232 01:17:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:57.232 01:17:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:57.232 spdk_app_start Round 2 00:06:57.232 01:17:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1468355 /var/tmp/spdk-nbd.sock 00:06:57.232 01:17:42 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1468355 ']' 00:06:57.232 01:17:42 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:57.232 01:17:42 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:57.232 01:17:42 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:57.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
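The verify step inside each round is plain dd plus cmp: a 1 MiB file of random data is written once, pushed onto each NBD device with direct I/O, and then compared back against both devices before being removed. Reconstructed from the nbd_dd_data_verify trace above (temp path shortened; the real helper iterates over the nbd_list array):

  tmp_file=/tmp/nbdrandtest          # the test uses .../spdk/test/event/nbdrandtest
  nbd_list=(/dev/nbd0 /dev/nbd1)

  # write phase: 256 x 4 KiB = 1 MiB of random data, copied to every device
  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
  for dev in "${nbd_list[@]}"; do
      dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
  done

  # verify phase: the first 1 MiB of each device must match the source file byte for byte
  for dev in "${nbd_list[@]}"; do
      cmp -b -n 1M "$tmp_file" "$dev"
  done
  rm "$tmp_file"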
00:06:57.232 01:17:42 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:57.232 01:17:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:57.232 01:17:42 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:57.232 01:17:42 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:57.232 01:17:42 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:57.232 Malloc0 00:06:57.232 01:17:42 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:57.490 Malloc1 00:06:57.490 01:17:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:57.490 01:17:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.490 01:17:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:57.490 01:17:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:57.490 01:17:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:57.490 01:17:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:57.490 01:17:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:57.490 01:17:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.491 01:17:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:57.491 01:17:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:57.491 01:17:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:57.491 01:17:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:57.491 01:17:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:57.491 01:17:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:57.491 01:17:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:57.491 01:17:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:57.749 /dev/nbd0 00:06:57.749 01:17:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:57.749 01:17:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:57.749 01:17:43 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:57.749 01:17:43 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:57.749 01:17:43 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:57.749 01:17:43 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:57.749 01:17:43 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:57.749 01:17:43 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:57.749 01:17:43 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:57.749 01:17:43 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:57.749 01:17:43 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:57.749 1+0 records in 00:06:57.749 1+0 records out 00:06:57.749 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000174547 s, 23.5 MB/s 00:06:57.749 01:17:43 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:57.749 01:17:43 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:57.749 01:17:43 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:57.749 01:17:43 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:57.749 01:17:43 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:57.749 01:17:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:57.749 01:17:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:57.749 01:17:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:58.007 /dev/nbd1 00:06:58.007 01:17:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:58.007 01:17:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:58.007 01:17:43 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:58.007 01:17:43 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:58.007 01:17:43 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:58.007 01:17:43 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:58.007 01:17:43 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:58.007 01:17:43 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:58.007 01:17:43 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:58.007 01:17:43 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:58.007 01:17:43 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:58.007 1+0 records in 00:06:58.007 1+0 records out 00:06:58.007 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000218428 s, 18.8 MB/s 00:06:58.007 01:17:43 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:58.007 01:17:43 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:58.007 01:17:43 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:58.007 01:17:43 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:58.007 01:17:43 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:58.007 01:17:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:58.007 01:17:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:58.007 01:17:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:58.007 01:17:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.007 01:17:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:58.265 01:17:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:58.265 { 00:06:58.265 "nbd_device": "/dev/nbd0", 00:06:58.265 "bdev_name": "Malloc0" 00:06:58.265 }, 00:06:58.265 { 00:06:58.265 "nbd_device": "/dev/nbd1", 00:06:58.265 "bdev_name": "Malloc1" 00:06:58.265 } 00:06:58.265 ]' 00:06:58.265 01:17:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:58.265 { 00:06:58.265 "nbd_device": "/dev/nbd0", 00:06:58.265 "bdev_name": "Malloc0" 00:06:58.265 }, 00:06:58.265 { 00:06:58.265 "nbd_device": "/dev/nbd1", 00:06:58.265 "bdev_name": "Malloc1" 00:06:58.265 } 00:06:58.265 ]' 00:06:58.265 01:17:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:58.523 01:17:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:58.523 /dev/nbd1' 00:06:58.523 01:17:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:58.523 /dev/nbd1' 00:06:58.523 01:17:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:58.523 01:17:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:58.523 01:17:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:58.523 01:17:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:58.523 01:17:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:58.523 01:17:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:58.523 01:17:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.523 01:17:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:58.523 01:17:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:58.523 01:17:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:58.523 01:17:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:58.523 01:17:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:58.523 256+0 records in 00:06:58.523 256+0 records out 00:06:58.523 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00526903 s, 199 MB/s 00:06:58.523 01:17:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:58.523 01:17:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:58.523 256+0 records in 00:06:58.523 256+0 records out 00:06:58.523 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0206565 s, 50.8 MB/s 00:06:58.523 01:17:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:58.523 01:17:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:58.523 256+0 records in 00:06:58.523 256+0 records out 00:06:58.523 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.025301 s, 41.4 MB/s 00:06:58.523 01:17:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:58.523 01:17:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.523 01:17:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:58.523 01:17:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:58.523 01:17:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:58.523 01:17:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:58.523 01:17:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:58.523 01:17:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:58.523 01:17:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:58.523 01:17:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:58.523 01:17:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:58.523 01:17:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:58.523 01:17:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:58.523 01:17:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.523 01:17:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.523 01:17:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:58.523 01:17:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:58.523 01:17:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.523 01:17:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:58.781 01:17:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:58.781 01:17:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:58.781 01:17:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:58.781 01:17:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.781 01:17:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.781 01:17:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:58.781 01:17:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:58.781 01:17:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.781 01:17:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.781 01:17:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:59.043 01:17:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:59.043 01:17:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:59.043 01:17:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:59.043 01:17:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:59.043 01:17:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:59.043 01:17:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:59.043 01:17:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:59.043 01:17:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:59.043 01:17:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:59.043 01:17:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.043 01:17:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:59.332 01:17:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:59.332 01:17:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:59.332 01:17:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:59.332 01:17:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:59.332 01:17:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:59.332 01:17:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:59.332 01:17:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:59.332 01:17:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:59.332 01:17:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:59.332 01:17:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:59.332 01:17:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:59.332 01:17:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:59.332 01:17:44 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:59.591 01:17:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:59.849 [2024-10-13 01:17:45.332648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:59.849 [2024-10-13 01:17:45.379881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.849 [2024-10-13 01:17:45.379887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.107 [2024-10-13 01:17:45.441130] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:00.107 [2024-10-13 01:17:45.441202] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:02.632 01:17:48 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1468355 /var/tmp/spdk-nbd.sock 00:07:02.632 01:17:48 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1468355 ']' 00:07:02.632 01:17:48 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:02.633 01:17:48 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:02.633 01:17:48 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:02.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
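The count=2 / count=0 assertions bracketing each round come from nbd_get_count, which simply lists the exported devices over RPC and counts the /dev/nbd entries in the returned JSON; the trailing true in the trace keeps the pipeline from failing when grep -c finds nothing. Roughly:

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  nbd_get_count() {
      local rpc_server=$1
      local names
      names=$("$rpc_py" -s "$rpc_server" nbd_get_disks | jq -r '.[] | .nbd_device')
      echo "$names" | grep -c /dev/nbd || true    # prints 0 (and exits 1) when the list is empty
  }

  # e.g. nbd_get_count /var/tmp/spdk-nbd.sock  -> 2 while both disks are attached, 0 after teardown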
00:07:02.633 01:17:48 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:02.633 01:17:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:02.890 01:17:48 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:02.890 01:17:48 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:02.890 01:17:48 event.app_repeat -- event/event.sh@39 -- # killprocess 1468355 00:07:02.890 01:17:48 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 1468355 ']' 00:07:02.890 01:17:48 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 1468355 00:07:02.890 01:17:48 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:07:02.890 01:17:48 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:02.890 01:17:48 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1468355 00:07:02.890 01:17:48 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:03.148 01:17:48 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:03.148 01:17:48 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1468355' 00:07:03.148 killing process with pid 1468355 00:07:03.148 01:17:48 event.app_repeat -- common/autotest_common.sh@969 -- # kill 1468355 00:07:03.148 01:17:48 event.app_repeat -- common/autotest_common.sh@974 -- # wait 1468355 00:07:03.148 spdk_app_start is called in Round 0. 00:07:03.148 Shutdown signal received, stop current app iteration 00:07:03.148 Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 reinitialization... 00:07:03.148 spdk_app_start is called in Round 1. 00:07:03.148 Shutdown signal received, stop current app iteration 00:07:03.148 Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 reinitialization... 00:07:03.148 spdk_app_start is called in Round 2. 00:07:03.148 Shutdown signal received, stop current app iteration 00:07:03.148 Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 reinitialization... 00:07:03.148 spdk_app_start is called in Round 3. 
00:07:03.148 Shutdown signal received, stop current app iteration 00:07:03.148 01:17:48 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:03.148 01:17:48 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:03.148 00:07:03.148 real 0m18.752s 00:07:03.148 user 0m41.597s 00:07:03.148 sys 0m3.216s 00:07:03.148 01:17:48 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.148 01:17:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:03.148 ************************************ 00:07:03.148 END TEST app_repeat 00:07:03.148 ************************************ 00:07:03.148 01:17:48 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:03.148 01:17:48 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:03.148 01:17:48 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:03.148 01:17:48 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.148 01:17:48 event -- common/autotest_common.sh@10 -- # set +x 00:07:03.148 ************************************ 00:07:03.148 START TEST cpu_locks 00:07:03.148 ************************************ 00:07:03.148 01:17:48 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:03.407 * Looking for test storage... 00:07:03.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:03.407 01:17:48 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:03.407 01:17:48 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:07:03.407 01:17:48 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:03.407 01:17:48 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:03.407 01:17:48 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:03.407 01:17:48 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:03.407 01:17:48 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:03.407 01:17:48 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:03.407 01:17:48 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:03.407 01:17:48 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:03.407 01:17:48 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:03.407 01:17:48 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:03.407 01:17:48 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:03.407 01:17:48 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:03.407 01:17:48 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:03.407 01:17:48 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:03.407 01:17:48 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:03.407 01:17:48 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:03.407 01:17:48 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:03.407 01:17:48 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:03.407 01:17:48 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:03.407 01:17:48 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:03.407 01:17:48 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:03.407 01:17:48 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:03.407 01:17:48 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:03.407 01:17:48 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:03.407 01:17:48 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:03.407 01:17:48 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:03.407 01:17:48 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:03.407 01:17:48 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:03.407 01:17:48 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:03.407 01:17:48 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:03.407 01:17:48 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:03.407 01:17:48 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:03.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.407 --rc genhtml_branch_coverage=1 00:07:03.407 --rc genhtml_function_coverage=1 00:07:03.407 --rc genhtml_legend=1 00:07:03.407 --rc geninfo_all_blocks=1 00:07:03.407 --rc geninfo_unexecuted_blocks=1 00:07:03.407 00:07:03.407 ' 00:07:03.407 01:17:48 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:03.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.407 --rc genhtml_branch_coverage=1 00:07:03.407 --rc genhtml_function_coverage=1 00:07:03.407 --rc genhtml_legend=1 00:07:03.407 --rc geninfo_all_blocks=1 00:07:03.407 --rc geninfo_unexecuted_blocks=1 00:07:03.407 00:07:03.407 ' 00:07:03.407 01:17:48 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:03.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.407 --rc genhtml_branch_coverage=1 00:07:03.407 --rc genhtml_function_coverage=1 00:07:03.407 --rc genhtml_legend=1 00:07:03.407 --rc geninfo_all_blocks=1 00:07:03.407 --rc geninfo_unexecuted_blocks=1 00:07:03.407 00:07:03.407 ' 00:07:03.407 01:17:48 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:03.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.407 --rc genhtml_branch_coverage=1 00:07:03.407 --rc genhtml_function_coverage=1 00:07:03.408 --rc genhtml_legend=1 00:07:03.408 --rc geninfo_all_blocks=1 00:07:03.408 --rc geninfo_unexecuted_blocks=1 00:07:03.408 00:07:03.408 ' 00:07:03.408 01:17:48 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:03.408 01:17:48 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:03.408 01:17:48 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:03.408 01:17:48 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:03.408 01:17:48 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:03.408 01:17:48 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.408 01:17:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:03.408 ************************************ 
00:07:03.408 START TEST default_locks 00:07:03.408 ************************************ 00:07:03.408 01:17:48 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:07:03.408 01:17:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1470842 00:07:03.408 01:17:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:03.408 01:17:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1470842 00:07:03.408 01:17:48 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1470842 ']' 00:07:03.408 01:17:48 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.408 01:17:48 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:03.408 01:17:48 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.408 01:17:48 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:03.408 01:17:48 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:03.408 [2024-10-13 01:17:48.905880] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:07:03.408 [2024-10-13 01:17:48.905962] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1470842 ] 00:07:03.408 [2024-10-13 01:17:48.964156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.665 [2024-10-13 01:17:49.011149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.923 01:17:49 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:03.923 01:17:49 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:07:03.923 01:17:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1470842 00:07:03.923 01:17:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1470842 00:07:03.923 01:17:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:04.180 lslocks: write error 00:07:04.180 01:17:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1470842 00:07:04.180 01:17:49 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 1470842 ']' 00:07:04.180 01:17:49 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 1470842 00:07:04.180 01:17:49 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:07:04.180 01:17:49 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:04.180 01:17:49 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1470842 00:07:04.181 01:17:49 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:04.181 01:17:49 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:04.181 01:17:49 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with 
pid 1470842' 00:07:04.181 killing process with pid 1470842 00:07:04.181 01:17:49 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 1470842 00:07:04.181 01:17:49 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 1470842 00:07:04.747 01:17:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1470842 00:07:04.747 01:17:50 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:07:04.747 01:17:50 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1470842 00:07:04.747 01:17:50 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:04.747 01:17:50 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.747 01:17:50 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:04.747 01:17:50 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.747 01:17:50 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 1470842 00:07:04.747 01:17:50 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1470842 ']' 00:07:04.747 01:17:50 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.747 01:17:50 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:04.747 01:17:50 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
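The default_locks steps above show the mechanism under test: spdk_tgt started with -m 0x1 claims core 0, and the harness's locks_exist check is simply lslocks output filtered for the spdk_cpu_lock prefix. A minimal manual reproduction of that check, assuming only the binary path and lock-file naming visible in this log:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 &   # claims core 0
    tgt_pid=$!
    sleep 2                                        # give the target time to start and take the lock
    lslocks -p "$tgt_pid" | grep spdk_cpu_lock     # shows /var/tmp/spdk_cpu_lock_000 for core 0
    kill "$tgt_pid"; wait "$tgt_pid"               # once the process exits, the lock is released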
00:07:04.747 01:17:50 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:04.747 01:17:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:04.747 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1470842) - No such process 00:07:04.747 ERROR: process (pid: 1470842) is no longer running 00:07:04.747 01:17:50 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:04.747 01:17:50 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:07:04.747 01:17:50 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:07:04.747 01:17:50 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:04.747 01:17:50 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:04.747 01:17:50 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:04.747 01:17:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:04.747 01:17:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:04.747 01:17:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:04.747 01:17:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:04.747 00:07:04.747 real 0m1.203s 00:07:04.747 user 0m1.159s 00:07:04.747 sys 0m0.537s 00:07:04.747 01:17:50 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.747 01:17:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:04.747 ************************************ 00:07:04.747 END TEST default_locks 00:07:04.747 ************************************ 00:07:04.747 01:17:50 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:04.747 01:17:50 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:04.747 01:17:50 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.747 01:17:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:04.747 ************************************ 00:07:04.747 START TEST default_locks_via_rpc 00:07:04.747 ************************************ 00:07:04.747 01:17:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:07:04.747 01:17:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1471012 00:07:04.747 01:17:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:04.747 01:17:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1471012 00:07:04.747 01:17:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1471012 ']' 00:07:04.747 01:17:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.747 01:17:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:04.747 01:17:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:04.747 01:17:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:04.747 01:17:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.747 [2024-10-13 01:17:50.162677] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:07:04.747 [2024-10-13 01:17:50.162776] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1471012 ] 00:07:04.747 [2024-10-13 01:17:50.226165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.747 [2024-10-13 01:17:50.275144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.006 01:17:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.006 01:17:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:05.006 01:17:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:05.006 01:17:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.006 01:17:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.006 01:17:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.006 01:17:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:05.006 01:17:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:05.006 01:17:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:05.006 01:17:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:05.006 01:17:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:05.006 01:17:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.006 01:17:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.006 01:17:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.006 01:17:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1471012 00:07:05.006 01:17:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1471012 00:07:05.006 01:17:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:05.263 01:17:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1471012 00:07:05.263 01:17:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 1471012 ']' 00:07:05.263 01:17:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 1471012 00:07:05.263 01:17:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:07:05.263 01:17:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:05.263 01:17:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1471012 00:07:05.263 01:17:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:05.263 
01:17:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:05.263 01:17:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1471012' 00:07:05.263 killing process with pid 1471012 00:07:05.263 01:17:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 1471012 00:07:05.263 01:17:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 1471012 00:07:05.829 00:07:05.829 real 0m1.071s 00:07:05.829 user 0m1.017s 00:07:05.829 sys 0m0.524s 00:07:05.829 01:17:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.829 01:17:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.829 ************************************ 00:07:05.829 END TEST default_locks_via_rpc 00:07:05.829 ************************************ 00:07:05.829 01:17:51 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:05.829 01:17:51 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:05.829 01:17:51 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.829 01:17:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:05.829 ************************************ 00:07:05.829 START TEST non_locking_app_on_locked_coremask 00:07:05.829 ************************************ 00:07:05.829 01:17:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:07:05.829 01:17:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1471172 00:07:05.829 01:17:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:05.829 01:17:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1471172 /var/tmp/spdk.sock 00:07:05.829 01:17:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1471172 ']' 00:07:05.829 01:17:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.829 01:17:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:05.829 01:17:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.829 01:17:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:05.829 01:17:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:05.829 [2024-10-13 01:17:51.283412] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
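The default_locks_via_rpc sequence just above toggles the same locks at runtime rather than at startup: framework_disable_cpumask_locks releases the running target's core locks and framework_enable_cpumask_locks re-claims them. A hedged sketch of the equivalent manual calls, assuming the stock scripts/rpc.py wrapper is used to issue the RPC methods named in the log:

    ./scripts/rpc.py framework_disable_cpumask_locks   # release core locks on the /var/tmp/spdk.sock target
    lslocks -p "$tgt_pid" | grep spdk_cpu_lock || echo "no core locks held"
    ./scripts/rpc.py framework_enable_cpumask_locks    # re-claim; fails if another process took the cores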
00:07:05.829 [2024-10-13 01:17:51.283531] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1471172 ] 00:07:05.829 [2024-10-13 01:17:51.340854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.829 [2024-10-13 01:17:51.389430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.087 01:17:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:06.345 01:17:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:06.345 01:17:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1471175 00:07:06.345 01:17:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:06.345 01:17:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1471175 /var/tmp/spdk2.sock 00:07:06.345 01:17:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1471175 ']' 00:07:06.345 01:17:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:06.345 01:17:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:06.345 01:17:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:06.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:06.345 01:17:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:06.345 01:17:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.345 [2024-10-13 01:17:51.722346] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:07:06.345 [2024-10-13 01:17:51.722439] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1471175 ] 00:07:06.345 [2024-10-13 01:17:51.816125] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
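The two launches above are the heart of non_locking_app_on_locked_coremask: the first target claims core 0 normally, while the second reuses the same cpumask but opts out of locking and listens on a second RPC socket, so both run side by side. Condensed to just the flags that appear in the log:

    SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    "$SPDK_BIN" -m 0x1 &                                                  # takes /var/tmp/spdk_cpu_lock_000
    "$SPDK_BIN" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # same core, no lock taken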
00:07:06.345 [2024-10-13 01:17:51.816159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.345 [2024-10-13 01:17:51.913009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.913 01:17:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:06.913 01:17:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:06.913 01:17:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1471172 00:07:06.913 01:17:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1471172 00:07:06.913 01:17:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:07.170 lslocks: write error 00:07:07.170 01:17:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1471172 00:07:07.170 01:17:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1471172 ']' 00:07:07.170 01:17:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1471172 00:07:07.170 01:17:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:07.170 01:17:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:07.170 01:17:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1471172 00:07:07.428 01:17:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:07.428 01:17:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:07.428 01:17:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1471172' 00:07:07.428 killing process with pid 1471172 00:07:07.428 01:17:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1471172 00:07:07.428 01:17:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1471172 00:07:07.994 01:17:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1471175 00:07:07.994 01:17:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1471175 ']' 00:07:07.994 01:17:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1471175 00:07:07.994 01:17:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:07.994 01:17:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:07.994 01:17:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1471175 00:07:08.251 01:17:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:08.252 01:17:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:08.252 01:17:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1471175' 00:07:08.252 
killing process with pid 1471175 00:07:08.252 01:17:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1471175 00:07:08.252 01:17:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1471175 00:07:08.509 00:07:08.509 real 0m2.761s 00:07:08.509 user 0m2.728s 00:07:08.509 sys 0m1.002s 00:07:08.509 01:17:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:08.510 01:17:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:08.510 ************************************ 00:07:08.510 END TEST non_locking_app_on_locked_coremask 00:07:08.510 ************************************ 00:07:08.510 01:17:54 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:08.510 01:17:54 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:08.510 01:17:54 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.510 01:17:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:08.510 ************************************ 00:07:08.510 START TEST locking_app_on_unlocked_coremask 00:07:08.510 ************************************ 00:07:08.510 01:17:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:07:08.510 01:17:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1471576 00:07:08.510 01:17:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:08.510 01:17:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1471576 /var/tmp/spdk.sock 00:07:08.510 01:17:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1471576 ']' 00:07:08.510 01:17:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.510 01:17:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:08.510 01:17:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.510 01:17:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:08.510 01:17:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:08.768 [2024-10-13 01:17:54.096640] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:07:08.768 [2024-10-13 01:17:54.096738] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1471576 ] 00:07:08.768 [2024-10-13 01:17:54.152904] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:08.768 [2024-10-13 01:17:54.152947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.768 [2024-10-13 01:17:54.200225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.026 01:17:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:09.026 01:17:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:09.026 01:17:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1471604 00:07:09.026 01:17:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:09.026 01:17:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1471604 /var/tmp/spdk2.sock 00:07:09.026 01:17:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1471604 ']' 00:07:09.026 01:17:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:09.026 01:17:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:09.026 01:17:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:09.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:09.026 01:17:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:09.026 01:17:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:09.026 [2024-10-13 01:17:54.522917] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
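locking_app_on_unlocked_coremask, starting here, inverts that pairing: the first target runs with --disable-cpumask-locks so core 0 stays unclaimed, and the second, lock-enabled target on /var/tmp/spdk2.sock is the one that ends up holding the lock (its pid is the one checked by locks_exist below). Roughly, with SPDK_BIN as in the previous sketch:

    "$SPDK_BIN" -m 0x1 --disable-cpumask-locks &    # leaves core 0 unclaimed
    "$SPDK_BIN" -m 0x1 -r /var/tmp/spdk2.sock &     # claims /var/tmp/spdk_cpu_lock_000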
00:07:09.026 [2024-10-13 01:17:54.523010] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1471604 ] 00:07:09.284 [2024-10-13 01:17:54.613537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.284 [2024-10-13 01:17:54.710269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.217 01:17:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:10.217 01:17:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:10.217 01:17:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1471604 00:07:10.217 01:17:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1471604 00:07:10.217 01:17:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:10.476 lslocks: write error 00:07:10.476 01:17:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1471576 00:07:10.476 01:17:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1471576 ']' 00:07:10.476 01:17:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1471576 00:07:10.476 01:17:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:10.476 01:17:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:10.476 01:17:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1471576 00:07:10.476 01:17:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:10.476 01:17:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:10.476 01:17:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1471576' 00:07:10.476 killing process with pid 1471576 00:07:10.476 01:17:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1471576 00:07:10.476 01:17:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1471576 00:07:11.409 01:17:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1471604 00:07:11.409 01:17:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1471604 ']' 00:07:11.409 01:17:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1471604 00:07:11.409 01:17:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:11.409 01:17:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:11.409 01:17:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1471604 00:07:11.409 01:17:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:11.409 01:17:56 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:11.409 01:17:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1471604' 00:07:11.409 killing process with pid 1471604 00:07:11.409 01:17:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1471604 00:07:11.409 01:17:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1471604 00:07:11.667 00:07:11.667 real 0m3.066s 00:07:11.667 user 0m3.271s 00:07:11.667 sys 0m1.020s 00:07:11.667 01:17:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.667 01:17:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:11.667 ************************************ 00:07:11.667 END TEST locking_app_on_unlocked_coremask 00:07:11.667 ************************************ 00:07:11.667 01:17:57 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:11.667 01:17:57 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:11.667 01:17:57 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.667 01:17:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.667 ************************************ 00:07:11.667 START TEST locking_app_on_locked_coremask 00:07:11.667 ************************************ 00:07:11.667 01:17:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:07:11.667 01:17:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1471913 00:07:11.667 01:17:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:11.667 01:17:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1471913 /var/tmp/spdk.sock 00:07:11.667 01:17:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1471913 ']' 00:07:11.667 01:17:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.667 01:17:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:11.667 01:17:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.667 01:17:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:11.667 01:17:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:11.667 [2024-10-13 01:17:57.213015] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
00:07:11.667 [2024-10-13 01:17:57.213105] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1471913 ] 00:07:11.925 [2024-10-13 01:17:57.274465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.925 [2024-10-13 01:17:57.321602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.183 01:17:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:12.183 01:17:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:12.183 01:17:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1472036 00:07:12.183 01:17:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:12.183 01:17:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1472036 /var/tmp/spdk2.sock 00:07:12.183 01:17:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:12.183 01:17:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1472036 /var/tmp/spdk2.sock 00:07:12.183 01:17:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:12.183 01:17:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:12.183 01:17:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:12.183 01:17:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:12.183 01:17:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1472036 /var/tmp/spdk2.sock 00:07:12.183 01:17:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1472036 ']' 00:07:12.183 01:17:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:12.183 01:17:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:12.183 01:17:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:12.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:12.183 01:17:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:12.183 01:17:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:12.183 [2024-10-13 01:17:57.655375] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
00:07:12.183 [2024-10-13 01:17:57.655462] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1472036 ] 00:07:12.183 [2024-10-13 01:17:57.748577] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1471913 has claimed it. 00:07:12.183 [2024-10-13 01:17:57.748643] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:13.128 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1472036) - No such process 00:07:13.128 ERROR: process (pid: 1472036) is no longer running 00:07:13.128 01:17:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:13.128 01:17:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:13.128 01:17:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:13.128 01:17:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:13.128 01:17:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:13.128 01:17:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:13.128 01:17:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1471913 00:07:13.128 01:17:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1471913 00:07:13.128 01:17:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:13.128 lslocks: write error 00:07:13.128 01:17:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1471913 00:07:13.128 01:17:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1471913 ']' 00:07:13.128 01:17:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1471913 00:07:13.128 01:17:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:13.128 01:17:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:13.128 01:17:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1471913 00:07:13.128 01:17:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:13.128 01:17:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:13.128 01:17:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1471913' 00:07:13.128 killing process with pid 1471913 00:07:13.128 01:17:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1471913 00:07:13.128 01:17:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1471913 00:07:13.693 00:07:13.693 real 0m1.893s 00:07:13.694 user 0m2.068s 00:07:13.694 sys 0m0.635s 00:07:13.694 01:17:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 
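That is the failure path locking_app_on_locked_coremask asserts: with core 0 already locked by pid 1471913, the second lock-enabled instance logs 'Cannot create lock on core 0', exits with 'Unable to acquire lock on assigned core mask', and the NOT waitforlisten check passes precisely because that second process never comes up. The harness effectively verifies something like:

    "$SPDK_BIN" -m 0x1 &                                      # first instance claims core 0
    if ! "$SPDK_BIN" -m 0x1 -r /var/tmp/spdk2.sock; then
        echo "second instance refused to start, as expected"  # claim_cpu_cores error, process exits
    fi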
00:07:13.694 01:17:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.694 ************************************ 00:07:13.694 END TEST locking_app_on_locked_coremask 00:07:13.694 ************************************ 00:07:13.694 01:17:59 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:13.694 01:17:59 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:13.694 01:17:59 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.694 01:17:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:13.694 ************************************ 00:07:13.694 START TEST locking_overlapped_coremask 00:07:13.694 ************************************ 00:07:13.694 01:17:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:07:13.694 01:17:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1472210 00:07:13.694 01:17:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:13.694 01:17:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1472210 /var/tmp/spdk.sock 00:07:13.694 01:17:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1472210 ']' 00:07:13.694 01:17:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.694 01:17:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:13.694 01:17:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.694 01:17:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:13.694 01:17:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.694 [2024-10-13 01:17:59.158062] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
00:07:13.694 [2024-10-13 01:17:59.158168] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1472210 ] 00:07:13.694 [2024-10-13 01:17:59.216340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:13.694 [2024-10-13 01:17:59.267738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.694 [2024-10-13 01:17:59.267796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:13.694 [2024-10-13 01:17:59.267799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.260 01:17:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:14.260 01:17:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:14.260 01:17:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1472216 00:07:14.260 01:17:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1472216 /var/tmp/spdk2.sock 00:07:14.260 01:17:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:14.260 01:17:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:14.260 01:17:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1472216 /var/tmp/spdk2.sock 00:07:14.260 01:17:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:14.260 01:17:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:14.260 01:17:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:14.260 01:17:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:14.260 01:17:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1472216 /var/tmp/spdk2.sock 00:07:14.260 01:17:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1472216 ']' 00:07:14.260 01:17:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:14.260 01:17:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:14.260 01:17:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:14.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:14.260 01:17:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:14.260 01:17:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.260 [2024-10-13 01:17:59.596528] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
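For the overlapped-coremask case the masks themselves carry the story: the first target uses -m 0x7 (binary 111, cores 0-2) and the second -m 0x1c (binary 11100, cores 2-4), so they collide only on core 2, which is exactly the core named in the claim error that follows. In terms of the lock files later verified by check_remaining_locks:

    # mask 0x7  -> cores 0,1,2 -> /var/tmp/spdk_cpu_lock_000, _001, _002 held by the first target
    # mask 0x1c -> cores 2,3,4 -> core 2 is already locked, so the second target cannot start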
00:07:14.260 [2024-10-13 01:17:59.596610] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1472216 ] 00:07:14.260 [2024-10-13 01:17:59.689590] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1472210 has claimed it. 00:07:14.260 [2024-10-13 01:17:59.689644] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:14.826 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1472216) - No such process 00:07:14.826 ERROR: process (pid: 1472216) is no longer running 00:07:14.826 01:18:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:14.826 01:18:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:14.826 01:18:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:14.826 01:18:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:14.826 01:18:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:14.826 01:18:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:14.826 01:18:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:14.826 01:18:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:14.826 01:18:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:14.826 01:18:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:14.826 01:18:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1472210 00:07:14.826 01:18:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 1472210 ']' 00:07:14.826 01:18:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 1472210 00:07:14.826 01:18:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:07:14.826 01:18:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:14.826 01:18:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1472210 00:07:14.826 01:18:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:14.826 01:18:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:14.826 01:18:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1472210' 00:07:14.826 killing process with pid 1472210 00:07:14.826 01:18:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 1472210 00:07:14.826 01:18:00 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 1472210 00:07:15.391 00:07:15.391 real 0m1.629s 00:07:15.391 user 0m4.582s 00:07:15.391 sys 0m0.505s 00:07:15.391 01:18:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:15.391 01:18:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:15.391 ************************************ 00:07:15.391 END TEST locking_overlapped_coremask 00:07:15.391 ************************************ 00:07:15.391 01:18:00 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:15.391 01:18:00 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:15.391 01:18:00 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:15.391 01:18:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:15.391 ************************************ 00:07:15.391 START TEST locking_overlapped_coremask_via_rpc 00:07:15.391 ************************************ 00:07:15.391 01:18:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:07:15.391 01:18:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1472507 00:07:15.391 01:18:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:15.391 01:18:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1472507 /var/tmp/spdk.sock 00:07:15.391 01:18:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1472507 ']' 00:07:15.391 01:18:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.391 01:18:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:15.391 01:18:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.391 01:18:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:15.391 01:18:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.391 [2024-10-13 01:18:00.835418] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:07:15.391 [2024-10-13 01:18:00.835541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1472507 ] 00:07:15.391 [2024-10-13 01:18:00.897258] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:15.391 [2024-10-13 01:18:00.897300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:15.391 [2024-10-13 01:18:00.949513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.391 [2024-10-13 01:18:00.949571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.391 [2024-10-13 01:18:00.949575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.649 01:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:15.649 01:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:15.649 01:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1472583 00:07:15.649 01:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:15.649 01:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1472583 /var/tmp/spdk2.sock 00:07:15.649 01:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1472583 ']' 00:07:15.649 01:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:15.649 01:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:15.649 01:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:15.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:15.649 01:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:15.649 01:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.907 [2024-10-13 01:18:01.264567] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:07:15.907 [2024-10-13 01:18:01.264659] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1472583 ] 00:07:15.907 [2024-10-13 01:18:01.354334] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:15.907 [2024-10-13 01:18:01.354372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:15.907 [2024-10-13 01:18:01.452379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:15.907 [2024-10-13 01:18:01.455528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:15.907 [2024-10-13 01:18:01.455530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:16.471 01:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:16.471 01:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:16.471 01:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:16.471 01:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.471 01:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.471 01:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.471 01:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:16.471 01:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:16.471 01:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:16.471 01:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:16.471 01:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:16.471 01:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:16.471 01:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:16.471 01:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:16.471 01:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.471 01:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.471 [2024-10-13 01:18:01.964564] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1472507 has claimed it. 
00:07:16.471 request: 00:07:16.471 { 00:07:16.471 "method": "framework_enable_cpumask_locks", 00:07:16.471 "req_id": 1 00:07:16.471 } 00:07:16.471 Got JSON-RPC error response 00:07:16.471 response: 00:07:16.471 { 00:07:16.471 "code": -32603, 00:07:16.471 "message": "Failed to claim CPU core: 2" 00:07:16.471 } 00:07:16.471 01:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:16.471 01:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:16.471 01:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:16.471 01:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:16.471 01:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:16.471 01:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1472507 /var/tmp/spdk.sock 00:07:16.471 01:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1472507 ']' 00:07:16.471 01:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.471 01:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:16.471 01:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.471 01:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:16.471 01:18:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.728 01:18:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:16.728 01:18:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:16.728 01:18:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1472583 /var/tmp/spdk2.sock 00:07:16.728 01:18:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1472583 ']' 00:07:16.728 01:18:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:16.728 01:18:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:16.728 01:18:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:16.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
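In this via_rpc variant both targets start with --disable-cpumask-locks; the first then claims cores 0-2 through framework_enable_cpumask_locks, so the same RPC issued to the second target (whose 0x1c mask overlaps on core 2) comes back with the -32603 error captured above instead of the process being killed. A hedged equivalent using the stock rpc.py wrapper and the socket paths from the log:

    ./scripts/rpc.py framework_enable_cpumask_locks                          # first target: succeeds, locks cores 0-2
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # second target:
    # returns JSON-RPC error -32603: "Failed to claim CPU core: 2"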
00:07:16.728 01:18:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:16.728 01:18:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.986 01:18:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:16.986 01:18:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:16.986 01:18:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:16.986 01:18:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:16.986 01:18:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:16.986 01:18:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:16.986 00:07:16.986 real 0m1.734s 00:07:16.986 user 0m0.934s 00:07:16.986 sys 0m0.137s 00:07:16.986 01:18:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:16.986 01:18:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.986 ************************************ 00:07:16.986 END TEST locking_overlapped_coremask_via_rpc 00:07:16.986 ************************************ 00:07:16.986 01:18:02 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:16.986 01:18:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1472507 ]] 00:07:16.986 01:18:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1472507 00:07:16.986 01:18:02 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1472507 ']' 00:07:16.986 01:18:02 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1472507 00:07:16.986 01:18:02 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:16.986 01:18:02 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:16.986 01:18:02 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1472507 00:07:17.243 01:18:02 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:17.243 01:18:02 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:17.243 01:18:02 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1472507' 00:07:17.243 killing process with pid 1472507 00:07:17.243 01:18:02 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1472507 00:07:17.243 01:18:02 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1472507 00:07:17.501 01:18:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1472583 ]] 00:07:17.501 01:18:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1472583 00:07:17.501 01:18:02 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1472583 ']' 00:07:17.501 01:18:02 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1472583 00:07:17.501 01:18:02 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:17.501 01:18:02 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:07:17.501 01:18:02 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1472583 00:07:17.501 01:18:02 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:17.501 01:18:02 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:17.501 01:18:02 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1472583' 00:07:17.501 killing process with pid 1472583 00:07:17.501 01:18:02 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1472583 00:07:17.501 01:18:02 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1472583 00:07:18.067 01:18:03 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:18.067 01:18:03 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:18.067 01:18:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1472507 ]] 00:07:18.067 01:18:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1472507 00:07:18.067 01:18:03 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1472507 ']' 00:07:18.067 01:18:03 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1472507 00:07:18.067 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1472507) - No such process 00:07:18.067 01:18:03 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1472507 is not found' 00:07:18.067 Process with pid 1472507 is not found 00:07:18.067 01:18:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1472583 ]] 00:07:18.067 01:18:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1472583 00:07:18.067 01:18:03 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1472583 ']' 00:07:18.067 01:18:03 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1472583 00:07:18.067 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1472583) - No such process 00:07:18.067 01:18:03 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1472583 is not found' 00:07:18.067 Process with pid 1472583 is not found 00:07:18.067 01:18:03 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:18.067 00:07:18.067 real 0m14.696s 00:07:18.067 user 0m25.690s 00:07:18.067 sys 0m5.299s 00:07:18.067 01:18:03 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:18.067 01:18:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:18.067 ************************************ 00:07:18.067 END TEST cpu_locks 00:07:18.067 ************************************ 00:07:18.067 00:07:18.067 real 0m39.240s 00:07:18.067 user 1m16.395s 00:07:18.067 sys 0m9.346s 00:07:18.067 01:18:03 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:18.067 01:18:03 event -- common/autotest_common.sh@10 -- # set +x 00:07:18.067 ************************************ 00:07:18.067 END TEST event 00:07:18.067 ************************************ 00:07:18.067 01:18:03 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:18.067 01:18:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:18.067 01:18:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:18.067 01:18:03 -- common/autotest_common.sh@10 -- # set +x 00:07:18.067 ************************************ 00:07:18.067 START TEST thread 00:07:18.067 ************************************ 00:07:18.067 01:18:03 thread -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:18.067 * Looking for test storage... 00:07:18.067 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:18.067 01:18:03 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:18.067 01:18:03 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:07:18.067 01:18:03 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:18.067 01:18:03 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:18.067 01:18:03 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:18.067 01:18:03 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:18.067 01:18:03 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:18.067 01:18:03 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:18.067 01:18:03 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:18.067 01:18:03 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:18.067 01:18:03 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:18.067 01:18:03 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:18.067 01:18:03 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:18.067 01:18:03 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:18.067 01:18:03 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:18.067 01:18:03 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:18.067 01:18:03 thread -- scripts/common.sh@345 -- # : 1 00:07:18.067 01:18:03 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:18.068 01:18:03 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:18.068 01:18:03 thread -- scripts/common.sh@365 -- # decimal 1 00:07:18.068 01:18:03 thread -- scripts/common.sh@353 -- # local d=1 00:07:18.068 01:18:03 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.068 01:18:03 thread -- scripts/common.sh@355 -- # echo 1 00:07:18.068 01:18:03 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:18.068 01:18:03 thread -- scripts/common.sh@366 -- # decimal 2 00:07:18.068 01:18:03 thread -- scripts/common.sh@353 -- # local d=2 00:07:18.068 01:18:03 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:18.068 01:18:03 thread -- scripts/common.sh@355 -- # echo 2 00:07:18.068 01:18:03 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:18.068 01:18:03 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:18.068 01:18:03 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:18.068 01:18:03 thread -- scripts/common.sh@368 -- # return 0 00:07:18.068 01:18:03 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:18.068 01:18:03 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:18.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.068 --rc genhtml_branch_coverage=1 00:07:18.068 --rc genhtml_function_coverage=1 00:07:18.068 --rc genhtml_legend=1 00:07:18.068 --rc geninfo_all_blocks=1 00:07:18.068 --rc geninfo_unexecuted_blocks=1 00:07:18.068 00:07:18.068 ' 00:07:18.068 01:18:03 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:18.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.068 --rc genhtml_branch_coverage=1 00:07:18.068 --rc genhtml_function_coverage=1 00:07:18.068 --rc genhtml_legend=1 00:07:18.068 --rc geninfo_all_blocks=1 00:07:18.068 --rc geninfo_unexecuted_blocks=1 00:07:18.068 
00:07:18.068 ' 00:07:18.068 01:18:03 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:18.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.068 --rc genhtml_branch_coverage=1 00:07:18.068 --rc genhtml_function_coverage=1 00:07:18.068 --rc genhtml_legend=1 00:07:18.068 --rc geninfo_all_blocks=1 00:07:18.068 --rc geninfo_unexecuted_blocks=1 00:07:18.068 00:07:18.068 ' 00:07:18.068 01:18:03 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:18.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.068 --rc genhtml_branch_coverage=1 00:07:18.068 --rc genhtml_function_coverage=1 00:07:18.068 --rc genhtml_legend=1 00:07:18.068 --rc geninfo_all_blocks=1 00:07:18.068 --rc geninfo_unexecuted_blocks=1 00:07:18.068 00:07:18.068 ' 00:07:18.068 01:18:03 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:18.068 01:18:03 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:18.068 01:18:03 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:18.068 01:18:03 thread -- common/autotest_common.sh@10 -- # set +x 00:07:18.068 ************************************ 00:07:18.068 START TEST thread_poller_perf 00:07:18.068 ************************************ 00:07:18.068 01:18:03 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:18.068 [2024-10-13 01:18:03.636025] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:07:18.068 [2024-10-13 01:18:03.636092] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1472991 ] 00:07:18.326 [2024-10-13 01:18:03.695716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.326 [2024-10-13 01:18:03.743038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.326 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:19.259 [2024-10-12T23:18:04.837Z] ====================================== 00:07:19.259 [2024-10-12T23:18:04.837Z] busy:2708760271 (cyc) 00:07:19.259 [2024-10-12T23:18:04.837Z] total_run_count: 292000 00:07:19.259 [2024-10-12T23:18:04.837Z] tsc_hz: 2700000000 (cyc) 00:07:19.259 [2024-10-12T23:18:04.837Z] ====================================== 00:07:19.259 [2024-10-12T23:18:04.837Z] poller_cost: 9276 (cyc), 3435 (nsec) 00:07:19.259 00:07:19.259 real 0m1.176s 00:07:19.259 user 0m1.101s 00:07:19.259 sys 0m0.070s 00:07:19.259 01:18:04 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:19.259 01:18:04 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:19.259 ************************************ 00:07:19.259 END TEST thread_poller_perf 00:07:19.259 ************************************ 00:07:19.259 01:18:04 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:19.259 01:18:04 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:19.259 01:18:04 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:19.259 01:18:04 thread -- common/autotest_common.sh@10 -- # set +x 00:07:19.517 ************************************ 00:07:19.517 START TEST thread_poller_perf 00:07:19.517 ************************************ 00:07:19.517 01:18:04 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:19.517 [2024-10-13 01:18:04.864240] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:07:19.517 [2024-10-13 01:18:04.864307] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1473145 ] 00:07:19.517 [2024-10-13 01:18:04.926023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.517 [2024-10-13 01:18:04.974866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.517 Running 1000 pollers for 1 seconds with 0 microseconds period. 
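In the poller_perf summary above, poller_cost is derived from the other counters: busy cycles divided by total_run_count gives the cost of one poller invocation in TSC cycles, and dividing by tsc_hz converts that to nanoseconds. For the 1 µs-period run: 2708760271 cyc / 292000 runs ≈ 9276 cyc, and 9276 cyc at 2.7 GHz ≈ 3435 nsec, matching the reported values; the same relation holds for the 0 µs-period run reported below. A small shell sketch of the conversion, with the figures above hard-coded purely for illustration:

    busy=2708760271; runs=292000; tsc_hz=2700000000
    cost_cyc=$((busy / runs))                           # ~9276 cycles per poller invocation
    cost_nsec=$((cost_cyc * 1000000000 / tsc_hz))       # ~3435 ns per poller invocation
    echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"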
00:07:20.451 [2024-10-12T23:18:06.029Z] ====================================== 00:07:20.451 [2024-10-12T23:18:06.029Z] busy:2702655884 (cyc) 00:07:20.451 [2024-10-12T23:18:06.029Z] total_run_count: 3802000 00:07:20.451 [2024-10-12T23:18:06.029Z] tsc_hz: 2700000000 (cyc) 00:07:20.451 [2024-10-12T23:18:06.029Z] ====================================== 00:07:20.451 [2024-10-12T23:18:06.029Z] poller_cost: 710 (cyc), 262 (nsec) 00:07:20.451 00:07:20.451 real 0m1.174s 00:07:20.451 user 0m1.098s 00:07:20.451 sys 0m0.070s 00:07:20.451 01:18:06 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.451 01:18:06 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:20.451 ************************************ 00:07:20.451 END TEST thread_poller_perf 00:07:20.451 ************************************ 00:07:20.710 01:18:06 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:20.710 00:07:20.710 real 0m2.600s 00:07:20.710 user 0m2.338s 00:07:20.710 sys 0m0.264s 00:07:20.710 01:18:06 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.710 01:18:06 thread -- common/autotest_common.sh@10 -- # set +x 00:07:20.710 ************************************ 00:07:20.710 END TEST thread 00:07:20.710 ************************************ 00:07:20.710 01:18:06 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:20.710 01:18:06 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:20.710 01:18:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:20.710 01:18:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.710 01:18:06 -- common/autotest_common.sh@10 -- # set +x 00:07:20.710 ************************************ 00:07:20.710 START TEST app_cmdline 00:07:20.710 ************************************ 00:07:20.710 01:18:06 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:20.710 * Looking for test storage... 
00:07:20.710 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:20.710 01:18:06 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:20.710 01:18:06 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:07:20.710 01:18:06 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:20.710 01:18:06 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:20.710 01:18:06 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:20.710 01:18:06 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:20.710 01:18:06 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:20.710 01:18:06 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:20.710 01:18:06 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:20.710 01:18:06 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:20.710 01:18:06 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:20.710 01:18:06 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:20.710 01:18:06 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:20.710 01:18:06 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:20.710 01:18:06 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:20.710 01:18:06 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:20.710 01:18:06 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:20.710 01:18:06 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:20.710 01:18:06 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:20.710 01:18:06 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:20.710 01:18:06 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:20.710 01:18:06 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:20.710 01:18:06 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:20.710 01:18:06 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:20.710 01:18:06 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:20.710 01:18:06 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:20.710 01:18:06 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:20.710 01:18:06 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:20.710 01:18:06 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:20.710 01:18:06 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:20.710 01:18:06 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:20.710 01:18:06 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:20.710 01:18:06 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:20.710 01:18:06 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:20.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.710 --rc genhtml_branch_coverage=1 00:07:20.710 --rc genhtml_function_coverage=1 00:07:20.710 --rc genhtml_legend=1 00:07:20.710 --rc geninfo_all_blocks=1 00:07:20.710 --rc geninfo_unexecuted_blocks=1 00:07:20.710 00:07:20.710 ' 00:07:20.710 01:18:06 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:20.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.710 --rc genhtml_branch_coverage=1 00:07:20.710 --rc genhtml_function_coverage=1 00:07:20.710 --rc genhtml_legend=1 00:07:20.710 --rc geninfo_all_blocks=1 00:07:20.710 --rc geninfo_unexecuted_blocks=1 
00:07:20.710 00:07:20.710 ' 00:07:20.710 01:18:06 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:20.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.710 --rc genhtml_branch_coverage=1 00:07:20.710 --rc genhtml_function_coverage=1 00:07:20.710 --rc genhtml_legend=1 00:07:20.710 --rc geninfo_all_blocks=1 00:07:20.710 --rc geninfo_unexecuted_blocks=1 00:07:20.710 00:07:20.710 ' 00:07:20.710 01:18:06 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:20.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.710 --rc genhtml_branch_coverage=1 00:07:20.710 --rc genhtml_function_coverage=1 00:07:20.710 --rc genhtml_legend=1 00:07:20.710 --rc geninfo_all_blocks=1 00:07:20.710 --rc geninfo_unexecuted_blocks=1 00:07:20.710 00:07:20.710 ' 00:07:20.710 01:18:06 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:20.710 01:18:06 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1473354 00:07:20.710 01:18:06 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:20.710 01:18:06 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1473354 00:07:20.710 01:18:06 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 1473354 ']' 00:07:20.710 01:18:06 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.710 01:18:06 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:20.710 01:18:06 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.711 01:18:06 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:20.711 01:18:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:20.969 [2024-10-13 01:18:06.304420] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
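For this cmdline test the target was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are served; any other call is rejected with JSON-RPC error -32601 (Method not found), which is exactly what the env_dpdk_get_mem_stats probe further below demonstrates. A minimal sketch against such a target, assuming scripts/rpc.py from the SPDK tree and the default /var/tmp/spdk.sock socket:

    scripts/rpc.py spdk_get_version          # whitelisted: returns the version JSON shown below
    scripts/rpc.py rpc_get_methods           # whitelisted: lists exactly the two allowed methods
    scripts/rpc.py env_dpdk_get_mem_stats    # not whitelisted: rejected with -32601 "Method not found"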
00:07:20.969 [2024-10-13 01:18:06.304532] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1473354 ] 00:07:20.969 [2024-10-13 01:18:06.366077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.969 [2024-10-13 01:18:06.418320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.227 01:18:06 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:21.227 01:18:06 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:21.227 01:18:06 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:21.485 { 00:07:21.485 "version": "SPDK v25.01-pre git sha1 bbce7a874", 00:07:21.485 "fields": { 00:07:21.485 "major": 25, 00:07:21.485 "minor": 1, 00:07:21.485 "patch": 0, 00:07:21.485 "suffix": "-pre", 00:07:21.485 "commit": "bbce7a874" 00:07:21.485 } 00:07:21.485 } 00:07:21.485 01:18:06 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:21.485 01:18:06 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:21.485 01:18:06 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:21.485 01:18:06 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:21.485 01:18:06 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:21.485 01:18:06 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.485 01:18:06 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:21.485 01:18:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:21.485 01:18:06 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:21.485 01:18:06 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.485 01:18:07 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:21.485 01:18:07 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:21.485 01:18:07 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:21.485 01:18:07 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:21.485 01:18:07 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:21.486 01:18:07 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:21.486 01:18:07 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:21.486 01:18:07 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:21.486 01:18:07 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:21.486 01:18:07 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:21.486 01:18:07 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:21.486 01:18:07 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:21.486 01:18:07 app_cmdline -- common/autotest_common.sh@644 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:21.486 01:18:07 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:21.744 request: 00:07:21.744 { 00:07:21.744 "method": "env_dpdk_get_mem_stats", 00:07:21.744 "req_id": 1 00:07:21.744 } 00:07:21.744 Got JSON-RPC error response 00:07:21.744 response: 00:07:21.744 { 00:07:21.744 "code": -32601, 00:07:21.744 "message": "Method not found" 00:07:21.744 } 00:07:21.744 01:18:07 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:21.744 01:18:07 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:21.744 01:18:07 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:21.744 01:18:07 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:21.744 01:18:07 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1473354 00:07:21.744 01:18:07 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 1473354 ']' 00:07:21.744 01:18:07 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 1473354 00:07:21.744 01:18:07 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:21.744 01:18:07 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:21.744 01:18:07 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1473354 00:07:21.744 01:18:07 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:21.744 01:18:07 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:21.744 01:18:07 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1473354' 00:07:21.744 killing process with pid 1473354 00:07:21.744 01:18:07 app_cmdline -- common/autotest_common.sh@969 -- # kill 1473354 00:07:21.744 01:18:07 app_cmdline -- common/autotest_common.sh@974 -- # wait 1473354 00:07:22.356 00:07:22.356 real 0m1.598s 00:07:22.356 user 0m1.983s 00:07:22.356 sys 0m0.498s 00:07:22.356 01:18:07 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:22.356 01:18:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:22.356 ************************************ 00:07:22.356 END TEST app_cmdline 00:07:22.356 ************************************ 00:07:22.356 01:18:07 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:22.356 01:18:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:22.356 01:18:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:22.356 01:18:07 -- common/autotest_common.sh@10 -- # set +x 00:07:22.356 ************************************ 00:07:22.356 START TEST version 00:07:22.356 ************************************ 00:07:22.356 01:18:07 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:22.356 * Looking for test storage... 
00:07:22.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:22.356 01:18:07 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:22.356 01:18:07 version -- common/autotest_common.sh@1691 -- # lcov --version 00:07:22.356 01:18:07 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:22.356 01:18:07 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:22.356 01:18:07 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:22.356 01:18:07 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:22.356 01:18:07 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:22.356 01:18:07 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:22.356 01:18:07 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:22.356 01:18:07 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:22.356 01:18:07 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:22.356 01:18:07 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:22.356 01:18:07 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:22.356 01:18:07 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:22.356 01:18:07 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:22.356 01:18:07 version -- scripts/common.sh@344 -- # case "$op" in 00:07:22.356 01:18:07 version -- scripts/common.sh@345 -- # : 1 00:07:22.356 01:18:07 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:22.356 01:18:07 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:22.356 01:18:07 version -- scripts/common.sh@365 -- # decimal 1 00:07:22.356 01:18:07 version -- scripts/common.sh@353 -- # local d=1 00:07:22.356 01:18:07 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:22.356 01:18:07 version -- scripts/common.sh@355 -- # echo 1 00:07:22.356 01:18:07 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:22.356 01:18:07 version -- scripts/common.sh@366 -- # decimal 2 00:07:22.356 01:18:07 version -- scripts/common.sh@353 -- # local d=2 00:07:22.356 01:18:07 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:22.356 01:18:07 version -- scripts/common.sh@355 -- # echo 2 00:07:22.356 01:18:07 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:22.356 01:18:07 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:22.356 01:18:07 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:22.356 01:18:07 version -- scripts/common.sh@368 -- # return 0 00:07:22.356 01:18:07 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:22.356 01:18:07 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:22.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.356 --rc genhtml_branch_coverage=1 00:07:22.356 --rc genhtml_function_coverage=1 00:07:22.356 --rc genhtml_legend=1 00:07:22.356 --rc geninfo_all_blocks=1 00:07:22.356 --rc geninfo_unexecuted_blocks=1 00:07:22.356 00:07:22.356 ' 00:07:22.356 01:18:07 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:22.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.356 --rc genhtml_branch_coverage=1 00:07:22.356 --rc genhtml_function_coverage=1 00:07:22.356 --rc genhtml_legend=1 00:07:22.356 --rc geninfo_all_blocks=1 00:07:22.356 --rc geninfo_unexecuted_blocks=1 00:07:22.356 00:07:22.356 ' 00:07:22.356 01:18:07 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:22.356 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.356 --rc genhtml_branch_coverage=1 00:07:22.356 --rc genhtml_function_coverage=1 00:07:22.356 --rc genhtml_legend=1 00:07:22.356 --rc geninfo_all_blocks=1 00:07:22.356 --rc geninfo_unexecuted_blocks=1 00:07:22.356 00:07:22.356 ' 00:07:22.356 01:18:07 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:22.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.356 --rc genhtml_branch_coverage=1 00:07:22.356 --rc genhtml_function_coverage=1 00:07:22.356 --rc genhtml_legend=1 00:07:22.356 --rc geninfo_all_blocks=1 00:07:22.356 --rc geninfo_unexecuted_blocks=1 00:07:22.356 00:07:22.356 ' 00:07:22.356 01:18:07 version -- app/version.sh@17 -- # get_header_version major 00:07:22.356 01:18:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:22.356 01:18:07 version -- app/version.sh@14 -- # cut -f2 00:07:22.356 01:18:07 version -- app/version.sh@14 -- # tr -d '"' 00:07:22.356 01:18:07 version -- app/version.sh@17 -- # major=25 00:07:22.356 01:18:07 version -- app/version.sh@18 -- # get_header_version minor 00:07:22.356 01:18:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:22.356 01:18:07 version -- app/version.sh@14 -- # cut -f2 00:07:22.356 01:18:07 version -- app/version.sh@14 -- # tr -d '"' 00:07:22.356 01:18:07 version -- app/version.sh@18 -- # minor=1 00:07:22.356 01:18:07 version -- app/version.sh@19 -- # get_header_version patch 00:07:22.356 01:18:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:22.356 01:18:07 version -- app/version.sh@14 -- # cut -f2 00:07:22.356 01:18:07 version -- app/version.sh@14 -- # tr -d '"' 00:07:22.356 01:18:07 version -- app/version.sh@19 -- # patch=0 00:07:22.356 01:18:07 version -- app/version.sh@20 -- # get_header_version suffix 00:07:22.356 01:18:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:22.356 01:18:07 version -- app/version.sh@14 -- # cut -f2 00:07:22.356 01:18:07 version -- app/version.sh@14 -- # tr -d '"' 00:07:22.356 01:18:07 version -- app/version.sh@20 -- # suffix=-pre 00:07:22.356 01:18:07 version -- app/version.sh@22 -- # version=25.1 00:07:22.356 01:18:07 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:22.356 01:18:07 version -- app/version.sh@28 -- # version=25.1rc0 00:07:22.640 01:18:07 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:22.640 01:18:07 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:22.640 01:18:07 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:22.640 01:18:07 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:22.640 00:07:22.640 real 0m0.196s 00:07:22.640 user 0m0.123s 00:07:22.640 sys 0m0.099s 00:07:22.640 01:18:07 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:22.640 
01:18:07 version -- common/autotest_common.sh@10 -- # set +x 00:07:22.640 ************************************ 00:07:22.640 END TEST version 00:07:22.640 ************************************ 00:07:22.640 01:18:07 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:22.640 01:18:07 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:22.640 01:18:07 -- spdk/autotest.sh@194 -- # uname -s 00:07:22.640 01:18:07 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:22.640 01:18:07 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:22.640 01:18:07 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:22.640 01:18:07 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:22.640 01:18:07 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:07:22.640 01:18:07 -- spdk/autotest.sh@256 -- # timing_exit lib 00:07:22.640 01:18:07 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:22.640 01:18:07 -- common/autotest_common.sh@10 -- # set +x 00:07:22.640 01:18:07 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:07:22.640 01:18:07 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:07:22.640 01:18:07 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:07:22.640 01:18:07 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:07:22.641 01:18:07 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:07:22.641 01:18:07 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:07:22.641 01:18:07 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:22.641 01:18:07 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:22.641 01:18:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:22.641 01:18:07 -- common/autotest_common.sh@10 -- # set +x 00:07:22.641 ************************************ 00:07:22.641 START TEST nvmf_tcp 00:07:22.641 ************************************ 00:07:22.641 01:18:08 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:22.641 * Looking for test storage... 
00:07:22.641 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:22.641 01:18:08 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:22.641 01:18:08 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:07:22.641 01:18:08 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:22.641 01:18:08 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:22.641 01:18:08 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:22.641 01:18:08 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:22.641 01:18:08 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:22.641 01:18:08 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:22.641 01:18:08 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:22.641 01:18:08 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:22.641 01:18:08 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:22.641 01:18:08 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:22.641 01:18:08 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:22.641 01:18:08 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:22.641 01:18:08 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:22.641 01:18:08 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:22.641 01:18:08 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:22.641 01:18:08 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:22.641 01:18:08 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:22.641 01:18:08 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:22.641 01:18:08 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:22.641 01:18:08 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:22.641 01:18:08 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:22.641 01:18:08 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:22.641 01:18:08 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:22.641 01:18:08 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:22.641 01:18:08 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:22.641 01:18:08 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:22.641 01:18:08 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:22.641 01:18:08 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:22.641 01:18:08 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:22.641 01:18:08 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:22.641 01:18:08 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:22.641 01:18:08 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:22.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.641 --rc genhtml_branch_coverage=1 00:07:22.641 --rc genhtml_function_coverage=1 00:07:22.641 --rc genhtml_legend=1 00:07:22.641 --rc geninfo_all_blocks=1 00:07:22.641 --rc geninfo_unexecuted_blocks=1 00:07:22.641 00:07:22.641 ' 00:07:22.641 01:18:08 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:22.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.641 --rc genhtml_branch_coverage=1 00:07:22.641 --rc genhtml_function_coverage=1 00:07:22.641 --rc genhtml_legend=1 00:07:22.641 --rc geninfo_all_blocks=1 00:07:22.641 --rc geninfo_unexecuted_blocks=1 00:07:22.641 00:07:22.641 ' 00:07:22.641 01:18:08 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:07:22.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.641 --rc genhtml_branch_coverage=1 00:07:22.641 --rc genhtml_function_coverage=1 00:07:22.641 --rc genhtml_legend=1 00:07:22.641 --rc geninfo_all_blocks=1 00:07:22.641 --rc geninfo_unexecuted_blocks=1 00:07:22.641 00:07:22.641 ' 00:07:22.641 01:18:08 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:22.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.641 --rc genhtml_branch_coverage=1 00:07:22.641 --rc genhtml_function_coverage=1 00:07:22.641 --rc genhtml_legend=1 00:07:22.641 --rc geninfo_all_blocks=1 00:07:22.641 --rc geninfo_unexecuted_blocks=1 00:07:22.641 00:07:22.641 ' 00:07:22.641 01:18:08 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:22.641 01:18:08 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:22.641 01:18:08 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:22.641 01:18:08 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:22.641 01:18:08 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:22.641 01:18:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:22.641 ************************************ 00:07:22.641 START TEST nvmf_target_core 00:07:22.641 ************************************ 00:07:22.641 01:18:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:22.641 * Looking for test storage... 00:07:22.641 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:22.641 01:18:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:22.641 01:18:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:07:22.641 01:18:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:22.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.900 --rc genhtml_branch_coverage=1 00:07:22.900 --rc genhtml_function_coverage=1 00:07:22.900 --rc genhtml_legend=1 00:07:22.900 --rc geninfo_all_blocks=1 00:07:22.900 --rc geninfo_unexecuted_blocks=1 00:07:22.900 00:07:22.900 ' 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:22.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.900 --rc genhtml_branch_coverage=1 00:07:22.900 --rc genhtml_function_coverage=1 00:07:22.900 --rc genhtml_legend=1 00:07:22.900 --rc geninfo_all_blocks=1 00:07:22.900 --rc geninfo_unexecuted_blocks=1 00:07:22.900 00:07:22.900 ' 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:22.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.900 --rc genhtml_branch_coverage=1 00:07:22.900 --rc genhtml_function_coverage=1 00:07:22.900 --rc genhtml_legend=1 00:07:22.900 --rc geninfo_all_blocks=1 00:07:22.900 --rc geninfo_unexecuted_blocks=1 00:07:22.900 00:07:22.900 ' 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:22.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.900 --rc genhtml_branch_coverage=1 00:07:22.900 --rc genhtml_function_coverage=1 00:07:22.900 --rc genhtml_legend=1 00:07:22.900 --rc geninfo_all_blocks=1 00:07:22.900 --rc geninfo_unexecuted_blocks=1 00:07:22.900 00:07:22.900 ' 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.900 01:18:08 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:22.901 01:18:08 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.901 01:18:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:22.901 01:18:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:22.901 01:18:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:22.901 01:18:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:22.901 01:18:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:22.901 01:18:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:22.901 01:18:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:22.901 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:22.901 01:18:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:22.901 01:18:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:22.901 01:18:08 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:22.901 01:18:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:22.901 01:18:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:22.901 01:18:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:22.901 01:18:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:22.901 01:18:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:22.901 01:18:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:22.901 01:18:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:22.901 
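The "[: : integer expression expected" message above comes from nvmf/common.sh line 33: the variable tested there expands to an empty string, and test's -eq operator requires integers on both sides, so the comparison itself errors out while the run continues past it in this trace. The variable's name is not visible here; a hedged illustration of the failure mode and the usual guard, using a placeholder name:

    some_flag=''                                    # empty, as in the trace above (name is a placeholder)
    [ "$some_flag" -eq 1 ] && echo enabled          # prints "[: : integer expression expected" and fails
    [ "${some_flag:-0}" -eq 1 ] && echo enabled     # defaulting the empty value to 0 avoids the error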
************************************ 00:07:22.901 START TEST nvmf_abort 00:07:22.901 ************************************ 00:07:22.901 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:22.901 * Looking for test storage... 00:07:22.901 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:22.901 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:22.901 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:07:22.901 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:22.901 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:22.901 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:22.901 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:22.901 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:22.901 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:22.901 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:22.901 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:22.901 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:22.901 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:22.901 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:22.901 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:22.901 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:22.901 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:22.901 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:22.901 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:22.901 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:22.901 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:22.901 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:22.901 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:22.901 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:22.901 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:22.901 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:22.901 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:22.901 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:22.901 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:22.901 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:22.901 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:23.160 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:23.160 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:23.160 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:23.160 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:23.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.160 --rc genhtml_branch_coverage=1 00:07:23.160 --rc genhtml_function_coverage=1 00:07:23.160 --rc genhtml_legend=1 00:07:23.160 --rc geninfo_all_blocks=1 00:07:23.160 --rc geninfo_unexecuted_blocks=1 00:07:23.160 00:07:23.160 ' 00:07:23.160 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:23.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.160 --rc genhtml_branch_coverage=1 00:07:23.160 --rc genhtml_function_coverage=1 00:07:23.160 --rc genhtml_legend=1 00:07:23.160 --rc geninfo_all_blocks=1 00:07:23.160 --rc geninfo_unexecuted_blocks=1 00:07:23.160 00:07:23.160 ' 00:07:23.160 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:23.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.160 --rc genhtml_branch_coverage=1 00:07:23.160 --rc genhtml_function_coverage=1 00:07:23.160 --rc genhtml_legend=1 00:07:23.160 --rc geninfo_all_blocks=1 00:07:23.160 --rc geninfo_unexecuted_blocks=1 00:07:23.160 00:07:23.160 ' 00:07:23.160 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:23.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.160 --rc genhtml_branch_coverage=1 00:07:23.160 --rc genhtml_function_coverage=1 00:07:23.160 --rc genhtml_legend=1 00:07:23.160 --rc geninfo_all_blocks=1 00:07:23.160 --rc geninfo_unexecuted_blocks=1 00:07:23.160 00:07:23.160 ' 00:07:23.160 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:23.160 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:23.160 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:07:23.160 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:23.160 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:23.160 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:23.160 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:23.160 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:23.160 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:23.160 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:23.160 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:23.160 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:23.160 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:23.160 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:23.160 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:23.160 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:23.160 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:23.160 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:23.160 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:23.160 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:23.160 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:23.160 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:23.160 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:23.160 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.160 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.160 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.160 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:23.160 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.160 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:23.160 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:23.160 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:23.160 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:23.160 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:23.160 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:23.160 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:23.160 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:23.161 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:23.161 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:23.161 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:23.161 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:23.161 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:23.161 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
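The abort.sh trace that follows walks through a fixed sequence: bring up the test network, start nvmf_tgt, build a delayed malloc namespace, export it over NVMe/TCP, then fire the abort example at it. A condensed outline reconstructed from this trace (not the script verbatim; rpc_cmd, nvmftestinit, nvmfappstart and nvmftestfini are the harness helpers seen below):

  MALLOC_BDEV_SIZE=64
  MALLOC_BLOCK_SIZE=4096

  nvmftestinit                       # detect the ice ports, move one into a netns, assign 10.0.0.1/10.0.0.2
  nvmfappstart -m 0xE                # start nvmf_tgt inside the target namespace (reactors on cores 1-3)

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256
  rpc_cmd bdev_malloc_create $MALLOC_BDEV_SIZE $MALLOC_BLOCK_SIZE -b Malloc0
  rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  nvmftestfini                       # stop nvmf_tgt, restore iptables, remove the namespace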
00:07:23.161 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:23.161 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:23.161 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:23.161 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:23.161 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:23.161 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:23.161 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:23.161 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:23.161 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:23.161 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:23.161 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:07:23.161 01:18:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.062 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:25.062 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:25.062 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:25.062 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:25.062 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:25.062 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:25.062 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:25.062 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:25.062 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:25.062 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:25.062 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:25.062 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:25.062 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:25.062 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:25.062 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:25.062 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:25.062 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:25.062 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:25.062 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:25.063 01:18:10 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:25.063 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:25.063 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:25.063 01:18:10 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:25.063 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:25.063 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:25.063 01:18:10 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:25.063 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:25.063 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:07:25.063 00:07:25.063 --- 10.0.0.2 ping statistics --- 00:07:25.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.063 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:25.063 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:25.063 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:07:25.063 00:07:25.063 --- 10.0.0.1 ping statistics --- 00:07:25.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.063 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:25.063 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:25.322 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:25.322 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:25.322 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:25.322 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.322 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # nvmfpid=1475941 00:07:25.322 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:25.322 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 1475941 00:07:25.322 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 1475941 ']' 00:07:25.322 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.322 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:25.322 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.322 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:25.322 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.322 [2024-10-13 01:18:10.721172] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
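nvmftestinit, traced above, splits the two ice ports between the root namespace (initiator side) and a dedicated namespace (target side) so the traffic crosses real NICs, then verifies reachability in both directions and starts nvmf_tgt inside the namespace. Condensed from this run (interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are the ones chosen here):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                   # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target ns -> root ns

  # nvmfappstart then launches the target inside the namespace:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE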
00:07:25.322 [2024-10-13 01:18:10.721274] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:25.322 [2024-10-13 01:18:10.790311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:25.322 [2024-10-13 01:18:10.845901] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:25.322 [2024-10-13 01:18:10.845968] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:25.322 [2024-10-13 01:18:10.845983] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:25.322 [2024-10-13 01:18:10.845995] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:25.322 [2024-10-13 01:18:10.846005] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:25.322 [2024-10-13 01:18:10.847512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:25.322 [2024-10-13 01:18:10.847546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:25.322 [2024-10-13 01:18:10.847550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.580 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:25.580 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:07:25.580 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:25.580 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:25.580 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.580 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:25.580 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:25.580 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.580 01:18:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.580 [2024-10-13 01:18:11.004050] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:25.580 01:18:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.580 01:18:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:25.580 01:18:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.580 01:18:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.580 Malloc0 00:07:25.580 01:18:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.580 01:18:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:25.580 01:18:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.580 01:18:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.580 Delay0 
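The rpc_cmd calls traced here correspond to plain scripts/rpc.py invocations against the target's RPC socket; the arguments below are exactly the ones the test issued, while relying on the default /var/tmp/spdk.sock socket is an assumption (rpc_cmd handles that plumbing in the harness):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  $RPC nvmf_create_transport -t tcp -o -u 8192 -a 256
  $RPC bdev_malloc_create 64 4096 -b Malloc0            # 64 MiB malloc bdev, 4096-byte blocks
  $RPC bdev_delay_create -b Malloc0 -d Delay0 \
       -r 1000000 -t 1000000 -w 1000000 -n 1000000      # large artificial latency so queued I/O is still in flight when aborts arrive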
00:07:25.580 01:18:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.580 01:18:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:25.580 01:18:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.580 01:18:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.580 01:18:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.580 01:18:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:25.580 01:18:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.580 01:18:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.580 01:18:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.580 01:18:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:25.580 01:18:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.580 01:18:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.580 [2024-10-13 01:18:11.070486] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:25.580 01:18:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.580 01:18:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:25.580 01:18:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.580 01:18:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.580 01:18:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.580 01:18:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:25.838 [2024-10-13 01:18:11.217621] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:27.736 Initializing NVMe Controllers 00:07:27.736 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:27.736 controller IO queue size 128 less than required 00:07:27.736 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:27.736 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:27.736 Initialization complete. Launching workers. 
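Above, the delayed bdev is published as a namespace of nqn.2016-06.io.spdk:cnode0 with a TCP listener on 10.0.0.2:4420, and the bundled abort example is pointed at it; going by the flags and the queue-size warning in its output, that is a queue depth of 128 and a short single-core run, and the completion counters that follow report how many of the queued commands were successfully aborted. The same steps as a sketch, with the same assumption about the RPC socket as above:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0     # allow any host, serial number SPDK0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -c 0x1 -t 1 -l warning -q 128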
00:07:27.736 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28397 00:07:27.736 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28458, failed to submit 62 00:07:27.736 success 28401, unsuccessful 57, failed 0 00:07:27.736 01:18:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:27.736 01:18:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.736 01:18:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:27.736 01:18:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.736 01:18:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:27.736 01:18:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:27.736 01:18:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:27.736 01:18:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:27.736 01:18:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:27.736 01:18:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:27.736 01:18:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:27.736 01:18:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:27.736 rmmod nvme_tcp 00:07:27.994 rmmod nvme_fabrics 00:07:27.994 rmmod nvme_keyring 00:07:27.994 01:18:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:27.994 01:18:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:27.994 01:18:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:27.994 01:18:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 1475941 ']' 00:07:27.994 01:18:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 1475941 00:07:27.994 01:18:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 1475941 ']' 00:07:27.994 01:18:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 1475941 00:07:27.994 01:18:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:07:27.994 01:18:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:27.994 01:18:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1475941 00:07:27.994 01:18:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:27.994 01:18:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:27.994 01:18:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1475941' 00:07:27.994 killing process with pid 1475941 00:07:27.994 01:18:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 1475941 00:07:27.994 01:18:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 1475941 00:07:28.253 01:18:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:28.253 01:18:13 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:28.253 01:18:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:28.253 01:18:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:07:28.254 01:18:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:07:28.254 01:18:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:28.254 01:18:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:07:28.254 01:18:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:28.254 01:18:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:28.254 01:18:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:28.254 01:18:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:28.254 01:18:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.158 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:30.158 00:07:30.158 real 0m7.343s 00:07:30.158 user 0m10.830s 00:07:30.158 sys 0m2.517s 00:07:30.158 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:30.158 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:30.158 ************************************ 00:07:30.158 END TEST nvmf_abort 00:07:30.158 ************************************ 00:07:30.158 01:18:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:30.158 01:18:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:30.158 01:18:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:30.158 01:18:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:30.417 ************************************ 00:07:30.417 START TEST nvmf_ns_hotplug_stress 00:07:30.417 ************************************ 00:07:30.417 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:30.417 * Looking for test storage... 
00:07:30.417 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:30.417 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:30.417 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:07:30.417 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:30.417 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:30.417 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:30.417 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:30.417 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:30.417 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:30.417 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:30.417 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:30.417 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:30.417 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:30.417 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:30.417 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:30.417 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:30.417 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:07:30.417 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:30.417 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:30.417 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:30.417 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:30.417 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:30.417 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:30.417 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:30.417 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:30.417 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:30.417 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:30.417 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:30.417 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:30.417 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:30.417 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:30.417 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:30.417 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:30.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.418 --rc genhtml_branch_coverage=1 00:07:30.418 --rc genhtml_function_coverage=1 00:07:30.418 --rc genhtml_legend=1 00:07:30.418 --rc geninfo_all_blocks=1 00:07:30.418 --rc geninfo_unexecuted_blocks=1 00:07:30.418 00:07:30.418 ' 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:30.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.418 --rc genhtml_branch_coverage=1 00:07:30.418 --rc genhtml_function_coverage=1 00:07:30.418 --rc genhtml_legend=1 00:07:30.418 --rc geninfo_all_blocks=1 00:07:30.418 --rc geninfo_unexecuted_blocks=1 00:07:30.418 00:07:30.418 ' 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:30.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.418 --rc genhtml_branch_coverage=1 00:07:30.418 --rc genhtml_function_coverage=1 00:07:30.418 --rc genhtml_legend=1 00:07:30.418 --rc geninfo_all_blocks=1 00:07:30.418 --rc geninfo_unexecuted_blocks=1 00:07:30.418 00:07:30.418 ' 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:30.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.418 --rc genhtml_branch_coverage=1 00:07:30.418 --rc genhtml_function_coverage=1 00:07:30.418 --rc genhtml_legend=1 00:07:30.418 --rc geninfo_all_blocks=1 00:07:30.418 --rc geninfo_unexecuted_blocks=1 00:07:30.418 00:07:30.418 ' 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:30.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:30.418 01:18:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:32.322 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:32.322 
01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:32.322 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:32.322 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:32.322 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:32.322 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:32.581 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:32.581 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:32.581 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:32.581 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:32.581 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:32.581 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:32.581 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:32.581 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:32.581 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:32.581 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:07:32.581 00:07:32.581 --- 10.0.0.2 ping statistics --- 00:07:32.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.581 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:07:32.581 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:32.581 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:32.581 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:07:32.581 00:07:32.581 --- 10.0.0.1 ping statistics --- 00:07:32.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:32.581 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:07:32.581 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:32.581 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:07:32.581 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:32.581 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:32.581 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:32.581 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:32.581 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:32.581 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:32.581 01:18:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:32.581 01:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:32.581 01:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:32.581 01:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:32.581 01:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:32.581 01:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=1478300 00:07:32.581 01:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:32.581 01:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 1478300 00:07:32.581 01:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 
1478300 ']' 00:07:32.581 01:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.581 01:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:32.581 01:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.581 01:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:32.581 01:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:32.581 [2024-10-13 01:18:18.056793] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:07:32.581 [2024-10-13 01:18:18.056857] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:32.581 [2024-10-13 01:18:18.121284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:32.839 [2024-10-13 01:18:18.170039] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:32.839 [2024-10-13 01:18:18.170096] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:32.839 [2024-10-13 01:18:18.170112] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:32.839 [2024-10-13 01:18:18.170125] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:32.839 [2024-10-13 01:18:18.170137] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
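
A minimal sketch of the target-network bring-up that the nvmf/common.sh trace above records, reduced to plain shell for readability; the interface names (cvl_0_0/cvl_0_1), the 10.0.0.x addresses, TCP port 4420 and the nvmf_tgt flags are simply the values seen in this run and would differ on other hosts:

    # assumes the device/address values captured in this run
    target_if=cvl_0_0 ; initiator_if=cvl_0_1 ; target_ns=cvl_0_0_ns_spdk

    ip -4 addr flush "$target_if"
    ip -4 addr flush "$initiator_if"

    ip netns add "$target_ns"
    ip link set "$target_if" netns "$target_ns"               # move one port into the target namespace

    ip addr add 10.0.0.1/24 dev "$initiator_if"               # initiator side, default namespace
    ip netns exec "$target_ns" ip addr add 10.0.0.2/24 dev "$target_if"   # target side

    ip link set "$initiator_if" up
    ip netns exec "$target_ns" ip link set "$target_if" up
    ip netns exec "$target_ns" ip link set lo up

    iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port

    ping -c 1 10.0.0.2                                        # initiator -> target check
    ip netns exec "$target_ns" ping -c 1 10.0.0.1             # target -> initiator check

    # the target application is then launched inside the namespace (same flags as in the log)
    ip netns exec "$target_ns" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
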
00:07:32.839 [2024-10-13 01:18:18.171696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:32.839 [2024-10-13 01:18:18.171722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:32.839 [2024-10-13 01:18:18.171726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.839 01:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:32.839 01:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:07:32.839 01:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:32.839 01:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:32.839 01:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:32.839 01:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:32.839 01:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:32.839 01:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:33.097 [2024-10-13 01:18:18.559550] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:33.097 01:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:33.355 01:18:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:33.613 [2024-10-13 01:18:19.114631] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:33.613 01:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:33.871 01:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:34.128 Malloc0 00:07:34.128 01:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:34.386 Delay0 00:07:34.386 01:18:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.951 01:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:34.951 NULL1 00:07:34.951 01:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:35.209 01:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1478602 00:07:35.209 01:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:35.209 01:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1478602 00:07:35.209 01:18:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.581 Read completed with error (sct=0, sc=11) 00:07:36.581 01:18:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.581 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.581 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.581 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.581 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.581 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.581 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.838 01:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:36.838 01:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:37.096 true 00:07:37.096 01:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1478602 00:07:37.096 01:18:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.665 01:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.229 01:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:38.230 01:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:38.230 true 00:07:38.230 01:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1478602 00:07:38.230 01:18:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.487 01:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.745 01:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:38.745 01:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:39.002 true 00:07:39.260 01:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1478602 00:07:39.260 01:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.561 01:18:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.561 01:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:39.561 01:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:39.818 true 00:07:39.818 01:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1478602 00:07:39.818 01:18:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.189 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.189 01:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.189 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.189 01:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:41.190 01:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:41.448 true 00:07:41.448 01:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1478602 00:07:41.448 01:18:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.706 01:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.963 01:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:41.963 01:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:42.221 true 00:07:42.221 01:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
1478602 00:07:42.221 01:18:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.153 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.153 01:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.153 01:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:43.153 01:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:43.411 true 00:07:43.411 01:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1478602 00:07:43.411 01:18:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.976 01:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.976 01:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:43.976 01:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:44.234 true 00:07:44.234 01:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1478602 00:07:44.234 01:18:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.167 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.167 01:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.167 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.425 01:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:45.425 01:18:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:45.682 true 00:07:45.682 01:18:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1478602 00:07:45.682 01:18:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.939 01:18:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.197 01:18:31 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:46.197 01:18:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:46.454 true 00:07:46.454 01:18:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1478602 00:07:46.454 01:18:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.712 01:18:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.970 01:18:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:46.970 01:18:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:47.228 true 00:07:47.228 01:18:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1478602 00:07:47.228 01:18:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.599 01:18:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.599 01:18:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:48.599 01:18:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:48.856 true 00:07:48.856 01:18:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1478602 00:07:48.857 01:18:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.113 01:18:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.371 01:18:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:49.371 01:18:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:49.672 true 00:07:49.672 01:18:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1478602 00:07:49.672 01:18:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.945 01:18:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.202 01:18:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:50.202 01:18:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:50.459 true 00:07:50.459 01:18:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1478602 00:07:50.459 01:18:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.391 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:51.391 01:18:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.391 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:51.648 01:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:51.648 01:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:51.905 true 00:07:51.905 01:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1478602 00:07:51.905 01:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.162 01:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.419 01:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:52.419 01:18:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:52.676 true 00:07:52.676 01:18:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1478602 00:07:52.676 01:18:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.241 01:18:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.241 01:18:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:53.241 01:18:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:53.498 true 00:07:53.498 01:18:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 1478602 00:07:53.498 01:18:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.871 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:54.871 01:18:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.871 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:54.871 01:18:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:54.871 01:18:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:55.129 true 00:07:55.386 01:18:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1478602 00:07:55.386 01:18:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.643 01:18:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.900 01:18:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:55.900 01:18:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:56.157 true 00:07:56.157 01:18:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1478602 00:07:56.157 01:18:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.415 01:18:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.672 01:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:56.672 01:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:56.929 true 00:07:56.929 01:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1478602 00:07:56.929 01:18:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.861 01:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.119 01:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:58.119 01:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:58.376 true 00:07:58.376 01:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1478602 00:07:58.376 01:18:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.633 01:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.891 01:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:58.891 01:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:59.149 true 00:07:59.149 01:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1478602 00:07:59.149 01:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.407 01:18:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.664 01:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:59.664 01:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:59.921 true 00:07:59.921 01:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1478602 00:07:59.921 01:18:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.855 01:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.112 01:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:01.112 01:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:01.370 true 00:08:01.370 01:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1478602 00:08:01.370 01:18:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.627 01:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.193 01:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:02.193 01:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:02.193 true 00:08:02.193 01:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1478602 00:08:02.193 01:18:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.450 01:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.707 01:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:02.707 01:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:02.965 true 00:08:03.222 01:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1478602 00:08:03.222 01:18:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.154 01:18:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.412 01:18:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:04.412 01:18:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:04.669 true 00:08:04.669 01:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1478602 00:08:04.669 01:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.927 01:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.184 01:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:05.184 01:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:05.442 true 00:08:05.442 01:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1478602 00:08:05.442 01:18:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.700 Initializing NVMe Controllers 00:08:05.700 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:05.700 Controller IO queue size 128, less than required. 00:08:05.700 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:05.700 Controller IO queue size 128, less than required. 00:08:05.700 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:05.700 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:05.700 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:05.700 Initialization complete. Launching workers. 00:08:05.700 ======================================================== 00:08:05.700 Latency(us) 00:08:05.700 Device Information : IOPS MiB/s Average min max 00:08:05.700 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 604.74 0.30 86907.69 3413.65 1063813.67 00:08:05.700 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8636.26 4.22 14820.99 1685.12 448615.63 00:08:05.700 ======================================================== 00:08:05.700 Total : 9241.00 4.51 19538.38 1685.12 1063813.67 00:08:05.700 00:08:05.700 01:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.957 01:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:08:05.957 01:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:08:06.215 true 00:08:06.215 01:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1478602 00:08:06.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1478602) - No such process 00:08:06.215 01:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1478602 00:08:06.215 01:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.472 01:18:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:06.730 01:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:06.730 01:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:06.730 01:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:06.730 01:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:06.730 01:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 
100 4096 00:08:06.988 null0 00:08:06.988 01:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:06.988 01:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:06.988 01:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:07.245 null1 00:08:07.245 01:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:07.245 01:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:07.245 01:18:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:07.503 null2 00:08:07.503 01:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:07.503 01:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:07.503 01:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:07.760 null3 00:08:07.760 01:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:07.760 01:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:07.760 01:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:08.017 null4 00:08:08.017 01:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:08.017 01:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:08.017 01:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:08.274 null5 00:08:08.274 01:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:08.274 01:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:08.274 01:18:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:08.532 null6 00:08:08.532 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:08.532 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:08.532 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:08.790 null7 00:08:09.048 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:09.048 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:09.048 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:09.048 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:09.048 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:09.048 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:09.048 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:09.048 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:09.048 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:09.048 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:09.048 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.048 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:09.048 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:09.048 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:09.048 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:09.048 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1482791 1482792 1482794 1482796 1482798 1482800 1482802 1482804 00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.049 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:09.307 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:09.307 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:09.307 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:09.307 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.307 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:09.307 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:09.307 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:09.307 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:09.564 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.564 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.564 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:09.564 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.564 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.564 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:09.564 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.564 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.564 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:09.564 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.564 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.564 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:09.564 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.564 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.564 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.564 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:09.564 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.564 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:09.564 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.564 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.564 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:09.564 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:09.564 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:09.564 01:18:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:09.821 01:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:09.821 01:18:55 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:09.821 01:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.821 01:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:09.821 01:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:09.821 01:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:09.821 01:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:09.821 01:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:10.078 01:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.078 01:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.078 01:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:10.078 01:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.078 01:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.078 01:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:10.078 01:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.078 01:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.078 01:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:10.078 01:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.078 01:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.078 01:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:10.078 01:18:55 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.078 01:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.078 01:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.078 01:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:10.078 01:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.078 01:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:10.078 01:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.078 01:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.078 01:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:10.078 01:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.078 01:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.078 01:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:10.336 01:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:10.336 01:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:10.336 01:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.336 01:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:10.336 01:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:10.336 01:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:10.336 01:18:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:10.336 01:18:55 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:10.902 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.902 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.902 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:10.902 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.902 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.902 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:10.902 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.902 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.902 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:10.902 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.902 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.902 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:10.902 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.902 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.902 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:10.902 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.902 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.902 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:10.902 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.902 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.902 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 
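Each add_remove <nsid> <bdev> worker traced here repeatedly attaches and detaches the same namespace on cnode1 through rpc.py (script lines @14-@18 in the trace). Reconstructed from the xtrace, with the long rpc.py path pulled into a variable purely for readability, the per-worker loop looks roughly like:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    add_remove() {
        local nsid=$1 bdev=$2
        for (( i = 0; i < 10; ++i )); do
            # hot-add the namespace, then immediately hot-remove it again
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

With eight such workers hammering the same subsystem concurrently, the add and remove RPCs complete in arbitrary order, which is why the namespace IDs appear shuffled within each burst of @17/@18 records in the surrounding output.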
00:08:10.902 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.902 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.902 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:11.160 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.160 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:11.160 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:11.160 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:11.160 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:11.160 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:11.160 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:11.160 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:11.418 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.418 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.418 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:11.418 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.418 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.419 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:11.419 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.419 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.419 01:18:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:11.419 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.419 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.419 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:11.419 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.419 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.419 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:11.419 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.419 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.419 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:11.419 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.419 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.419 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:11.419 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.419 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.419 01:18:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:11.677 01:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.677 01:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:11.677 01:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:11.677 01:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:11.677 01:18:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:11.677 01:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:11.677 01:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:11.677 01:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:11.935 01:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.935 01:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.935 01:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.935 01:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.935 01:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:11.935 01:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:11.935 01:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.935 01:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.935 01:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:11.935 01:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.935 01:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.935 01:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.935 01:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:11.935 01:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.935 01:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:11.935 01:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.935 01:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.935 01:18:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:11.936 01:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.936 01:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.936 01:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:11.936 01:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.936 01:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.936 01:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:12.193 01:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:12.193 01:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:12.193 01:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.193 01:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:12.193 01:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:12.193 01:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:12.193 01:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:12.193 01:18:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:12.451 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.451 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.451 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:12.451 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.451 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.451 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:12.451 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.451 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.451 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:12.451 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.451 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.451 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:12.451 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.451 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.451 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.451 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:12.451 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.451 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:12.451 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.451 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.451 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:12.451 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.451 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.708 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:13.005 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.005 01:18:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:13.005 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:13.005 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:13.005 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:13.005 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:13.005 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:13.005 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:13.289 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.289 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.289 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:13.289 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.289 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.289 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:13.289 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.289 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.289 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:13.289 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.289 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.289 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:13.289 01:18:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.289 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.289 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.289 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:13.289 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.289 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:13.289 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.289 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.289 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:13.289 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.289 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.289 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:13.547 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:13.547 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.547 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:13.547 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:13.547 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:13.547 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:13.547 01:18:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:13.547 01:18:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:13.805 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.805 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.805 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:13.805 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.805 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.805 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:13.805 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.805 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.805 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:13.805 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.805 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.805 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:13.805 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.805 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.805 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:13.805 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.805 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.805 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:13.805 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.805 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.805 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 
00:08:13.805 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.805 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.805 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:14.063 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:14.063 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.063 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:14.063 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:14.063 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:14.063 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:14.063 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:14.063 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:14.321 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.321 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.321 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:14.321 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.321 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.321 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:14.321 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.321 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.321 01:18:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.321 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.321 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:14.321 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:14.321 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.321 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.321 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:14.321 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.321 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.321 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:14.321 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.321 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.321 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:14.321 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.321 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.321 01:18:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:14.580 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.580 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:14.580 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:14.580 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:14.580 01:19:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:14.580 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:14.580 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:14.580 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:15.146 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.146 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.146 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.146 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.146 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.146 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.146 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.146 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.146 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.146 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.146 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.146 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.146 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.146 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.146 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:15.146 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:15.146 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:15.146 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:15.146 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:15.146 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:08:15.146 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:15.146 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:08:15.146 01:19:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:15.146 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:15.146 rmmod nvme_tcp 00:08:15.146 rmmod nvme_fabrics 00:08:15.146 rmmod nvme_keyring 00:08:15.146 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:15.146 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:08:15.146 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:08:15.146 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 1478300 ']' 00:08:15.146 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 1478300 00:08:15.146 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 1478300 ']' 00:08:15.146 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 1478300 00:08:15.146 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:08:15.146 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:15.146 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1478300 00:08:15.146 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:15.146 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:15.146 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1478300' 00:08:15.146 killing process with pid 1478300 00:08:15.146 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 1478300 00:08:15.146 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 1478300 00:08:15.405 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:15.405 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:15.405 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:15.405 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:08:15.405 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:08:15.405 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:15.405 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:08:15.405 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:15.405 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:15.405 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.405 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 
15> /dev/null' 00:08:15.405 01:19:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.307 01:19:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:17.308 00:08:17.308 real 0m47.092s 00:08:17.308 user 3m39.825s 00:08:17.308 sys 0m15.822s 00:08:17.308 01:19:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:17.308 01:19:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:17.308 ************************************ 00:08:17.308 END TEST nvmf_ns_hotplug_stress 00:08:17.308 ************************************ 00:08:17.308 01:19:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:17.308 01:19:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:17.308 01:19:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:17.308 01:19:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:17.308 ************************************ 00:08:17.308 START TEST nvmf_delete_subsystem 00:08:17.308 ************************************ 00:08:17.308 01:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:17.566 * Looking for test storage... 00:08:17.566 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:17.566 01:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:17.566 01:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:08:17.566 01:19:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:17.566 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:17.566 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:17.566 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:17.566 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:17.566 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:08:17.566 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:08:17.566 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:08:17.566 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:08:17.566 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:17.566 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:17.566 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:17.566 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:17.566 01:19:03 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:08:17.566 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:08:17.566 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:17.566 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:17.566 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:08:17.566 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:08:17.566 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:17.566 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:08:17.566 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:17.566 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:08:17.566 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:08:17.566 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:17.566 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:08:17.566 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:08:17.566 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:17.566 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:17.566 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:08:17.566 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:17.566 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:17.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.566 --rc genhtml_branch_coverage=1 00:08:17.566 --rc genhtml_function_coverage=1 00:08:17.566 --rc genhtml_legend=1 00:08:17.566 --rc geninfo_all_blocks=1 00:08:17.566 --rc geninfo_unexecuted_blocks=1 00:08:17.567 00:08:17.567 ' 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:17.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.567 --rc genhtml_branch_coverage=1 00:08:17.567 --rc genhtml_function_coverage=1 00:08:17.567 --rc genhtml_legend=1 00:08:17.567 --rc geninfo_all_blocks=1 00:08:17.567 --rc geninfo_unexecuted_blocks=1 00:08:17.567 00:08:17.567 ' 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:17.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.567 --rc genhtml_branch_coverage=1 00:08:17.567 --rc genhtml_function_coverage=1 00:08:17.567 --rc genhtml_legend=1 00:08:17.567 --rc geninfo_all_blocks=1 00:08:17.567 --rc geninfo_unexecuted_blocks=1 00:08:17.567 00:08:17.567 ' 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:17.567 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.567 --rc genhtml_branch_coverage=1 00:08:17.567 --rc genhtml_function_coverage=1 00:08:17.567 --rc genhtml_legend=1 00:08:17.567 --rc geninfo_all_blocks=1 00:08:17.567 --rc geninfo_unexecuted_blocks=1 00:08:17.567 00:08:17.567 ' 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:17.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:08:17.567 01:19:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:20.100 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:20.100 
01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:20.100 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:20.100 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:20.100 Found net devices under 0000:0a:00.1: cvl_0_1 
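For readers following the trace: the block above is the harness (nvmftestinit / gather_supported_nvmf_pci_devs) probing the node for usable NICs — it walks the PCI bus for supported Intel E810 parts (device ID 0x159b) and records the kernel net device bound to each function (cvl_0_0 and cvl_0_1 on this node). A rough manual equivalent, a sketch using only stock Linux tooling and the PCI addresses this particular node happened to report, would be:

  # List Intel E810 network functions (vendor 0x8086, device 0x159b), the same match the harness made above
  lspci -nn -d 8086:159b

  # The netdev attached to each function sits under its sysfs node; this is where the
  # "Found net devices under 0000:0a:00.0: cvl_0_0" lines come from
  ls /sys/bus/pci/devices/0000:0a:00.0/net/
  ls /sys/bus/pci/devices/0000:0a:00.1/net/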
00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:20.100 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:20.101 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:20.101 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:20.101 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:20.101 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:20.101 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:20.101 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:20.101 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:20.101 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:20.101 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:20.101 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:20.101 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:08:20.101 00:08:20.101 --- 10.0.0.2 ping statistics --- 00:08:20.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:20.101 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:08:20.101 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:20.101 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:20.101 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:08:20.101 00:08:20.101 --- 10.0.0.1 ping statistics --- 00:08:20.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:20.101 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:08:20.101 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:20.101 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:08:20.101 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:20.101 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:20.101 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:20.101 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:20.101 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:20.101 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:20.101 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:20.101 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:20.101 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:20.101 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:20.101 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:20.101 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=1485634 00:08:20.101 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:20.101 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 1485634 00:08:20.101 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 1485634 ']' 00:08:20.101 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.101 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:20.101 01:19:05 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.101 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:20.101 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:20.101 [2024-10-13 01:19:05.487314] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:08:20.101 [2024-10-13 01:19:05.487410] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:20.101 [2024-10-13 01:19:05.555089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:20.101 [2024-10-13 01:19:05.602648] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:20.101 [2024-10-13 01:19:05.602713] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:20.101 [2024-10-13 01:19:05.602729] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:20.101 [2024-10-13 01:19:05.602742] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:20.101 [2024-10-13 01:19:05.602754] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:20.101 [2024-10-13 01:19:05.604165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:20.101 [2024-10-13 01:19:05.604171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.360 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:20.360 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:08:20.360 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:20.360 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:20.360 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:20.360 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:20.360 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:20.360 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.360 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:20.360 [2024-10-13 01:19:05.749034] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:20.360 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.360 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:20.360 01:19:05 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.360 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:20.360 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.360 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:20.360 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.360 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:20.360 [2024-10-13 01:19:05.765277] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:20.360 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.360 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:20.360 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.360 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:20.360 NULL1 00:08:20.360 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.360 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:20.360 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.360 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:20.360 Delay0 00:08:20.360 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.360 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:20.360 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.360 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:20.360 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.360 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1485724 00:08:20.360 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:20.360 01:19:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:20.360 [2024-10-13 01:19:05.840111] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
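At this point the target side is fully assembled: nvmf_tgt is running inside the cvl_0_0_ns_spdk namespace on core mask 0x3, and the traced rpc_cmd calls have created the TCP transport, subsystem cnode1 with its 10.0.0.2:4420 listener, a null bdev wrapped in a delay bdev (Delay0), and attached that as a namespace before launching spdk_nvme_perf against it. A condensed, hand-run sketch of the same sequence — assuming an SPDK checkout as the working directory and a target already listening on the default RPC socket (both assumptions, not the exact harness invocation) — looks like this:

  RPC=./scripts/rpc.py                          # rpc.py from the SPDK tree; path is an assumption
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_null_create NULL1 1000 512          # backing null bdev: 1000 MB, 512-byte blocks
  $RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s injected latency per I/O
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

  # Drive queued I/O at the slow namespace; the subsystem will be deleted underneath it
  ./build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!

The delay bdev is what makes the test interesting: with roughly a second of latency per I/O and a queue depth of 128, plenty of commands are still outstanding when the subsystem is torn down.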
00:08:22.258 01:19:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:22.258 01:19:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.258 01:19:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Write completed with error (sct=0, sc=8) 00:08:22.824 starting I/O failed: -6 00:08:22.824 Write completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 starting I/O failed: -6 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Write completed with error (sct=0, sc=8) 00:08:22.824 starting I/O failed: -6 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Write completed with error (sct=0, sc=8) 00:08:22.824 starting I/O failed: -6 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Write completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 starting I/O failed: -6 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Write completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 starting I/O failed: -6 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Write completed with error (sct=0, sc=8) 00:08:22.824 Write completed with error (sct=0, sc=8) 00:08:22.824 Write completed with error (sct=0, sc=8) 00:08:22.824 starting I/O failed: -6 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 starting I/O failed: -6 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 starting I/O failed: -6 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Write completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 starting I/O failed: -6 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Write completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Write completed with error (sct=0, sc=8) 00:08:22.824 starting I/O failed: -6 00:08:22.824 Write completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 starting I/O failed: -6 00:08:22.824 Write completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 starting I/O failed: -6 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Write completed with error (sct=0, sc=8) 
00:08:22.824 starting I/O failed: -6 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Write completed with error (sct=0, sc=8) 00:08:22.824 Write completed with error (sct=0, sc=8) 00:08:22.824 starting I/O failed: -6 00:08:22.824 Write completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 starting I/O failed: -6 00:08:22.824 Write completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Write completed with error (sct=0, sc=8) 00:08:22.824 starting I/O failed: -6 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 starting I/O failed: -6 00:08:22.824 Write completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 starting I/O failed: -6 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Write completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 starting I/O failed: -6 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 starting I/O failed: -6 00:08:22.824 Write completed with error (sct=0, sc=8) 00:08:22.824 Write completed with error (sct=0, sc=8) 00:08:22.824 starting I/O failed: -6 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 starting I/O failed: -6 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Write completed with error (sct=0, sc=8) 00:08:22.824 starting I/O failed: -6 00:08:22.824 Write completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 starting I/O failed: -6 00:08:22.824 Write completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Write completed with error (sct=0, sc=8) 00:08:22.824 starting I/O failed: -6 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 starting I/O failed: -6 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 starting I/O failed: -6 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Write completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 starting I/O failed: -6 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 starting I/O failed: -6 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 starting I/O failed: -6 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Write completed with error (sct=0, sc=8) 00:08:22.824 starting I/O failed: -6 00:08:22.824 Write completed with error (sct=0, sc=8) 00:08:22.824 Write completed with error (sct=0, sc=8) 00:08:22.824 starting I/O failed: -6 00:08:22.824 Write completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error 
(sct=0, sc=8) 00:08:22.824 starting I/O failed: -6 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Write completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Write completed with error (sct=0, sc=8) 00:08:22.824 starting I/O failed: -6 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 starting I/O failed: -6 00:08:22.824 Write completed with error (sct=0, sc=8) 00:08:22.824 Write completed with error (sct=0, sc=8) 00:08:22.824 starting I/O failed: -6 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 starting I/O failed: -6 00:08:22.824 Write completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 starting I/O failed: -6 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Write completed with error (sct=0, sc=8) 00:08:22.824 starting I/O failed: -6 00:08:22.824 Write completed with error (sct=0, sc=8) 00:08:22.824 Write completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Write completed with error (sct=0, sc=8) 00:08:22.824 starting I/O failed: -6 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 starting I/O failed: -6 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Write completed with error (sct=0, sc=8) 00:08:22.824 starting I/O failed: -6 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 starting I/O failed: -6 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Write completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 starting I/O failed: -6 00:08:22.824 starting I/O failed: -6 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.824 starting I/O failed: -6 00:08:22.824 Read completed with error (sct=0, sc=8) 00:08:22.825 Write completed with error (sct=0, sc=8) 00:08:22.825 starting I/O failed: -6 00:08:22.825 Read completed with error (sct=0, sc=8) 00:08:22.825 [2024-10-13 01:19:08.132524] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa604b0 is same with the state(6) to be set 00:08:22.825 Read completed with error (sct=0, sc=8) 00:08:22.825 starting I/O failed: -6 00:08:22.825 Read completed with error (sct=0, sc=8) 00:08:22.825 Read completed with error (sct=0, sc=8) 00:08:22.825 Read completed with error (sct=0, sc=8) 00:08:22.825 Read completed with error (sct=0, sc=8) 00:08:22.825 Read completed with error (sct=0, sc=8) 00:08:22.825 Read completed with error (sct=0, sc=8) 00:08:22.825 Read completed with error (sct=0, sc=8) 00:08:22.825 starting I/O failed: -6 00:08:22.825 Read completed with error (sct=0, sc=8) 00:08:22.825 Read completed with error (sct=0, sc=8) 00:08:22.825 Write completed with error (sct=0, sc=8) 00:08:22.825 Write completed with error (sct=0, sc=8) 00:08:22.825 Write completed with error (sct=0, sc=8) 00:08:22.825 Read completed with error (sct=0, sc=8) 00:08:22.825 Write completed with error (sct=0, sc=8) 00:08:22.825 Write 
completed with error (sct=0, sc=8) 00:08:22.825 Read completed with error (sct=0, sc=8) 00:08:22.825 Write completed with error (sct=0, sc=8) 00:08:22.825 Write completed with error (sct=0, sc=8) 00:08:22.825 Read completed with error (sct=0, sc=8) 00:08:22.825 Write completed with error (sct=0, sc=8) 00:08:22.825 Read completed with error (sct=0, sc=8) 00:08:22.825 Write completed with error (sct=0, sc=8) 00:08:22.825 Read completed with error (sct=0, sc=8) 00:08:22.825 Read completed with error (sct=0, sc=8) 00:08:22.825 starting I/O failed: -6 00:08:22.825 Read completed with error (sct=0, sc=8) 00:08:22.825 Write completed with error (sct=0, sc=8) 00:08:22.825 Read completed with error (sct=0, sc=8) 00:08:22.825 Read completed with error (sct=0, sc=8) 00:08:22.825 starting I/O failed: -6 00:08:22.825 Read completed with error (sct=0, sc=8) 00:08:22.825 Read completed with error (sct=0, sc=8) 00:08:22.825 Read completed with error (sct=0, sc=8) 00:08:22.825 Write completed with error (sct=0, sc=8) 00:08:22.825 starting I/O failed: -6 00:08:22.825 Read completed with error (sct=0, sc=8) 00:08:22.825 Write completed with error (sct=0, sc=8) 00:08:22.825 Read completed with error (sct=0, sc=8) 00:08:22.825 Write completed with error (sct=0, sc=8) 00:08:22.825 starting I/O failed: -6 00:08:22.825 Read completed with error (sct=0, sc=8) 00:08:22.825 Read completed with error (sct=0, sc=8) 00:08:22.825 Read completed with error (sct=0, sc=8) 00:08:22.825 Write completed with error (sct=0, sc=8) 00:08:22.825 Read completed with error (sct=0, sc=8) 00:08:22.825 starting I/O failed: -6 00:08:22.825 Read completed with error (sct=0, sc=8) 00:08:22.825 Read completed with error (sct=0, sc=8) 00:08:22.825 Write completed with error (sct=0, sc=8) 00:08:22.825 Read completed with error (sct=0, sc=8) 00:08:22.825 Read completed with error (sct=0, sc=8) 00:08:22.825 Read completed with error (sct=0, sc=8) 00:08:22.825 Read completed with error (sct=0, sc=8) 00:08:22.825 Read completed with error (sct=0, sc=8) 00:08:22.825 Read completed with error (sct=0, sc=8) 00:08:22.825 Write completed with error (sct=0, sc=8) 00:08:22.825 Read completed with error (sct=0, sc=8) 00:08:22.825 Read completed with error (sct=0, sc=8) 00:08:22.825 Read completed with error (sct=0, sc=8) 00:08:23.757 [2024-10-13 01:19:09.099393] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6e670 is same with the state(6) to be set 00:08:23.757 Write completed with error (sct=0, sc=8) 00:08:23.757 Write completed with error (sct=0, sc=8) 00:08:23.757 Write completed with error (sct=0, sc=8) 00:08:23.757 Read completed with error (sct=0, sc=8) 00:08:23.757 Read completed with error (sct=0, sc=8) 00:08:23.757 Read completed with error (sct=0, sc=8) 00:08:23.757 Read completed with error (sct=0, sc=8) 00:08:23.757 Read completed with error (sct=0, sc=8) 00:08:23.757 Read completed with error (sct=0, sc=8) 00:08:23.757 Write completed with error (sct=0, sc=8) 00:08:23.757 Read completed with error (sct=0, sc=8) 00:08:23.757 Read completed with error (sct=0, sc=8) 00:08:23.757 Read completed with error (sct=0, sc=8) 00:08:23.757 Read completed with error (sct=0, sc=8) 00:08:23.757 Write completed with error (sct=0, sc=8) 00:08:23.757 Read completed with error (sct=0, sc=8) 00:08:23.757 Read completed with error (sct=0, sc=8) 00:08:23.757 Read completed with error (sct=0, sc=8) 00:08:23.757 Read completed with error (sct=0, sc=8) 00:08:23.757 Read completed with error (sct=0, sc=8) 00:08:23.757 Read 
completed with error (sct=0, sc=8) 00:08:23.757 [2024-10-13 01:19:09.129922] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa608a0 is same with the state(6) to be set 00:08:23.757 Read completed with error (sct=0, sc=8) 00:08:23.757 Read completed with error (sct=0, sc=8) 00:08:23.757 Read completed with error (sct=0, sc=8) 00:08:23.757 Write completed with error (sct=0, sc=8) 00:08:23.757 Read completed with error (sct=0, sc=8) 00:08:23.757 Read completed with error (sct=0, sc=8) 00:08:23.757 Read completed with error (sct=0, sc=8) 00:08:23.757 Read completed with error (sct=0, sc=8) 00:08:23.757 Read completed with error (sct=0, sc=8) 00:08:23.757 Read completed with error (sct=0, sc=8) 00:08:23.757 Read completed with error (sct=0, sc=8) 00:08:23.757 Read completed with error (sct=0, sc=8) 00:08:23.757 Write completed with error (sct=0, sc=8) 00:08:23.757 Read completed with error (sct=0, sc=8) 00:08:23.757 Read completed with error (sct=0, sc=8) 00:08:23.757 Write completed with error (sct=0, sc=8) 00:08:23.757 Read completed with error (sct=0, sc=8) 00:08:23.757 Write completed with error (sct=0, sc=8) 00:08:23.757 Write completed with error (sct=0, sc=8) 00:08:23.757 Read completed with error (sct=0, sc=8) 00:08:23.757 Read completed with error (sct=0, sc=8) 00:08:23.757 [2024-10-13 01:19:09.130139] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa60f00 is same with the state(6) to be set 00:08:23.757 Read completed with error (sct=0, sc=8) 00:08:23.757 Read completed with error (sct=0, sc=8) 00:08:23.757 Write completed with error (sct=0, sc=8) 00:08:23.757 Write completed with error (sct=0, sc=8) 00:08:23.757 Read completed with error (sct=0, sc=8) 00:08:23.757 Write completed with error (sct=0, sc=8) 00:08:23.757 Write completed with error (sct=0, sc=8) 00:08:23.757 Write completed with error (sct=0, sc=8) 00:08:23.757 Write completed with error (sct=0, sc=8) 00:08:23.757 Write completed with error (sct=0, sc=8) 00:08:23.757 Write completed with error (sct=0, sc=8) 00:08:23.757 Read completed with error (sct=0, sc=8) 00:08:23.757 Read completed with error (sct=0, sc=8) 00:08:23.757 Read completed with error (sct=0, sc=8) 00:08:23.757 Read completed with error (sct=0, sc=8) 00:08:23.757 Read completed with error (sct=0, sc=8) 00:08:23.757 Write completed with error (sct=0, sc=8) 00:08:23.757 Read completed with error (sct=0, sc=8) 00:08:23.757 Read completed with error (sct=0, sc=8) 00:08:23.757 Write completed with error (sct=0, sc=8) 00:08:23.757 Write completed with error (sct=0, sc=8) 00:08:23.757 Write completed with error (sct=0, sc=8) 00:08:23.757 Write completed with error (sct=0, sc=8) 00:08:23.757 Read completed with error (sct=0, sc=8) 00:08:23.757 Write completed with error (sct=0, sc=8) 00:08:23.757 Read completed with error (sct=0, sc=8) 00:08:23.757 Read completed with error (sct=0, sc=8) 00:08:23.757 Read completed with error (sct=0, sc=8) 00:08:23.757 Read completed with error (sct=0, sc=8) 00:08:23.757 Write completed with error (sct=0, sc=8) 00:08:23.757 Write completed with error (sct=0, sc=8) 00:08:23.757 Read completed with error (sct=0, sc=8) 00:08:23.757 Write completed with error (sct=0, sc=8) 00:08:23.757 Read completed with error (sct=0, sc=8) 00:08:23.758 Write completed with error (sct=0, sc=8) 00:08:23.758 Write completed with error (sct=0, sc=8) 00:08:23.758 Read completed with error (sct=0, sc=8) 00:08:23.758 Read completed with error (sct=0, sc=8) 00:08:23.758 Read completed with 
error (sct=0, sc=8) 00:08:23.758 [2024-10-13 01:19:09.131627] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe8bc00cfe0 is same with the state(6) to be set 00:08:23.758 Read completed with error (sct=0, sc=8) 00:08:23.758 Write completed with error (sct=0, sc=8) 00:08:23.758 Read completed with error (sct=0, sc=8) 00:08:23.758 Write completed with error (sct=0, sc=8) 00:08:23.758 Read completed with error (sct=0, sc=8) 00:08:23.758 Read completed with error (sct=0, sc=8) 00:08:23.758 Read completed with error (sct=0, sc=8) 00:08:23.758 Read completed with error (sct=0, sc=8) 00:08:23.758 Read completed with error (sct=0, sc=8) 00:08:23.758 Write completed with error (sct=0, sc=8) 00:08:23.758 Read completed with error (sct=0, sc=8) 00:08:23.758 Read completed with error (sct=0, sc=8) 00:08:23.758 Write completed with error (sct=0, sc=8) 00:08:23.758 Write completed with error (sct=0, sc=8) 00:08:23.758 Read completed with error (sct=0, sc=8) 00:08:23.758 Read completed with error (sct=0, sc=8) 00:08:23.758 Read completed with error (sct=0, sc=8) 00:08:23.758 Read completed with error (sct=0, sc=8) 00:08:23.758 Read completed with error (sct=0, sc=8) 00:08:23.758 Read completed with error (sct=0, sc=8) 00:08:23.758 Read completed with error (sct=0, sc=8) 00:08:23.758 Read completed with error (sct=0, sc=8) 00:08:23.758 Write completed with error (sct=0, sc=8) 00:08:23.758 Read completed with error (sct=0, sc=8) 00:08:23.758 Read completed with error (sct=0, sc=8) 00:08:23.758 Read completed with error (sct=0, sc=8) 00:08:23.758 Write completed with error (sct=0, sc=8) 00:08:23.758 Read completed with error (sct=0, sc=8) 00:08:23.758 Read completed with error (sct=0, sc=8) 00:08:23.758 Read completed with error (sct=0, sc=8) 00:08:23.758 Read completed with error (sct=0, sc=8) 00:08:23.758 Read completed with error (sct=0, sc=8) 00:08:23.758 Write completed with error (sct=0, sc=8) 00:08:23.758 Read completed with error (sct=0, sc=8) 00:08:23.758 Write completed with error (sct=0, sc=8) 00:08:23.758 Read completed with error (sct=0, sc=8) 00:08:23.758 Write completed with error (sct=0, sc=8) 00:08:23.758 Write completed with error (sct=0, sc=8) 00:08:23.758 [2024-10-13 01:19:09.132405] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe8bc00d640 is same with the state(6) to be set 00:08:23.758 Initializing NVMe Controllers 00:08:23.758 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:23.758 Controller IO queue size 128, less than required. 00:08:23.758 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:23.758 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:23.758 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:23.758 Initialization complete. Launching workers. 
00:08:23.758 ======================================================== 00:08:23.758 Latency(us) 00:08:23.758 Device Information : IOPS MiB/s Average min max 00:08:23.758 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 163.70 0.08 908729.09 536.46 1013322.17 00:08:23.758 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 182.05 0.09 912235.61 622.33 1011192.48 00:08:23.758 ======================================================== 00:08:23.758 Total : 345.75 0.17 910575.42 536.46 1013322.17 00:08:23.758 00:08:23.758 [2024-10-13 01:19:09.132978] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6e670 (9): Bad file descriptor 00:08:23.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:23.758 01:19:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.758 01:19:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:23.758 01:19:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1485724 00:08:23.758 01:19:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:24.323 01:19:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:24.323 01:19:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1485724 00:08:24.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1485724) - No such process 00:08:24.323 01:19:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1485724 00:08:24.323 01:19:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:08:24.323 01:19:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1485724 00:08:24.323 01:19:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:08:24.323 01:19:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.323 01:19:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:08:24.323 01:19:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.323 01:19:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1485724 00:08:24.323 01:19:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:08:24.323 01:19:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:24.323 01:19:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:24.323 01:19:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:24.323 01:19:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:24.323 01:19:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.323 01:19:09 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:24.323 01:19:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.323 01:19:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:24.323 01:19:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.323 01:19:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:24.323 [2024-10-13 01:19:09.655964] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:24.323 01:19:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.324 01:19:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:24.324 01:19:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.324 01:19:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:24.324 01:19:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.324 01:19:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1486137 00:08:24.324 01:19:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:24.324 01:19:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1486137 00:08:24.324 01:19:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:24.324 01:19:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:24.324 [2024-10-13 01:19:09.712374] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:08:24.889 01:19:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:24.889 01:19:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1486137 00:08:24.889 01:19:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:25.146 01:19:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:25.146 01:19:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1486137 00:08:25.146 01:19:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:25.712 01:19:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:25.712 01:19:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1486137 00:08:25.712 01:19:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:26.277 01:19:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:26.277 01:19:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1486137 00:08:26.277 01:19:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:26.842 01:19:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:26.842 01:19:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1486137 00:08:26.842 01:19:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:27.407 01:19:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:27.407 01:19:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1486137 00:08:27.407 01:19:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:27.665 Initializing NVMe Controllers 00:08:27.665 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:27.665 Controller IO queue size 128, less than required. 00:08:27.665 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:27.665 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:27.665 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:27.665 Initialization complete. Launching workers. 
00:08:27.665 ======================================================== 00:08:27.665 Latency(us) 00:08:27.665 Device Information : IOPS MiB/s Average min max 00:08:27.665 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004100.70 1000189.03 1042409.38 00:08:27.665 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004738.72 1000255.50 1011949.60 00:08:27.665 ======================================================== 00:08:27.665 Total : 256.00 0.12 1004419.71 1000189.03 1042409.38 00:08:27.665 00:08:27.665 01:19:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:27.665 01:19:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1486137 00:08:27.665 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1486137) - No such process 00:08:27.665 01:19:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1486137 00:08:27.665 01:19:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:27.665 01:19:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:27.665 01:19:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:27.665 01:19:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:08:27.665 01:19:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:27.665 01:19:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:08:27.665 01:19:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:27.665 01:19:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:27.665 rmmod nvme_tcp 00:08:27.665 rmmod nvme_fabrics 00:08:27.665 rmmod nvme_keyring 00:08:27.923 01:19:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:27.923 01:19:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:08:27.923 01:19:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:08:27.923 01:19:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 1485634 ']' 00:08:27.923 01:19:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 1485634 00:08:27.923 01:19:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 1485634 ']' 00:08:27.923 01:19:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 1485634 00:08:27.923 01:19:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:08:27.923 01:19:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:27.923 01:19:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1485634 00:08:27.923 01:19:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:27.923 01:19:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:08:27.923 01:19:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1485634' 00:08:27.923 killing process with pid 1485634 00:08:27.923 01:19:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 1485634 00:08:27.923 01:19:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 1485634 00:08:27.923 01:19:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:27.923 01:19:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:27.923 01:19:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:27.923 01:19:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:08:27.923 01:19:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:08:27.923 01:19:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:27.923 01:19:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:08:27.923 01:19:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:27.923 01:19:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:27.923 01:19:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.923 01:19:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:27.923 01:19:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:30.457 00:08:30.457 real 0m12.654s 00:08:30.457 user 0m28.316s 00:08:30.457 sys 0m3.129s 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:30.457 ************************************ 00:08:30.457 END TEST nvmf_delete_subsystem 00:08:30.457 ************************************ 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:30.457 ************************************ 00:08:30.457 START TEST nvmf_host_management 00:08:30.457 ************************************ 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:30.457 * Looking for test storage... 
00:08:30.457 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:30.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.457 --rc genhtml_branch_coverage=1 00:08:30.457 --rc genhtml_function_coverage=1 00:08:30.457 --rc genhtml_legend=1 00:08:30.457 --rc geninfo_all_blocks=1 00:08:30.457 --rc geninfo_unexecuted_blocks=1 00:08:30.457 00:08:30.457 ' 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:30.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.457 --rc genhtml_branch_coverage=1 00:08:30.457 --rc genhtml_function_coverage=1 00:08:30.457 --rc genhtml_legend=1 00:08:30.457 --rc geninfo_all_blocks=1 00:08:30.457 --rc geninfo_unexecuted_blocks=1 00:08:30.457 00:08:30.457 ' 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:30.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.457 --rc genhtml_branch_coverage=1 00:08:30.457 --rc genhtml_function_coverage=1 00:08:30.457 --rc genhtml_legend=1 00:08:30.457 --rc geninfo_all_blocks=1 00:08:30.457 --rc geninfo_unexecuted_blocks=1 00:08:30.457 00:08:30.457 ' 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:30.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.457 --rc genhtml_branch_coverage=1 00:08:30.457 --rc genhtml_function_coverage=1 00:08:30.457 --rc genhtml_legend=1 00:08:30.457 --rc geninfo_all_blocks=1 00:08:30.457 --rc geninfo_unexecuted_blocks=1 00:08:30.457 00:08:30.457 ' 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:30.457 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:08:30.457 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:30.458 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:30.458 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:30.458 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:30.458 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:30.458 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:30.458 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:30.458 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:30.458 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:30.458 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:30.458 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:30.458 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:30.458 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.458 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:30.458 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.458 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:30.458 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:30.458 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:30.458 01:19:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:32.358 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:32.358 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:32.358 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:32.359 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:32.359 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.359 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:32.359 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:32.359 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.359 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:32.359 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.359 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:32.359 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:32.359 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:32.359 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:32.359 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.359 01:19:17 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:32.359 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:32.359 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.359 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:32.359 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:08:32.359 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:32.359 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:32.359 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:32.359 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:32.359 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:32.359 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:32.359 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:32.359 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:32.359 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:32.359 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:32.359 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:32.359 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:32.359 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:32.359 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:32.359 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:32.359 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:32.359 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:32.359 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:32.359 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:32.359 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:32.359 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:32.359 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:32.617 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:32.617 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:32.617 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:32.617 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:32.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:32.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:08:32.617 00:08:32.617 --- 10.0.0.2 ping statistics --- 00:08:32.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.617 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:08:32.617 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:32.617 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:32.617 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:08:32.617 00:08:32.617 --- 10.0.0.1 ping statistics --- 00:08:32.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.617 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:08:32.617 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:32.617 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:08:32.617 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:32.617 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:32.617 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:32.617 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:32.617 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:32.617 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:32.617 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:32.617 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:32.617 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:32.617 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:32.617 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:32.617 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:32.617 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:32.617 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=1488619 00:08:32.617 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:32.617 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 1488619 00:08:32.617 01:19:17 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1488619 ']' 00:08:32.617 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.617 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:32.617 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.617 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:32.617 01:19:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:32.617 [2024-10-13 01:19:18.041872] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:08:32.617 [2024-10-13 01:19:18.041959] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.617 [2024-10-13 01:19:18.109581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:32.617 [2024-10-13 01:19:18.161646] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:32.617 [2024-10-13 01:19:18.161696] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:32.617 [2024-10-13 01:19:18.161714] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:32.617 [2024-10-13 01:19:18.161727] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:32.617 [2024-10-13 01:19:18.161738] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
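The startup notices above come from nvmf_tgt being launched inside the cvl_0_0_ns_spdk network namespace and then waited on before any rpc_cmd calls are issued. A hedged sketch of that start-and-wait sequence (paths and the RPC probe below are assumptions for illustration, not the exact nvmfappstart/waitforlisten helpers from the test framework) might be:

    # Sketch only: the real flow lives in nvmf/common.sh and autotest_common.sh.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    # poll the RPC socket until the target answers; only then is it safe to run rpc_cmd
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

Once the socket responds, the test proceeds to create the transport, subsystem, listener, and namespace via rpc_cmd, as the following trace lines show.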
00:08:32.617 [2024-10-13 01:19:18.163313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.617 [2024-10-13 01:19:18.163399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:32.617 [2024-10-13 01:19:18.163496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:32.617 [2024-10-13 01:19:18.163499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.875 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:32.875 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:32.875 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:32.875 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:32.875 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:32.875 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:32.875 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:32.875 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.875 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:32.875 [2024-10-13 01:19:18.306419] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:32.875 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.875 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:32.875 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:32.875 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:32.875 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:32.875 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:32.875 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:32.875 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.875 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:32.875 Malloc0 00:08:32.875 [2024-10-13 01:19:18.380290] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:32.875 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.875 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:32.875 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:32.876 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:32.876 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=1488672 00:08:32.876 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1488672 /var/tmp/bdevperf.sock 00:08:32.876 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1488672 ']' 00:08:32.876 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:32.876 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:32.876 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:32.876 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:32.876 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:08:32.876 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:32.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:32.876 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:08:32.876 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:32.876 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:32.876 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:32.876 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:32.876 { 00:08:32.876 "params": { 00:08:32.876 "name": "Nvme$subsystem", 00:08:32.876 "trtype": "$TEST_TRANSPORT", 00:08:32.876 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:32.876 "adrfam": "ipv4", 00:08:32.876 "trsvcid": "$NVMF_PORT", 00:08:32.876 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:32.876 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:32.876 "hdgst": ${hdgst:-false}, 00:08:32.876 "ddgst": ${ddgst:-false} 00:08:32.876 }, 00:08:32.876 "method": "bdev_nvme_attach_controller" 00:08:32.876 } 00:08:32.876 EOF 00:08:32.876 )") 00:08:32.876 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:08:32.876 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:08:32.876 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:08:32.876 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:32.876 "params": { 00:08:32.876 "name": "Nvme0", 00:08:32.876 "trtype": "tcp", 00:08:32.876 "traddr": "10.0.0.2", 00:08:32.876 "adrfam": "ipv4", 00:08:32.876 "trsvcid": "4420", 00:08:32.876 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:32.876 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:32.876 "hdgst": false, 00:08:32.876 "ddgst": false 00:08:32.876 }, 00:08:32.876 "method": "bdev_nvme_attach_controller" 00:08:32.876 }' 00:08:33.133 [2024-10-13 01:19:18.465516] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
00:08:33.133 [2024-10-13 01:19:18.465601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1488672 ] 00:08:33.133 [2024-10-13 01:19:18.525463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.133 [2024-10-13 01:19:18.572592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.391 Running I/O for 10 seconds... 00:08:33.391 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:33.391 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:33.391 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:33.391 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.391 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:33.391 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.391 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:33.391 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:33.391 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:33.391 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:33.391 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:33.391 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:33.391 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:33.391 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:33.391 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:33.391 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:33.391 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.391 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:33.391 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.391 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:08:33.391 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:08:33.391 01:19:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:33.650 01:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:08:33.650 
01:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:33.650 01:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:33.650 01:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:33.650 01:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.650 01:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:33.650 01:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.650 01:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=559 00:08:33.650 01:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 559 -ge 100 ']' 00:08:33.650 01:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:33.650 01:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:33.650 01:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:33.650 01:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:33.650 01:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.650 01:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:33.650 [2024-10-13 01:19:19.142807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18aab00 is same with the state(6) to be set 00:08:33.650 [2024-10-13 01:19:19.142883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18aab00 is same with the state(6) to be set 00:08:33.650 [2024-10-13 01:19:19.142900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18aab00 is same with the state(6) to be set 00:08:33.650 [2024-10-13 01:19:19.145177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:33.650 [2024-10-13 01:19:19.145218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.650 [2024-10-13 01:19:19.145236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:33.650 [2024-10-13 01:19:19.145251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.650 [2024-10-13 01:19:19.145265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:33.650 [2024-10-13 01:19:19.145280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.651 [2024-10-13 01:19:19.145294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:33.651 [2024-10-13 01:19:19.145307] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.651 [2024-10-13 01:19:19.145321] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea5e30 is same with the state(6) to be set 00:08:33.651 01:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.651 01:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:33.651 01:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.651 01:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:33.651 [2024-10-13 01:19:19.151420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.651 [2024-10-13 01:19:19.151447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.651 [2024-10-13 01:19:19.151508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.651 [2024-10-13 01:19:19.151525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.651 [2024-10-13 01:19:19.151541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.651 [2024-10-13 01:19:19.151555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.651 [2024-10-13 01:19:19.151570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.651 [2024-10-13 01:19:19.151584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.651 [2024-10-13 01:19:19.151599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.651 [2024-10-13 01:19:19.151613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.651 [2024-10-13 01:19:19.151628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.651 [2024-10-13 01:19:19.151641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.651 [2024-10-13 01:19:19.151657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.651 [2024-10-13 01:19:19.151670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.651 [2024-10-13 01:19:19.151685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.651 [2024-10-13 01:19:19.151699] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.651 [2024-10-13 01:19:19.151714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.651 [2024-10-13 01:19:19.151727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.651 [2024-10-13 01:19:19.151742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.651 [2024-10-13 01:19:19.151756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.651 [2024-10-13 01:19:19.151771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.651 [2024-10-13 01:19:19.151784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.651 [2024-10-13 01:19:19.151804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.651 [2024-10-13 01:19:19.151818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.651 [2024-10-13 01:19:19.151834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.651 [2024-10-13 01:19:19.151847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.651 [2024-10-13 01:19:19.151862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.651 [2024-10-13 01:19:19.151875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.651 [2024-10-13 01:19:19.151890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.651 [2024-10-13 01:19:19.151903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.651 [2024-10-13 01:19:19.151918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.651 [2024-10-13 01:19:19.151932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.651 [2024-10-13 01:19:19.151947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.651 [2024-10-13 01:19:19.151960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.651 [2024-10-13 01:19:19.151975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.651 [2024-10-13 01:19:19.151989] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.651 [2024-10-13 01:19:19.152004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.651 [2024-10-13 01:19:19.152017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.651 [2024-10-13 01:19:19.152032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.651 [2024-10-13 01:19:19.152045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.651 [2024-10-13 01:19:19.152060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.651 [2024-10-13 01:19:19.152074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.651 [2024-10-13 01:19:19.152090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.651 [2024-10-13 01:19:19.152103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.651 [2024-10-13 01:19:19.152118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.651 [2024-10-13 01:19:19.152132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.651 [2024-10-13 01:19:19.152147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.651 [2024-10-13 01:19:19.152164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.651 [2024-10-13 01:19:19.152180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.651 [2024-10-13 01:19:19.152193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.651 [2024-10-13 01:19:19.152209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.651 [2024-10-13 01:19:19.152222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.651 [2024-10-13 01:19:19.152238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.651 [2024-10-13 01:19:19.152251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.651 [2024-10-13 01:19:19.152267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.651 [2024-10-13 01:19:19.152280] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.651 [2024-10-13 01:19:19.152296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.651 [2024-10-13 01:19:19.152310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.651 [2024-10-13 01:19:19.152324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.651 [2024-10-13 01:19:19.152338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.651 [2024-10-13 01:19:19.152353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.651 [2024-10-13 01:19:19.152367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.651 [2024-10-13 01:19:19.152383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.651 [2024-10-13 01:19:19.152397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.651 [2024-10-13 01:19:19.152412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.651 [2024-10-13 01:19:19.152425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.651 [2024-10-13 01:19:19.152441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.651 [2024-10-13 01:19:19.152461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.651 [2024-10-13 01:19:19.152486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.651 [2024-10-13 01:19:19.152501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.651 [2024-10-13 01:19:19.152516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.651 [2024-10-13 01:19:19.152530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.651 [2024-10-13 01:19:19.152550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.652 [2024-10-13 01:19:19.152565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.652 [2024-10-13 01:19:19.152580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.652 [2024-10-13 01:19:19.152593] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.652 [2024-10-13 01:19:19.152608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.652 [2024-10-13 01:19:19.152621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.652 [2024-10-13 01:19:19.152636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.652 [2024-10-13 01:19:19.152649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.652 [2024-10-13 01:19:19.152664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.652 [2024-10-13 01:19:19.152678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.652 [2024-10-13 01:19:19.152693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.652 [2024-10-13 01:19:19.152708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.652 [2024-10-13 01:19:19.152723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.652 [2024-10-13 01:19:19.152737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.652 [2024-10-13 01:19:19.152764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.652 [2024-10-13 01:19:19.152778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.652 [2024-10-13 01:19:19.152794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.652 [2024-10-13 01:19:19.152808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.652 [2024-10-13 01:19:19.152824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.652 [2024-10-13 01:19:19.152837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.652 [2024-10-13 01:19:19.152853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.652 [2024-10-13 01:19:19.152866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.652 [2024-10-13 01:19:19.152882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.652 [2024-10-13 01:19:19.152895] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.652 [2024-10-13 01:19:19.152910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.652 [2024-10-13 01:19:19.152928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.652 [2024-10-13 01:19:19.152943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.652 [2024-10-13 01:19:19.152957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.652 [2024-10-13 01:19:19.152972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.652 [2024-10-13 01:19:19.152985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.652 [2024-10-13 01:19:19.153001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.652 [2024-10-13 01:19:19.153015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.652 [2024-10-13 01:19:19.153030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.652 [2024-10-13 01:19:19.153044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.652 [2024-10-13 01:19:19.153059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.652 [2024-10-13 01:19:19.153073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.652 [2024-10-13 01:19:19.153088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.652 [2024-10-13 01:19:19.153102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.652 [2024-10-13 01:19:19.153117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.652 [2024-10-13 01:19:19.153130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.652 [2024-10-13 01:19:19.153146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.652 [2024-10-13 01:19:19.153160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.652 [2024-10-13 01:19:19.153176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.652 [2024-10-13 01:19:19.153190] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.652 [2024-10-13 01:19:19.153205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.652 [2024-10-13 01:19:19.153219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.652 [2024-10-13 01:19:19.153235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.652 [2024-10-13 01:19:19.153249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.652 [2024-10-13 01:19:19.153264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.652 [2024-10-13 01:19:19.153278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.652 [2024-10-13 01:19:19.153297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.652 [2024-10-13 01:19:19.153311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.652 [2024-10-13 01:19:19.153327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.652 [2024-10-13 01:19:19.153347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.652 [2024-10-13 01:19:19.153363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:33.652 [2024-10-13 01:19:19.153377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:33.652 [2024-10-13 01:19:19.153475] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ea20c0 was disconnected and freed. reset controller. 
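The flood of ABORTED - SQ DELETION completions above is the expected effect of the fault injected while bdevperf is mid-run: the host is removed from the subsystem, the in-flight WRITEs are aborted as their submission queue is deleted, the qpair is disconnected and freed, and bdevperf moves on to reset the controller. The injection itself is just the two RPCs traced at host_management.sh@84 and @85; a standalone sketch of that step is below (NQNs taken from the trace, the sleep is an illustrative pause, not part of the script):

  # drop the initiator's access while I/O is in flight, then restore it
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  sleep 1
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0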
00:08:33.652 [2024-10-13 01:19:19.154591] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:08:33.652 01:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.652 01:19:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:33.652 task offset: 81920 on job bdev=Nvme0n1 fails 00:08:33.652 00:08:33.652 Latency(us) 00:08:33.652 [2024-10-12T23:19:19.230Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:33.652 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:33.652 Job: Nvme0n1 ended in about 0.40 seconds with error 00:08:33.652 Verification LBA range: start 0x0 length 0x400 00:08:33.652 Nvme0n1 : 0.40 1606.07 100.38 160.61 0.00 35176.56 2415.12 34175.81 00:08:33.652 [2024-10-12T23:19:19.230Z] =================================================================================================================== 00:08:33.652 [2024-10-12T23:19:19.230Z] Total : 1606.07 100.38 160.61 0.00 35176.56 2415.12 34175.81 00:08:33.652 [2024-10-13 01:19:19.156437] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:33.652 [2024-10-13 01:19:19.156493] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ea5e30 (9): Bad file descriptor 00:08:33.652 [2024-10-13 01:19:19.165079] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:34.589 01:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1488672 00:08:34.589 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1488672) - No such process 00:08:34.589 01:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:34.589 01:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:34.589 01:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:34.589 01:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:34.589 01:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:08:34.589 01:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:08:34.589 01:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:34.589 01:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:34.589 { 00:08:34.589 "params": { 00:08:34.589 "name": "Nvme$subsystem", 00:08:34.589 "trtype": "$TEST_TRANSPORT", 00:08:34.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:34.589 "adrfam": "ipv4", 00:08:34.589 "trsvcid": "$NVMF_PORT", 00:08:34.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:34.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:34.589 "hdgst": ${hdgst:-false}, 00:08:34.589 "ddgst": ${ddgst:-false} 00:08:34.589 }, 00:08:34.589 "method": "bdev_nvme_attach_controller" 00:08:34.589 } 00:08:34.589 EOF 00:08:34.589 )") 00:08:34.589 01:19:20 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:08:34.589 01:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:08:34.589 01:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:08:34.589 01:19:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:34.589 "params": { 00:08:34.589 "name": "Nvme0", 00:08:34.589 "trtype": "tcp", 00:08:34.589 "traddr": "10.0.0.2", 00:08:34.589 "adrfam": "ipv4", 00:08:34.589 "trsvcid": "4420", 00:08:34.589 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:34.589 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:34.589 "hdgst": false, 00:08:34.589 "ddgst": false 00:08:34.589 }, 00:08:34.589 "method": "bdev_nvme_attach_controller" 00:08:34.589 }' 00:08:34.847 [2024-10-13 01:19:20.208504] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:08:34.847 [2024-10-13 01:19:20.208589] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1488876 ] 00:08:34.847 [2024-10-13 01:19:20.270532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.847 [2024-10-13 01:19:20.319697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.105 Running I/O for 1 seconds... 00:08:36.037 1664.00 IOPS, 104.00 MiB/s 00:08:36.037 Latency(us) 00:08:36.037 [2024-10-12T23:19:21.615Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:36.037 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:36.038 Verification LBA range: start 0x0 length 0x400 00:08:36.038 Nvme0n1 : 1.03 1672.60 104.54 0.00 0.00 37646.71 7136.14 33593.27 00:08:36.038 [2024-10-12T23:19:21.616Z] =================================================================================================================== 00:08:36.038 [2024-10-12T23:19:21.616Z] Total : 1672.60 104.54 0.00 0.00 37646.71 7136.14 33593.27 00:08:36.295 01:19:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:36.295 01:19:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:36.295 01:19:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:36.295 01:19:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:36.295 01:19:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:36.295 01:19:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:36.295 01:19:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:36.295 01:19:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:36.295 01:19:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:36.295 01:19:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:36.295 01:19:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # 
modprobe -v -r nvme-tcp 00:08:36.295 rmmod nvme_tcp 00:08:36.295 rmmod nvme_fabrics 00:08:36.295 rmmod nvme_keyring 00:08:36.295 01:19:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:36.295 01:19:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:36.295 01:19:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:36.295 01:19:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 1488619 ']' 00:08:36.295 01:19:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 1488619 00:08:36.295 01:19:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 1488619 ']' 00:08:36.295 01:19:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 1488619 00:08:36.295 01:19:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:08:36.296 01:19:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:36.296 01:19:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1488619 00:08:36.296 01:19:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:36.296 01:19:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:36.296 01:19:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1488619' 00:08:36.296 killing process with pid 1488619 00:08:36.296 01:19:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 1488619 00:08:36.296 01:19:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 1488619 00:08:36.554 [2024-10-13 01:19:22.037749] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:36.554 01:19:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:36.554 01:19:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:36.554 01:19:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:36.554 01:19:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:36.554 01:19:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:08:36.554 01:19:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:36.554 01:19:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:08:36.554 01:19:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:36.554 01:19:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:36.554 01:19:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.554 01:19:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:36.554 01:19:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 
-- # _remove_spdk_ns 00:08:39.089 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:39.089 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:39.089 00:08:39.089 real 0m8.537s 00:08:39.089 user 0m18.456s 00:08:39.089 sys 0m2.653s 00:08:39.089 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:39.089 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.089 ************************************ 00:08:39.089 END TEST nvmf_host_management 00:08:39.089 ************************************ 00:08:39.089 01:19:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:39.089 01:19:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:39.089 01:19:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:39.089 01:19:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:39.089 ************************************ 00:08:39.089 START TEST nvmf_lvol 00:08:39.089 ************************************ 00:08:39.089 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:39.089 * Looking for test storage... 00:08:39.089 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:39.089 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:39.089 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:08:39.089 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:39.089 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:39.089 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:39.089 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:39.089 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:39.089 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:39.089 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:39.089 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:39.089 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:39.089 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:39.089 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:39.089 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:39.089 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:39.089 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:39.089 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:39.089 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:08:39.089 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:39.089 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:39.089 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:39.089 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:39.089 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:39.089 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:39.089 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:39.089 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:39.089 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:39.089 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:39.089 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:39.089 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:39.089 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:39.089 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:39.089 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:39.089 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:39.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.089 --rc genhtml_branch_coverage=1 00:08:39.089 --rc genhtml_function_coverage=1 00:08:39.089 --rc genhtml_legend=1 00:08:39.089 --rc geninfo_all_blocks=1 00:08:39.089 --rc geninfo_unexecuted_blocks=1 00:08:39.089 00:08:39.089 ' 00:08:39.089 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:39.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.089 --rc genhtml_branch_coverage=1 00:08:39.089 --rc genhtml_function_coverage=1 00:08:39.089 --rc genhtml_legend=1 00:08:39.089 --rc geninfo_all_blocks=1 00:08:39.089 --rc geninfo_unexecuted_blocks=1 00:08:39.089 00:08:39.089 ' 00:08:39.089 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:39.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.089 --rc genhtml_branch_coverage=1 00:08:39.089 --rc genhtml_function_coverage=1 00:08:39.089 --rc genhtml_legend=1 00:08:39.089 --rc geninfo_all_blocks=1 00:08:39.089 --rc geninfo_unexecuted_blocks=1 00:08:39.089 00:08:39.089 ' 00:08:39.089 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:39.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.089 --rc genhtml_branch_coverage=1 00:08:39.089 --rc genhtml_function_coverage=1 00:08:39.089 --rc genhtml_legend=1 00:08:39.089 --rc geninfo_all_blocks=1 00:08:39.089 --rc geninfo_unexecuted_blocks=1 00:08:39.089 00:08:39.089 ' 00:08:39.089 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:39.090 01:19:24 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:39.090 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:39.090 01:19:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:40.991 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:40.991 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:40.991 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:40.991 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:40.991 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:40.991 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:40.991 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:40.992 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:40.992 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:40.992 01:19:26 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:40.992 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:40.992 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:40.992 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:40.992 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms 00:08:40.992 00:08:40.992 --- 10.0.0.2 ping statistics --- 00:08:40.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.992 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:40.992 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:40.992 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:08:40.992 00:08:40.992 --- 10.0.0.1 ping statistics --- 00:08:40.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.992 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=1491046 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 1491046 00:08:40.992 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 1491046 ']' 00:08:40.993 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.993 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:40.993 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.993 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:40.993 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:41.251 [2024-10-13 01:19:26.609306] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
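For reference, the nvmf_tcp_init sequence traced above amounts to the sketch below: the target NIC (cvl_0_0) is moved into its own network namespace while the initiator NIC (cvl_0_1) stays on the host, so 10.0.0.1 <-> 10.0.0.2 traffic actually crosses the link. Interface names, addresses and the port come from this run; this is an illustration of the topology, not a verbatim excerpt of nvmf/common.sh.

    # target side lives in the cvl_0_0_ns_spdk namespace, initiator side on the host
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator (host)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target (namespace)
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # let NVMe/TCP (port 4420) back in through the host firewall, then sanity-ping both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # nvmf_tgt is then launched inside the namespace, as logged above:
    #   ip netns exec cvl_0_0_ns_spdk .../spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7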
00:08:41.251 [2024-10-13 01:19:26.609405] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:41.251 [2024-10-13 01:19:26.678753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:41.251 [2024-10-13 01:19:26.730691] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:41.251 [2024-10-13 01:19:26.730754] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:41.251 [2024-10-13 01:19:26.730770] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:41.251 [2024-10-13 01:19:26.730784] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:41.251 [2024-10-13 01:19:26.730795] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:41.251 [2024-10-13 01:19:26.732406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:41.251 [2024-10-13 01:19:26.732463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:41.251 [2024-10-13 01:19:26.732466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.509 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:41.509 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:08:41.509 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:41.509 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:41.509 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:41.509 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:41.509 01:19:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:41.779 [2024-10-13 01:19:27.140704] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:41.779 01:19:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:42.095 01:19:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:42.095 01:19:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:42.353 01:19:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:42.353 01:19:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:42.610 01:19:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:42.867 01:19:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=2c1edea5-de9c-4b0d-8ab3-ad76c9a31ee8 00:08:42.867 01:19:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2c1edea5-de9c-4b0d-8ab3-ad76c9a31ee8 lvol 20 00:08:43.125 01:19:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=c787787f-38f5-4aae-88d8-7b536d867577 00:08:43.125 01:19:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:43.383 01:19:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c787787f-38f5-4aae-88d8-7b536d867577 00:08:43.641 01:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:43.899 [2024-10-13 01:19:29.357027] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:43.899 01:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:44.156 01:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1491480 00:08:44.156 01:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:44.156 01:19:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:45.090 01:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot c787787f-38f5-4aae-88d8-7b536d867577 MY_SNAPSHOT 00:08:45.656 01:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=3b01f91b-3151-4e5f-a4c3-740fd1322dab 00:08:45.656 01:19:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize c787787f-38f5-4aae-88d8-7b536d867577 30 00:08:45.914 01:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 3b01f91b-3151-4e5f-a4c3-740fd1322dab MY_CLONE 00:08:46.172 01:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=b81424cd-7cff-4e05-96c6-d14cdb15625e 00:08:46.172 01:19:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate b81424cd-7cff-4e05-96c6-d14cdb15625e 00:08:46.738 01:19:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1491480 00:08:54.844 Initializing NVMe Controllers 00:08:54.844 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:54.844 Controller IO queue size 128, less than required. 00:08:54.844 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:54.844 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:54.844 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:54.844 Initialization complete. Launching workers. 00:08:54.844 ======================================================== 00:08:54.844 Latency(us) 00:08:54.844 Device Information : IOPS MiB/s Average min max 00:08:54.845 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9863.10 38.53 12978.76 565.77 88269.63 00:08:54.845 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10346.70 40.42 12372.10 2131.19 69178.58 00:08:54.845 ======================================================== 00:08:54.845 Total : 20209.80 78.94 12668.17 565.77 88269.63 00:08:54.845 00:08:54.845 01:19:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:54.845 01:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c787787f-38f5-4aae-88d8-7b536d867577 00:08:55.102 01:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2c1edea5-de9c-4b0d-8ab3-ad76c9a31ee8 00:08:55.361 01:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:55.361 01:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:55.361 01:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:55.361 01:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:55.361 01:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:55.361 01:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:55.361 01:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:55.361 01:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:55.361 01:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:55.361 rmmod nvme_tcp 00:08:55.361 rmmod nvme_fabrics 00:08:55.361 rmmod nvme_keyring 00:08:55.361 01:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:55.361 01:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:55.361 01:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:55.361 01:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 1491046 ']' 00:08:55.361 01:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 1491046 00:08:55.361 01:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 1491046 ']' 00:08:55.361 01:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 1491046 00:08:55.361 01:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:08:55.361 01:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:55.361 01:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1491046 00:08:55.361 01:19:40 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:55.361 01:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:55.361 01:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1491046' 00:08:55.361 killing process with pid 1491046 00:08:55.361 01:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 1491046 00:08:55.361 01:19:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 1491046 00:08:55.620 01:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:55.620 01:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:55.620 01:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:55.620 01:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:55.620 01:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:08:55.620 01:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:55.620 01:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:08:55.620 01:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:55.620 01:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:55.620 01:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:55.620 01:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:55.620 01:19:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:58.157 00:08:58.157 real 0m19.063s 00:08:58.157 user 1m4.456s 00:08:58.157 sys 0m5.712s 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:58.157 ************************************ 00:08:58.157 END TEST nvmf_lvol 00:08:58.157 ************************************ 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:58.157 ************************************ 00:08:58.157 START TEST nvmf_lvs_grow 00:08:58.157 ************************************ 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:58.157 * Looking for test storage... 
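The nvmf_lvol run that finished just above exercises the full lvol lifecycle over NVMe/TCP. Condensed into a sketch (the shell variables capturing returned UUIDs are illustrative shorthand; the commands and arguments mirror the logged rpc.py calls):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # two malloc bdevs striped into a raid0, with an lvstore and a 20 MiB lvol on top
    $RPC bdev_malloc_create 64 512                    # -> Malloc0
    $RPC bdev_malloc_create 64 512                    # -> Malloc1
    $RPC bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($RPC bdev_lvol_create_lvstore raid0 lvs)
    lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 20)
    # expose the lvol over NVMe/TCP
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # while spdk_nvme_perf writes to the namespace, snapshot, resize to 30 MiB, clone, inflate
    snap=$($RPC bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    $RPC bdev_lvol_resize "$lvol" 30
    clone=$($RPC bdev_lvol_clone "$snap" MY_CLONE)
    $RPC bdev_lvol_inflate "$clone"
    # teardown once perf completes: subsystem, lvol, lvstore
    $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    $RPC bdev_lvol_delete "$lvol"
    $RPC bdev_lvol_delete_lvstore -u "$lvs"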
00:08:58.157 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:58.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.157 --rc genhtml_branch_coverage=1 00:08:58.157 --rc genhtml_function_coverage=1 00:08:58.157 --rc genhtml_legend=1 00:08:58.157 --rc geninfo_all_blocks=1 00:08:58.157 --rc geninfo_unexecuted_blocks=1 00:08:58.157 00:08:58.157 ' 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:58.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.157 --rc genhtml_branch_coverage=1 00:08:58.157 --rc genhtml_function_coverage=1 00:08:58.157 --rc genhtml_legend=1 00:08:58.157 --rc geninfo_all_blocks=1 00:08:58.157 --rc geninfo_unexecuted_blocks=1 00:08:58.157 00:08:58.157 ' 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:58.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.157 --rc genhtml_branch_coverage=1 00:08:58.157 --rc genhtml_function_coverage=1 00:08:58.157 --rc genhtml_legend=1 00:08:58.157 --rc geninfo_all_blocks=1 00:08:58.157 --rc geninfo_unexecuted_blocks=1 00:08:58.157 00:08:58.157 ' 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:58.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.157 --rc genhtml_branch_coverage=1 00:08:58.157 --rc genhtml_function_coverage=1 00:08:58.157 --rc genhtml_legend=1 00:08:58.157 --rc geninfo_all_blocks=1 00:08:58.157 --rc geninfo_unexecuted_blocks=1 00:08:58.157 00:08:58.157 ' 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:58.157 01:19:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.157 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:58.158 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.158 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:58.158 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:58.158 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:58.158 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:58.158 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:58.158 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:58.158 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:58.158 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:58.158 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:58.158 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:58.158 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:58.158 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:58.158 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:58.158 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:58.158 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:58.158 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:58.158 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:58.158 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:58.158 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:58.158 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.158 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:58.158 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.158 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:58.158 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:58.158 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:58.158 01:19:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:00.061 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:00.061 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:00.061 01:19:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:00.061 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:00.061 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:00.061 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:00.320 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:00.320 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:00.320 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:00.320 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:00.320 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:00.320 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:00.320 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:00.320 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:09:00.320 00:09:00.320 --- 10.0.0.2 ping statistics --- 00:09:00.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.320 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:09:00.320 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:00.320 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:00.320 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:09:00.320 00:09:00.320 --- 10.0.0.1 ping statistics --- 00:09:00.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.320 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:09:00.320 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:00.320 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:09:00.320 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:00.320 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:00.320 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:00.320 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:00.320 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:00.320 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:00.320 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:00.320 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:09:00.320 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:00.320 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:00.320 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:00.320 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=1494768 00:09:00.320 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:00.320 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 1494768 00:09:00.320 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 1494768 ']' 00:09:00.320 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.320 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:00.320 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.320 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:00.320 01:19:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:00.320 [2024-10-13 01:19:45.772676] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
00:09:00.320 [2024-10-13 01:19:45.772760] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:00.321 [2024-10-13 01:19:45.838091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.321 [2024-10-13 01:19:45.885340] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:00.321 [2024-10-13 01:19:45.885396] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:00.321 [2024-10-13 01:19:45.885425] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:00.321 [2024-10-13 01:19:45.885445] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:00.321 [2024-10-13 01:19:45.885455] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:00.321 [2024-10-13 01:19:45.886116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.579 01:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:00.579 01:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:09:00.579 01:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:00.579 01:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:00.579 01:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:00.579 01:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:00.579 01:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:00.837 [2024-10-13 01:19:46.287554] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:00.837 01:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:00.837 01:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:00.837 01:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:00.837 01:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:00.837 ************************************ 00:09:00.837 START TEST lvs_grow_clean 00:09:00.837 ************************************ 00:09:00.837 01:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:09:00.837 01:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:00.837 01:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:00.837 01:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:00.837 01:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:00.837 01:19:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:00.837 01:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:00.837 01:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:00.837 01:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:00.837 01:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:01.095 01:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:01.095 01:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:01.353 01:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=e698f5b0-0c69-4be3-ab8f-875857468a93 00:09:01.353 01:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e698f5b0-0c69-4be3-ab8f-875857468a93 00:09:01.353 01:19:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:01.610 01:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:01.610 01:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:01.611 01:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e698f5b0-0c69-4be3-ab8f-875857468a93 lvol 150 00:09:02.176 01:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=13c13510-1e8d-4ef9-ba7d-3658a3bc1cb2 00:09:02.176 01:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:02.176 01:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:02.176 [2024-10-13 01:19:47.714819] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:02.176 [2024-10-13 01:19:47.714907] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:02.176 true 00:09:02.176 01:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
e698f5b0-0c69-4be3-ab8f-875857468a93 00:09:02.176 01:19:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:02.435 01:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:02.435 01:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:03.001 01:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 13c13510-1e8d-4ef9-ba7d-3658a3bc1cb2 00:09:03.001 01:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:03.259 [2024-10-13 01:19:48.794168] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:03.259 01:19:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:03.517 01:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1495208 00:09:03.517 01:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:03.517 01:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:03.517 01:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1495208 /var/tmp/bdevperf.sock 00:09:03.517 01:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 1495208 ']' 00:09:03.517 01:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:03.517 01:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:03.517 01:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:03.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:03.517 01:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:03.517 01:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:03.775 [2024-10-13 01:19:49.121763] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
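The lvs_grow_clean body running here follows a fixed recipe: a 200M file-backed AIO bdev, a logical volume store with 4M clusters (49 data clusters), a 150M lvol, then the backing file is grown to 400M and rescanned (51200 to 102400 blocks) so the lvstore can be grown later, and the lvol is exported over NVMe/TCP. Condensed from the RPC calls above into a sketch; the shell variables are illustrative, the commands and names are the ones in the log:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
AIO=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev

rm -f "$AIO" && truncate -s 200M "$AIO"            # 200M backing file
$RPC bdev_aio_create "$AIO" aio_bdev 4096          # AIO bdev with 4K blocks
lvs=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)    # -> 49 data clusters
lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 150)   # 150 MiB lvol (38 clusters)

truncate -s 400M "$AIO"                            # grow the file on disk...
$RPC bdev_aio_rescan aio_bdev                      # ...and rescan: 51200 -> 102400 blocks

# Export the lvol so bdevperf can reach it over TCP.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420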
00:09:03.775 [2024-10-13 01:19:49.121862] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1495208 ] 00:09:03.775 [2024-10-13 01:19:49.183095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.775 [2024-10-13 01:19:49.231720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:03.775 01:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:03.775 01:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:09:03.775 01:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:04.341 Nvme0n1 00:09:04.341 01:19:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:04.600 [ 00:09:04.600 { 00:09:04.600 "name": "Nvme0n1", 00:09:04.600 "aliases": [ 00:09:04.600 "13c13510-1e8d-4ef9-ba7d-3658a3bc1cb2" 00:09:04.600 ], 00:09:04.600 "product_name": "NVMe disk", 00:09:04.600 "block_size": 4096, 00:09:04.600 "num_blocks": 38912, 00:09:04.600 "uuid": "13c13510-1e8d-4ef9-ba7d-3658a3bc1cb2", 00:09:04.600 "numa_id": 0, 00:09:04.600 "assigned_rate_limits": { 00:09:04.600 "rw_ios_per_sec": 0, 00:09:04.600 "rw_mbytes_per_sec": 0, 00:09:04.600 "r_mbytes_per_sec": 0, 00:09:04.600 "w_mbytes_per_sec": 0 00:09:04.600 }, 00:09:04.600 "claimed": false, 00:09:04.600 "zoned": false, 00:09:04.600 "supported_io_types": { 00:09:04.600 "read": true, 00:09:04.600 "write": true, 00:09:04.600 "unmap": true, 00:09:04.600 "flush": true, 00:09:04.600 "reset": true, 00:09:04.600 "nvme_admin": true, 00:09:04.600 "nvme_io": true, 00:09:04.600 "nvme_io_md": false, 00:09:04.600 "write_zeroes": true, 00:09:04.600 "zcopy": false, 00:09:04.600 "get_zone_info": false, 00:09:04.600 "zone_management": false, 00:09:04.600 "zone_append": false, 00:09:04.600 "compare": true, 00:09:04.600 "compare_and_write": true, 00:09:04.600 "abort": true, 00:09:04.600 "seek_hole": false, 00:09:04.600 "seek_data": false, 00:09:04.600 "copy": true, 00:09:04.600 "nvme_iov_md": false 00:09:04.600 }, 00:09:04.600 "memory_domains": [ 00:09:04.600 { 00:09:04.600 "dma_device_id": "system", 00:09:04.600 "dma_device_type": 1 00:09:04.600 } 00:09:04.600 ], 00:09:04.600 "driver_specific": { 00:09:04.600 "nvme": [ 00:09:04.600 { 00:09:04.600 "trid": { 00:09:04.600 "trtype": "TCP", 00:09:04.600 "adrfam": "IPv4", 00:09:04.600 "traddr": "10.0.0.2", 00:09:04.600 "trsvcid": "4420", 00:09:04.600 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:04.600 }, 00:09:04.600 "ctrlr_data": { 00:09:04.600 "cntlid": 1, 00:09:04.600 "vendor_id": "0x8086", 00:09:04.600 "model_number": "SPDK bdev Controller", 00:09:04.600 "serial_number": "SPDK0", 00:09:04.600 "firmware_revision": "25.01", 00:09:04.600 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:04.600 "oacs": { 00:09:04.600 "security": 0, 00:09:04.600 "format": 0, 00:09:04.600 "firmware": 0, 00:09:04.600 "ns_manage": 0 00:09:04.600 }, 00:09:04.600 "multi_ctrlr": true, 00:09:04.600 
"ana_reporting": false 00:09:04.600 }, 00:09:04.600 "vs": { 00:09:04.600 "nvme_version": "1.3" 00:09:04.600 }, 00:09:04.600 "ns_data": { 00:09:04.600 "id": 1, 00:09:04.600 "can_share": true 00:09:04.600 } 00:09:04.600 } 00:09:04.600 ], 00:09:04.600 "mp_policy": "active_passive" 00:09:04.600 } 00:09:04.600 } 00:09:04.600 ] 00:09:04.600 01:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1495345 00:09:04.600 01:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:04.600 01:19:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:04.858 Running I/O for 10 seconds... 00:09:05.792 Latency(us) 00:09:05.792 [2024-10-12T23:19:51.370Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:05.792 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:05.792 Nvme0n1 : 1.00 13844.00 54.08 0.00 0.00 0.00 0.00 0.00 00:09:05.792 [2024-10-12T23:19:51.370Z] =================================================================================================================== 00:09:05.792 [2024-10-12T23:19:51.370Z] Total : 13844.00 54.08 0.00 0.00 0.00 0.00 0.00 00:09:05.792 00:09:06.725 01:19:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e698f5b0-0c69-4be3-ab8f-875857468a93 00:09:06.725 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:06.725 Nvme0n1 : 2.00 14034.00 54.82 0.00 0.00 0.00 0.00 0.00 00:09:06.725 [2024-10-12T23:19:52.303Z] =================================================================================================================== 00:09:06.725 [2024-10-12T23:19:52.303Z] Total : 14034.00 54.82 0.00 0.00 0.00 0.00 0.00 00:09:06.725 00:09:06.984 true 00:09:06.984 01:19:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e698f5b0-0c69-4be3-ab8f-875857468a93 00:09:06.984 01:19:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:07.241 01:19:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:07.241 01:19:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:07.241 01:19:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1495345 00:09:07.807 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:07.807 Nvme0n1 : 3.00 14183.00 55.40 0.00 0.00 0.00 0.00 0.00 00:09:07.807 [2024-10-12T23:19:53.385Z] =================================================================================================================== 00:09:07.807 [2024-10-12T23:19:53.385Z] Total : 14183.00 55.40 0.00 0.00 0.00 0.00 0.00 00:09:07.807 00:09:08.749 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:08.749 Nvme0n1 : 4.00 14225.00 55.57 0.00 0.00 0.00 0.00 0.00 00:09:08.749 [2024-10-12T23:19:54.327Z] 
=================================================================================================================== 00:09:08.749 [2024-10-12T23:19:54.327Z] Total : 14225.00 55.57 0.00 0.00 0.00 0.00 0.00 00:09:08.749 00:09:09.685 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:09.685 Nvme0n1 : 5.00 14301.00 55.86 0.00 0.00 0.00 0.00 0.00 00:09:09.685 [2024-10-12T23:19:55.263Z] =================================================================================================================== 00:09:09.685 [2024-10-12T23:19:55.263Z] Total : 14301.00 55.86 0.00 0.00 0.00 0.00 0.00 00:09:09.685 00:09:11.059 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:11.059 Nvme0n1 : 6.00 14330.50 55.98 0.00 0.00 0.00 0.00 0.00 00:09:11.059 [2024-10-12T23:19:56.637Z] =================================================================================================================== 00:09:11.059 [2024-10-12T23:19:56.637Z] Total : 14330.50 55.98 0.00 0.00 0.00 0.00 0.00 00:09:11.060 00:09:11.993 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:11.994 Nvme0n1 : 7.00 14387.86 56.20 0.00 0.00 0.00 0.00 0.00 00:09:11.994 [2024-10-12T23:19:57.572Z] =================================================================================================================== 00:09:11.994 [2024-10-12T23:19:57.572Z] Total : 14387.86 56.20 0.00 0.00 0.00 0.00 0.00 00:09:11.994 00:09:12.937 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:12.937 Nvme0n1 : 8.00 14415.00 56.31 0.00 0.00 0.00 0.00 0.00 00:09:12.937 [2024-10-12T23:19:58.515Z] =================================================================================================================== 00:09:12.937 [2024-10-12T23:19:58.515Z] Total : 14415.00 56.31 0.00 0.00 0.00 0.00 0.00 00:09:12.937 00:09:13.950 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:13.950 Nvme0n1 : 9.00 14450.22 56.45 0.00 0.00 0.00 0.00 0.00 00:09:13.950 [2024-10-12T23:19:59.528Z] =================================================================================================================== 00:09:13.950 [2024-10-12T23:19:59.528Z] Total : 14450.22 56.45 0.00 0.00 0.00 0.00 0.00 00:09:13.950 00:09:14.884 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:14.884 Nvme0n1 : 10.00 14473.90 56.54 0.00 0.00 0.00 0.00 0.00 00:09:14.884 [2024-10-12T23:20:00.462Z] =================================================================================================================== 00:09:14.884 [2024-10-12T23:20:00.462Z] Total : 14473.90 56.54 0.00 0.00 0.00 0.00 0.00 00:09:14.884 00:09:14.884 00:09:14.884 Latency(us) 00:09:14.884 [2024-10-12T23:20:00.462Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:14.884 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:14.884 Nvme0n1 : 10.00 14478.97 56.56 0.00 0.00 8835.40 2415.12 17379.18 00:09:14.884 [2024-10-12T23:20:00.462Z] =================================================================================================================== 00:09:14.884 [2024-10-12T23:20:00.462Z] Total : 14478.97 56.56 0.00 0.00 8835.40 2415.12 17379.18 00:09:14.884 { 00:09:14.884 "results": [ 00:09:14.884 { 00:09:14.884 "job": "Nvme0n1", 00:09:14.884 "core_mask": "0x2", 00:09:14.884 "workload": "randwrite", 00:09:14.884 "status": "finished", 00:09:14.884 "queue_depth": 128, 00:09:14.884 "io_size": 4096, 00:09:14.884 
"runtime": 10.004165, 00:09:14.884 "iops": 14478.969509199418, 00:09:14.884 "mibps": 56.55847464531023, 00:09:14.884 "io_failed": 0, 00:09:14.884 "io_timeout": 0, 00:09:14.884 "avg_latency_us": 8835.39504457996, 00:09:14.884 "min_latency_us": 2415.122962962963, 00:09:14.884 "max_latency_us": 17379.176296296297 00:09:14.884 } 00:09:14.884 ], 00:09:14.884 "core_count": 1 00:09:14.884 } 00:09:14.884 01:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1495208 00:09:14.884 01:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 1495208 ']' 00:09:14.884 01:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 1495208 00:09:14.884 01:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:09:14.884 01:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:14.884 01:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1495208 00:09:14.884 01:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:14.884 01:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:14.884 01:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1495208' 00:09:14.884 killing process with pid 1495208 00:09:14.884 01:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 1495208 00:09:14.884 Received shutdown signal, test time was about 10.000000 seconds 00:09:14.884 00:09:14.884 Latency(us) 00:09:14.884 [2024-10-12T23:20:00.462Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:14.884 [2024-10-12T23:20:00.462Z] =================================================================================================================== 00:09:14.884 [2024-10-12T23:20:00.462Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:14.884 01:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 1495208 00:09:15.142 01:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:15.400 01:20:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:15.658 01:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e698f5b0-0c69-4be3-ab8f-875857468a93 00:09:15.658 01:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:15.916 01:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:15.916 01:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:15.916 01:20:01 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:16.173 [2024-10-13 01:20:01.589354] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:16.173 01:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e698f5b0-0c69-4be3-ab8f-875857468a93 00:09:16.173 01:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:09:16.173 01:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e698f5b0-0c69-4be3-ab8f-875857468a93 00:09:16.173 01:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:16.174 01:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:16.174 01:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:16.174 01:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:16.174 01:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:16.174 01:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:16.174 01:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:16.174 01:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:16.174 01:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e698f5b0-0c69-4be3-ab8f-875857468a93 00:09:16.432 request: 00:09:16.432 { 00:09:16.432 "uuid": "e698f5b0-0c69-4be3-ab8f-875857468a93", 00:09:16.432 "method": "bdev_lvol_get_lvstores", 00:09:16.432 "req_id": 1 00:09:16.432 } 00:09:16.432 Got JSON-RPC error response 00:09:16.432 response: 00:09:16.432 { 00:09:16.432 "code": -19, 00:09:16.432 "message": "No such device" 00:09:16.432 } 00:09:16.432 01:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:09:16.432 01:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:16.432 01:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:16.432 01:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:16.432 01:20:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:16.689 aio_bdev 00:09:16.689 01:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 13c13510-1e8d-4ef9-ba7d-3658a3bc1cb2 00:09:16.689 01:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=13c13510-1e8d-4ef9-ba7d-3658a3bc1cb2 00:09:16.689 01:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:16.689 01:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:09:16.690 01:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:16.690 01:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:16.690 01:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:16.947 01:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 13c13510-1e8d-4ef9-ba7d-3658a3bc1cb2 -t 2000 00:09:17.205 [ 00:09:17.205 { 00:09:17.205 "name": "13c13510-1e8d-4ef9-ba7d-3658a3bc1cb2", 00:09:17.205 "aliases": [ 00:09:17.205 "lvs/lvol" 00:09:17.205 ], 00:09:17.205 "product_name": "Logical Volume", 00:09:17.205 "block_size": 4096, 00:09:17.205 "num_blocks": 38912, 00:09:17.205 "uuid": "13c13510-1e8d-4ef9-ba7d-3658a3bc1cb2", 00:09:17.205 "assigned_rate_limits": { 00:09:17.205 "rw_ios_per_sec": 0, 00:09:17.205 "rw_mbytes_per_sec": 0, 00:09:17.205 "r_mbytes_per_sec": 0, 00:09:17.205 "w_mbytes_per_sec": 0 00:09:17.205 }, 00:09:17.205 "claimed": false, 00:09:17.205 "zoned": false, 00:09:17.205 "supported_io_types": { 00:09:17.205 "read": true, 00:09:17.205 "write": true, 00:09:17.205 "unmap": true, 00:09:17.205 "flush": false, 00:09:17.205 "reset": true, 00:09:17.205 "nvme_admin": false, 00:09:17.205 "nvme_io": false, 00:09:17.205 "nvme_io_md": false, 00:09:17.205 "write_zeroes": true, 00:09:17.205 "zcopy": false, 00:09:17.205 "get_zone_info": false, 00:09:17.205 "zone_management": false, 00:09:17.205 "zone_append": false, 00:09:17.205 "compare": false, 00:09:17.205 "compare_and_write": false, 00:09:17.205 "abort": false, 00:09:17.205 "seek_hole": true, 00:09:17.205 "seek_data": true, 00:09:17.205 "copy": false, 00:09:17.205 "nvme_iov_md": false 00:09:17.205 }, 00:09:17.205 "driver_specific": { 00:09:17.205 "lvol": { 00:09:17.205 "lvol_store_uuid": "e698f5b0-0c69-4be3-ab8f-875857468a93", 00:09:17.205 "base_bdev": "aio_bdev", 00:09:17.205 "thin_provision": false, 00:09:17.205 "num_allocated_clusters": 38, 00:09:17.205 "snapshot": false, 00:09:17.205 "clone": false, 00:09:17.205 "esnap_clone": false 00:09:17.205 } 00:09:17.205 } 00:09:17.205 } 00:09:17.205 ] 00:09:17.205 01:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:09:17.205 01:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e698f5b0-0c69-4be3-ab8f-875857468a93 00:09:17.206 
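What nvmf_lvs_grow.sh steps @84 through @89 verify here: deleting the AIO bdev hot-removes the lvstore, so bdev_lvol_get_lvstores has to fail with -19 (No such device), and re-creating the same AIO bdev must bring the lvol back with its metadata intact, 38 allocated clusters and 61 free of the 99 total after the grow. In outline (a sketch using the UUIDs from this run; the error-check wording is illustrative):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
AIO=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev

$RPC bdev_aio_delete aio_bdev                      # closes lvstore 'lvs' along with the bdev
$RPC bdev_lvol_get_lvstores -u e698f5b0-0c69-4be3-ab8f-875857468a93 \
    && echo "unexpected: lvstore should be gone"   # expected: -19, No such device

$RPC bdev_aio_create "$AIO" aio_bdev 4096          # re-register the same backing file
$RPC bdev_wait_for_examine                         # let lvol tasting finish
$RPC bdev_get_bdevs -b 13c13510-1e8d-4ef9-ba7d-3658a3bc1cb2 -t 2000   # lvol is back
$RPC bdev_lvol_get_lvstores -u e698f5b0-0c69-4be3-ab8f-875857468a93 \
    | jq -r '.[0].free_clusters'                   # 61 free of 99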
01:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:17.464 01:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:17.464 01:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:17.464 01:20:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e698f5b0-0c69-4be3-ab8f-875857468a93 00:09:17.722 01:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:17.722 01:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 13c13510-1e8d-4ef9-ba7d-3658a3bc1cb2 00:09:17.980 01:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e698f5b0-0c69-4be3-ab8f-875857468a93 00:09:18.547 01:20:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:18.547 01:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:18.805 00:09:18.805 real 0m17.797s 00:09:18.805 user 0m17.258s 00:09:18.805 sys 0m1.891s 00:09:18.806 01:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:18.806 01:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:18.806 ************************************ 00:09:18.806 END TEST lvs_grow_clean 00:09:18.806 ************************************ 00:09:18.806 01:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:18.806 01:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:18.806 01:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:18.806 01:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:18.806 ************************************ 00:09:18.806 START TEST lvs_grow_dirty 00:09:18.806 ************************************ 00:09:18.806 01:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:09:18.806 01:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:18.806 01:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:18.806 01:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:18.806 01:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:18.806 01:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:18.806 01:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:18.806 01:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:18.806 01:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:18.806 01:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:19.063 01:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:19.063 01:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:19.321 01:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=cc7b1c80-7cb8-4318-af39-01999636c5f4 00:09:19.321 01:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cc7b1c80-7cb8-4318-af39-01999636c5f4 00:09:19.321 01:20:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:19.579 01:20:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:19.579 01:20:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:19.579 01:20:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u cc7b1c80-7cb8-4318-af39-01999636c5f4 lvol 150 00:09:19.837 01:20:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=6a5037d4-53d5-450d-b7e1-a69f95e2b813 00:09:19.837 01:20:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:19.837 01:20:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:20.095 [2024-10-13 01:20:05.641088] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:20.095 [2024-10-13 01:20:05.641186] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:20.095 true 00:09:20.095 01:20:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cc7b1c80-7cb8-4318-af39-01999636c5f4 00:09:20.095 01:20:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:20.353 01:20:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:20.353 01:20:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:20.919 01:20:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6a5037d4-53d5-450d-b7e1-a69f95e2b813 00:09:21.177 01:20:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:21.435 [2024-10-13 01:20:06.812698] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:21.435 01:20:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:21.693 01:20:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1497398 00:09:21.693 01:20:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:21.693 01:20:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:21.693 01:20:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1497398 /var/tmp/bdevperf.sock 00:09:21.693 01:20:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1497398 ']' 00:09:21.693 01:20:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:21.693 01:20:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:21.693 01:20:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:21.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:21.693 01:20:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:21.693 01:20:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:21.693 [2024-10-13 01:20:07.140663] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
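As in the clean pass, the workload side is a separate bdevperf process that connects back to the target over TCP; two seconds into the 10-second randwrite run the lvstore is grown underneath it, which is the point of the test. The moving parts, condensed into a sketch (arguments as recorded in the log; variable names are illustrative):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC=$SPDK/scripts/rpc.py

# bdevperf on core 1 in -z (wait-for-RPC) mode: 4K random writes, QD 128, 10 seconds.
$SPDK/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 \
    -q 128 -w randwrite -t 10 -S 1 -z &

# Attach the exported namespace inside bdevperf as Nvme0n1.
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

# Start the I/O, then grow the lvstore on the target while it runs.
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
sleep 2
$RPC bdev_lvol_grow_lvstore -u cc7b1c80-7cb8-4318-af39-01999636c5f4   # 49 -> 99 clusters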
00:09:21.693 [2024-10-13 01:20:07.140753] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1497398 ] 00:09:21.693 [2024-10-13 01:20:07.204272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.693 [2024-10-13 01:20:07.253788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:21.951 01:20:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:21.951 01:20:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:21.951 01:20:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:22.519 Nvme0n1 00:09:22.519 01:20:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:22.519 [ 00:09:22.519 { 00:09:22.519 "name": "Nvme0n1", 00:09:22.519 "aliases": [ 00:09:22.519 "6a5037d4-53d5-450d-b7e1-a69f95e2b813" 00:09:22.519 ], 00:09:22.519 "product_name": "NVMe disk", 00:09:22.519 "block_size": 4096, 00:09:22.519 "num_blocks": 38912, 00:09:22.519 "uuid": "6a5037d4-53d5-450d-b7e1-a69f95e2b813", 00:09:22.519 "numa_id": 0, 00:09:22.519 "assigned_rate_limits": { 00:09:22.519 "rw_ios_per_sec": 0, 00:09:22.519 "rw_mbytes_per_sec": 0, 00:09:22.519 "r_mbytes_per_sec": 0, 00:09:22.519 "w_mbytes_per_sec": 0 00:09:22.519 }, 00:09:22.519 "claimed": false, 00:09:22.519 "zoned": false, 00:09:22.519 "supported_io_types": { 00:09:22.519 "read": true, 00:09:22.519 "write": true, 00:09:22.519 "unmap": true, 00:09:22.519 "flush": true, 00:09:22.519 "reset": true, 00:09:22.519 "nvme_admin": true, 00:09:22.519 "nvme_io": true, 00:09:22.519 "nvme_io_md": false, 00:09:22.519 "write_zeroes": true, 00:09:22.519 "zcopy": false, 00:09:22.519 "get_zone_info": false, 00:09:22.519 "zone_management": false, 00:09:22.519 "zone_append": false, 00:09:22.519 "compare": true, 00:09:22.519 "compare_and_write": true, 00:09:22.519 "abort": true, 00:09:22.519 "seek_hole": false, 00:09:22.519 "seek_data": false, 00:09:22.519 "copy": true, 00:09:22.519 "nvme_iov_md": false 00:09:22.519 }, 00:09:22.519 "memory_domains": [ 00:09:22.519 { 00:09:22.519 "dma_device_id": "system", 00:09:22.519 "dma_device_type": 1 00:09:22.519 } 00:09:22.519 ], 00:09:22.519 "driver_specific": { 00:09:22.519 "nvme": [ 00:09:22.519 { 00:09:22.519 "trid": { 00:09:22.519 "trtype": "TCP", 00:09:22.519 "adrfam": "IPv4", 00:09:22.519 "traddr": "10.0.0.2", 00:09:22.519 "trsvcid": "4420", 00:09:22.519 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:22.519 }, 00:09:22.519 "ctrlr_data": { 00:09:22.519 "cntlid": 1, 00:09:22.519 "vendor_id": "0x8086", 00:09:22.519 "model_number": "SPDK bdev Controller", 00:09:22.519 "serial_number": "SPDK0", 00:09:22.519 "firmware_revision": "25.01", 00:09:22.519 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:22.519 "oacs": { 00:09:22.519 "security": 0, 00:09:22.519 "format": 0, 00:09:22.519 "firmware": 0, 00:09:22.519 "ns_manage": 0 00:09:22.519 }, 00:09:22.519 "multi_ctrlr": true, 00:09:22.519 
"ana_reporting": false 00:09:22.519 }, 00:09:22.519 "vs": { 00:09:22.519 "nvme_version": "1.3" 00:09:22.519 }, 00:09:22.519 "ns_data": { 00:09:22.519 "id": 1, 00:09:22.519 "can_share": true 00:09:22.519 } 00:09:22.519 } 00:09:22.519 ], 00:09:22.519 "mp_policy": "active_passive" 00:09:22.519 } 00:09:22.519 } 00:09:22.519 ] 00:09:22.519 01:20:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1497536 00:09:22.519 01:20:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:22.519 01:20:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:22.778 Running I/O for 10 seconds... 00:09:23.711 Latency(us) 00:09:23.711 [2024-10-12T23:20:09.289Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:23.711 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.711 Nvme0n1 : 1.00 13844.00 54.08 0.00 0.00 0.00 0.00 0.00 00:09:23.711 [2024-10-12T23:20:09.289Z] =================================================================================================================== 00:09:23.711 [2024-10-12T23:20:09.289Z] Total : 13844.00 54.08 0.00 0.00 0.00 0.00 0.00 00:09:23.711 00:09:24.644 01:20:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u cc7b1c80-7cb8-4318-af39-01999636c5f4 00:09:24.644 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:24.644 Nvme0n1 : 2.00 13939.50 54.45 0.00 0.00 0.00 0.00 0.00 00:09:24.644 [2024-10-12T23:20:10.222Z] =================================================================================================================== 00:09:24.644 [2024-10-12T23:20:10.222Z] Total : 13939.50 54.45 0.00 0.00 0.00 0.00 0.00 00:09:24.644 00:09:24.901 true 00:09:24.901 01:20:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cc7b1c80-7cb8-4318-af39-01999636c5f4 00:09:24.901 01:20:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:25.159 01:20:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:25.159 01:20:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:25.159 01:20:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1497536 00:09:25.725 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.725 Nvme0n1 : 3.00 14076.67 54.99 0.00 0.00 0.00 0.00 0.00 00:09:25.725 [2024-10-12T23:20:11.303Z] =================================================================================================================== 00:09:25.725 [2024-10-12T23:20:11.303Z] Total : 14076.67 54.99 0.00 0.00 0.00 0.00 0.00 00:09:25.725 00:09:26.658 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:26.658 Nvme0n1 : 4.00 14208.75 55.50 0.00 0.00 0.00 0.00 0.00 00:09:26.658 [2024-10-12T23:20:12.236Z] 
=================================================================================================================== 00:09:26.658 [2024-10-12T23:20:12.236Z] Total : 14208.75 55.50 0.00 0.00 0.00 0.00 0.00 00:09:26.658 00:09:28.031 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:28.031 Nvme0n1 : 5.00 14288.00 55.81 0.00 0.00 0.00 0.00 0.00 00:09:28.031 [2024-10-12T23:20:13.609Z] =================================================================================================================== 00:09:28.031 [2024-10-12T23:20:13.609Z] Total : 14288.00 55.81 0.00 0.00 0.00 0.00 0.00 00:09:28.031 00:09:28.966 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:28.966 Nvme0n1 : 6.00 14362.00 56.10 0.00 0.00 0.00 0.00 0.00 00:09:28.966 [2024-10-12T23:20:14.544Z] =================================================================================================================== 00:09:28.966 [2024-10-12T23:20:14.544Z] Total : 14362.00 56.10 0.00 0.00 0.00 0.00 0.00 00:09:28.966 00:09:29.900 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:29.900 Nvme0n1 : 7.00 14378.57 56.17 0.00 0.00 0.00 0.00 0.00 00:09:29.900 [2024-10-12T23:20:15.478Z] =================================================================================================================== 00:09:29.900 [2024-10-12T23:20:15.478Z] Total : 14378.57 56.17 0.00 0.00 0.00 0.00 0.00 00:09:29.900 00:09:30.833 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:30.833 Nvme0n1 : 8.00 14406.88 56.28 0.00 0.00 0.00 0.00 0.00 00:09:30.833 [2024-10-12T23:20:16.411Z] =================================================================================================================== 00:09:30.833 [2024-10-12T23:20:16.411Z] Total : 14406.88 56.28 0.00 0.00 0.00 0.00 0.00 00:09:30.833 00:09:31.767 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.767 Nvme0n1 : 9.00 14443.00 56.42 0.00 0.00 0.00 0.00 0.00 00:09:31.767 [2024-10-12T23:20:17.345Z] =================================================================================================================== 00:09:31.767 [2024-10-12T23:20:17.345Z] Total : 14443.00 56.42 0.00 0.00 0.00 0.00 0.00 00:09:31.767 00:09:32.700 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:32.700 Nvme0n1 : 10.00 14472.20 56.53 0.00 0.00 0.00 0.00 0.00 00:09:32.700 [2024-10-12T23:20:18.278Z] =================================================================================================================== 00:09:32.700 [2024-10-12T23:20:18.278Z] Total : 14472.20 56.53 0.00 0.00 0.00 0.00 0.00 00:09:32.700 00:09:32.700 00:09:32.700 Latency(us) 00:09:32.700 [2024-10-12T23:20:18.278Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:32.700 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:32.700 Nvme0n1 : 10.01 14476.37 56.55 0.00 0.00 8837.33 5194.33 16699.54 00:09:32.700 [2024-10-12T23:20:18.278Z] =================================================================================================================== 00:09:32.700 [2024-10-12T23:20:18.278Z] Total : 14476.37 56.55 0.00 0.00 8837.33 5194.33 16699.54 00:09:32.700 { 00:09:32.700 "results": [ 00:09:32.700 { 00:09:32.700 "job": "Nvme0n1", 00:09:32.700 "core_mask": "0x2", 00:09:32.700 "workload": "randwrite", 00:09:32.700 "status": "finished", 00:09:32.700 "queue_depth": 128, 00:09:32.700 "io_size": 4096, 00:09:32.700 
"runtime": 10.005963, 00:09:32.700 "iops": 14476.367741915496, 00:09:32.700 "mibps": 56.548311491857405, 00:09:32.700 "io_failed": 0, 00:09:32.700 "io_timeout": 0, 00:09:32.700 "avg_latency_us": 8837.328138319335, 00:09:32.700 "min_latency_us": 5194.334814814815, 00:09:32.700 "max_latency_us": 16699.543703703705 00:09:32.700 } 00:09:32.700 ], 00:09:32.700 "core_count": 1 00:09:32.700 } 00:09:32.700 01:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1497398 00:09:32.700 01:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 1497398 ']' 00:09:32.700 01:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 1497398 00:09:32.700 01:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:09:32.700 01:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:32.700 01:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1497398 00:09:32.958 01:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:32.958 01:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:32.958 01:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1497398' 00:09:32.958 killing process with pid 1497398 00:09:32.958 01:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 1497398 00:09:32.958 Received shutdown signal, test time was about 10.000000 seconds 00:09:32.958 00:09:32.958 Latency(us) 00:09:32.958 [2024-10-12T23:20:18.536Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:32.958 [2024-10-12T23:20:18.536Z] =================================================================================================================== 00:09:32.958 [2024-10-12T23:20:18.536Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:32.958 01:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 1497398 00:09:32.958 01:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:33.216 01:20:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:33.474 01:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cc7b1c80-7cb8-4318-af39-01999636c5f4 00:09:33.474 01:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:33.733 01:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:33.733 01:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:33.733 01:20:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1494768 00:09:33.733 01:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1494768 00:09:33.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1494768 Killed "${NVMF_APP[@]}" "$@" 00:09:33.991 01:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:33.991 01:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:33.991 01:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:33.991 01:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:33.991 01:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:33.991 01:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=1498870 00:09:33.991 01:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:33.991 01:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 1498870 00:09:33.991 01:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1498870 ']' 00:09:33.991 01:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.991 01:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:33.991 01:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.991 01:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:33.991 01:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:33.991 [2024-10-13 01:20:19.390855] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:09:33.991 [2024-10-13 01:20:19.390955] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:33.991 [2024-10-13 01:20:19.456011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.991 [2024-10-13 01:20:19.502577] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:33.991 [2024-10-13 01:20:19.502633] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:33.991 [2024-10-13 01:20:19.502661] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:33.991 [2024-10-13 01:20:19.502672] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
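This is the dirty half of lvs_grow_dirty: the first target (pid 1494768) is killed with SIGKILL while the grown lvstore is still open, so it is never cleanly unloaded. A fresh nvmf_tgt (pid 1498870) is started in the same namespace and the same backing file is re-registered; the blobstore recovery notices just below ("Performing recovery on blobstore", "Recover: blob 0x0" / "0x1") are the expected outcome. In outline (sketch; pids, paths and UUIDs are from the log, the wait handling is illustrative):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC=$SPDK/scripts/rpc.py

kill -9 1494768; wait 1494768 || true              # dirty shutdown of the first target

# Fresh target in the same namespace, then re-attach the 400M AIO file.
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
$RPC bdev_aio_create "$SPDK/test/nvmf/target/aio_bdev" aio_bdev 4096
# -> blobstore replays the dirty metadata; lvol 6a5037d4-53d5-450d-b7e1-a69f95e2b813 reappears.
$RPC bdev_lvol_get_lvstores -u cc7b1c80-7cb8-4318-af39-01999636c5f4 \
    | jq -r '.[0].free_clusters'                   # still 61 of the 99 grown clusters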
00:09:33.991 [2024-10-13 01:20:19.502682] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:33.991 [2024-10-13 01:20:19.503297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.250 01:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:34.250 01:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:34.250 01:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:34.250 01:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:34.250 01:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:34.250 01:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:34.250 01:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:34.508 [2024-10-13 01:20:19.902580] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:34.508 [2024-10-13 01:20:19.902759] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:34.508 [2024-10-13 01:20:19.902818] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:34.508 01:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:34.508 01:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 6a5037d4-53d5-450d-b7e1-a69f95e2b813 00:09:34.508 01:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=6a5037d4-53d5-450d-b7e1-a69f95e2b813 00:09:34.508 01:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:34.508 01:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:34.508 01:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:34.508 01:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:34.508 01:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:34.766 01:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6a5037d4-53d5-450d-b7e1-a69f95e2b813 -t 2000 00:09:35.024 [ 00:09:35.024 { 00:09:35.024 "name": "6a5037d4-53d5-450d-b7e1-a69f95e2b813", 00:09:35.024 "aliases": [ 00:09:35.024 "lvs/lvol" 00:09:35.024 ], 00:09:35.024 "product_name": "Logical Volume", 00:09:35.024 "block_size": 4096, 00:09:35.024 "num_blocks": 38912, 00:09:35.024 "uuid": "6a5037d4-53d5-450d-b7e1-a69f95e2b813", 00:09:35.024 "assigned_rate_limits": { 00:09:35.024 "rw_ios_per_sec": 0, 00:09:35.024 "rw_mbytes_per_sec": 0, 
00:09:35.024 "r_mbytes_per_sec": 0, 00:09:35.024 "w_mbytes_per_sec": 0 00:09:35.024 }, 00:09:35.024 "claimed": false, 00:09:35.024 "zoned": false, 00:09:35.024 "supported_io_types": { 00:09:35.024 "read": true, 00:09:35.024 "write": true, 00:09:35.024 "unmap": true, 00:09:35.024 "flush": false, 00:09:35.024 "reset": true, 00:09:35.024 "nvme_admin": false, 00:09:35.024 "nvme_io": false, 00:09:35.024 "nvme_io_md": false, 00:09:35.025 "write_zeroes": true, 00:09:35.025 "zcopy": false, 00:09:35.025 "get_zone_info": false, 00:09:35.025 "zone_management": false, 00:09:35.025 "zone_append": false, 00:09:35.025 "compare": false, 00:09:35.025 "compare_and_write": false, 00:09:35.025 "abort": false, 00:09:35.025 "seek_hole": true, 00:09:35.025 "seek_data": true, 00:09:35.025 "copy": false, 00:09:35.025 "nvme_iov_md": false 00:09:35.025 }, 00:09:35.025 "driver_specific": { 00:09:35.025 "lvol": { 00:09:35.025 "lvol_store_uuid": "cc7b1c80-7cb8-4318-af39-01999636c5f4", 00:09:35.025 "base_bdev": "aio_bdev", 00:09:35.025 "thin_provision": false, 00:09:35.025 "num_allocated_clusters": 38, 00:09:35.025 "snapshot": false, 00:09:35.025 "clone": false, 00:09:35.025 "esnap_clone": false 00:09:35.025 } 00:09:35.025 } 00:09:35.025 } 00:09:35.025 ] 00:09:35.025 01:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:35.025 01:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cc7b1c80-7cb8-4318-af39-01999636c5f4 00:09:35.025 01:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:35.283 01:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:35.283 01:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cc7b1c80-7cb8-4318-af39-01999636c5f4 00:09:35.283 01:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:35.541 01:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:35.541 01:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:35.799 [2024-10-13 01:20:21.291998] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:35.799 01:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cc7b1c80-7cb8-4318-af39-01999636c5f4 00:09:35.799 01:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:09:35.799 01:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cc7b1c80-7cb8-4318-af39-01999636c5f4 00:09:35.799 01:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:35.799 01:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:35.799 01:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:35.799 01:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:35.799 01:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:35.799 01:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:35.799 01:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:35.799 01:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:35.799 01:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cc7b1c80-7cb8-4318-af39-01999636c5f4 00:09:36.057 request: 00:09:36.057 { 00:09:36.057 "uuid": "cc7b1c80-7cb8-4318-af39-01999636c5f4", 00:09:36.057 "method": "bdev_lvol_get_lvstores", 00:09:36.057 "req_id": 1 00:09:36.057 } 00:09:36.057 Got JSON-RPC error response 00:09:36.057 response: 00:09:36.057 { 00:09:36.057 "code": -19, 00:09:36.057 "message": "No such device" 00:09:36.057 } 00:09:36.057 01:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:09:36.057 01:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:36.057 01:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:36.057 01:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:36.057 01:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:36.315 aio_bdev 00:09:36.315 01:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6a5037d4-53d5-450d-b7e1-a69f95e2b813 00:09:36.315 01:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=6a5037d4-53d5-450d-b7e1-a69f95e2b813 00:09:36.315 01:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:36.315 01:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:36.315 01:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:36.315 01:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:36.315 01:20:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:36.573 01:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6a5037d4-53d5-450d-b7e1-a69f95e2b813 -t 2000 00:09:36.831 [ 00:09:36.831 { 00:09:36.831 "name": "6a5037d4-53d5-450d-b7e1-a69f95e2b813", 00:09:36.831 "aliases": [ 00:09:36.831 "lvs/lvol" 00:09:36.831 ], 00:09:36.831 "product_name": "Logical Volume", 00:09:36.831 "block_size": 4096, 00:09:36.831 "num_blocks": 38912, 00:09:36.831 "uuid": "6a5037d4-53d5-450d-b7e1-a69f95e2b813", 00:09:36.831 "assigned_rate_limits": { 00:09:36.831 "rw_ios_per_sec": 0, 00:09:36.831 "rw_mbytes_per_sec": 0, 00:09:36.831 "r_mbytes_per_sec": 0, 00:09:36.831 "w_mbytes_per_sec": 0 00:09:36.831 }, 00:09:36.831 "claimed": false, 00:09:36.831 "zoned": false, 00:09:36.831 "supported_io_types": { 00:09:36.831 "read": true, 00:09:36.831 "write": true, 00:09:36.831 "unmap": true, 00:09:36.831 "flush": false, 00:09:36.831 "reset": true, 00:09:36.831 "nvme_admin": false, 00:09:36.831 "nvme_io": false, 00:09:36.831 "nvme_io_md": false, 00:09:36.831 "write_zeroes": true, 00:09:36.831 "zcopy": false, 00:09:36.831 "get_zone_info": false, 00:09:36.831 "zone_management": false, 00:09:36.831 "zone_append": false, 00:09:36.831 "compare": false, 00:09:36.831 "compare_and_write": false, 00:09:36.831 "abort": false, 00:09:36.831 "seek_hole": true, 00:09:36.831 "seek_data": true, 00:09:36.831 "copy": false, 00:09:36.831 "nvme_iov_md": false 00:09:36.831 }, 00:09:36.831 "driver_specific": { 00:09:36.831 "lvol": { 00:09:36.831 "lvol_store_uuid": "cc7b1c80-7cb8-4318-af39-01999636c5f4", 00:09:36.831 "base_bdev": "aio_bdev", 00:09:36.831 "thin_provision": false, 00:09:36.831 "num_allocated_clusters": 38, 00:09:36.831 "snapshot": false, 00:09:36.831 "clone": false, 00:09:36.831 "esnap_clone": false 00:09:36.831 } 00:09:36.831 } 00:09:36.831 } 00:09:36.831 ] 00:09:36.831 01:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:37.089 01:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cc7b1c80-7cb8-4318-af39-01999636c5f4 00:09:37.089 01:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:37.347 01:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:37.347 01:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cc7b1c80-7cb8-4318-af39-01999636c5f4 00:09:37.347 01:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:37.605 01:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:37.605 01:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6a5037d4-53d5-450d-b7e1-a69f95e2b813 00:09:37.864 01:20:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cc7b1c80-7cb8-4318-af39-01999636c5f4 00:09:38.130 01:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:38.441 01:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:38.441 00:09:38.441 real 0m19.665s 00:09:38.441 user 0m49.689s 00:09:38.441 sys 0m4.604s 00:09:38.441 01:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:38.441 01:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:38.441 ************************************ 00:09:38.441 END TEST lvs_grow_dirty 00:09:38.441 ************************************ 00:09:38.441 01:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:38.441 01:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:09:38.441 01:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:09:38.441 01:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:09:38.441 01:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:38.441 01:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:09:38.441 01:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:09:38.441 01:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:09:38.441 01:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:38.441 nvmf_trace.0 00:09:38.441 01:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:09:38.441 01:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:38.441 01:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:38.441 01:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:38.441 01:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:38.441 01:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:38.441 01:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:38.441 01:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:38.441 rmmod nvme_tcp 00:09:38.441 rmmod nvme_fabrics 00:09:38.441 rmmod nvme_keyring 00:09:38.441 01:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:38.441 01:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:38.441 01:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:38.441 
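Stripped of the xtrace noise, the dirty-grow recovery check traced above reduces to the rpc.py sequence below. This is a readability sketch only, not part of nvmf_lvs_grow.sh; the paths, UUIDs, 4096-byte block size and the asserted cluster counts (61 free / 99 total) are the ones that appear in the trace.

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    AIO=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev

    # Re-attach the backing file after the unclean shutdown; the blobstore
    # replays its dirty metadata ("Performing recovery on blobstore" above).
    $RPC bdev_aio_create "$AIO" aio_bdev 4096
    $RPC bdev_wait_for_examine

    # The lvol and its lvstore must reappear with the grown geometry.
    $RPC bdev_get_bdevs -b 6a5037d4-53d5-450d-b7e1-a69f95e2b813 -t 2000
    free=$($RPC bdev_lvol_get_lvstores -u cc7b1c80-7cb8-4318-af39-01999636c5f4 | jq -r '.[0].free_clusters')
    total=$($RPC bdev_lvol_get_lvstores -u cc7b1c80-7cb8-4318-af39-01999636c5f4 | jq -r '.[0].total_data_clusters')
    (( free == 61 && total == 99 ))

    # Tear-down, mirroring the end of the test.
    $RPC bdev_lvol_delete 6a5037d4-53d5-450d-b7e1-a69f95e2b813
    $RPC bdev_lvol_delete_lvstore -u cc7b1c80-7cb8-4318-af39-01999636c5f4
    $RPC bdev_aio_delete aio_bdev
    rm -f "$AIO"
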
01:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 1498870 ']' 00:09:38.441 01:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 1498870 00:09:38.441 01:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 1498870 ']' 00:09:38.441 01:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 1498870 00:09:38.441 01:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:09:38.441 01:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:38.441 01:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1498870 00:09:38.723 01:20:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:38.723 01:20:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:38.723 01:20:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1498870' 00:09:38.723 killing process with pid 1498870 00:09:38.723 01:20:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 1498870 00:09:38.723 01:20:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 1498870 00:09:38.723 01:20:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:38.723 01:20:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:38.723 01:20:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:38.723 01:20:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:38.723 01:20:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:09:38.723 01:20:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:38.723 01:20:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:09:38.723 01:20:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:38.723 01:20:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:38.723 01:20:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.723 01:20:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:38.723 01:20:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:41.257 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:41.257 00:09:41.257 real 0m42.978s 00:09:41.257 user 1m13.071s 00:09:41.257 sys 0m8.495s 00:09:41.257 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:41.257 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:41.257 ************************************ 00:09:41.257 END TEST nvmf_lvs_grow 00:09:41.257 ************************************ 00:09:41.257 01:20:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:41.257 01:20:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:41.257 01:20:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:41.257 01:20:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:41.257 ************************************ 00:09:41.257 START TEST nvmf_bdev_io_wait 00:09:41.257 ************************************ 00:09:41.257 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:41.257 * Looking for test storage... 00:09:41.257 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:41.257 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:41.257 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:09:41.257 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:41.257 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:41.257 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:41.257 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:41.257 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:41.257 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:41.257 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:41.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.258 --rc genhtml_branch_coverage=1 00:09:41.258 --rc genhtml_function_coverage=1 00:09:41.258 --rc genhtml_legend=1 00:09:41.258 --rc geninfo_all_blocks=1 00:09:41.258 --rc geninfo_unexecuted_blocks=1 00:09:41.258 00:09:41.258 ' 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:41.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.258 --rc genhtml_branch_coverage=1 00:09:41.258 --rc genhtml_function_coverage=1 00:09:41.258 --rc genhtml_legend=1 00:09:41.258 --rc geninfo_all_blocks=1 00:09:41.258 --rc geninfo_unexecuted_blocks=1 00:09:41.258 00:09:41.258 ' 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:41.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.258 --rc genhtml_branch_coverage=1 00:09:41.258 --rc genhtml_function_coverage=1 00:09:41.258 --rc genhtml_legend=1 00:09:41.258 --rc geninfo_all_blocks=1 00:09:41.258 --rc geninfo_unexecuted_blocks=1 00:09:41.258 00:09:41.258 ' 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:41.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.258 --rc genhtml_branch_coverage=1 00:09:41.258 --rc genhtml_function_coverage=1 00:09:41.258 --rc genhtml_legend=1 00:09:41.258 --rc geninfo_all_blocks=1 00:09:41.258 --rc geninfo_unexecuted_blocks=1 00:09:41.258 00:09:41.258 ' 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:41.258 01:20:26 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:41.258 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:41.259 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:41.259 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:41.259 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:41.259 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:41.259 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:41.259 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:41.259 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:41.259 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:41.259 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:09:41.259 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:41.259 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:41.259 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:41.259 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:41.259 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:41.259 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:41.259 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:41.259 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:41.259 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:41.259 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:41.259 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:41.259 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:41.259 01:20:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:43.158 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:43.158 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:43.158 01:20:28 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:43.158 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:43.158 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:43.159 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:43.159 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:43.159 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:09:43.159 00:09:43.159 --- 10.0.0.2 ping statistics --- 00:09:43.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.159 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:43.159 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:43.159 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:09:43.159 00:09:43.159 --- 10.0.0.1 ping statistics --- 00:09:43.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.159 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=1501414 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 1501414 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 1501414 ']' 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:43.159 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:43.159 [2024-10-13 01:20:28.591912] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
00:09:43.159 [2024-10-13 01:20:28.592013] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:43.159 [2024-10-13 01:20:28.664366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:43.159 [2024-10-13 01:20:28.717798] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:43.159 [2024-10-13 01:20:28.717859] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:43.159 [2024-10-13 01:20:28.717876] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:43.159 [2024-10-13 01:20:28.717890] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:43.159 [2024-10-13 01:20:28.717901] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:43.159 [2024-10-13 01:20:28.719623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:43.159 [2024-10-13 01:20:28.719680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:43.159 [2024-10-13 01:20:28.719719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:43.159 [2024-10-13 01:20:28.719722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.417 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:43.417 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:09:43.417 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:43.417 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:43.417 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:43.417 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:43.417 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:43.417 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.417 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:43.417 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.417 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:43.417 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.417 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:43.418 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.418 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:43.418 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.418 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:09:43.418 [2024-10-13 01:20:28.960019] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:43.418 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.418 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:43.418 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.418 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:43.418 Malloc0 00:09:43.418 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.418 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:43.418 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.418 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:43.676 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.676 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:43.676 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.676 01:20:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:43.676 01:20:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.676 01:20:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:43.676 01:20:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.676 01:20:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:43.676 [2024-10-13 01:20:29.012672] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:43.676 01:20:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.676 01:20:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1501440 00:09:43.676 01:20:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:43.676 01:20:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:43.676 01:20:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:09:43.676 01:20:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1501442 00:09:43.676 01:20:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:09:43.676 01:20:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:43.676 01:20:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:43.677 { 00:09:43.677 "params": { 
00:09:43.677 "name": "Nvme$subsystem", 00:09:43.677 "trtype": "$TEST_TRANSPORT", 00:09:43.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:43.677 "adrfam": "ipv4", 00:09:43.677 "trsvcid": "$NVMF_PORT", 00:09:43.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:43.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:43.677 "hdgst": ${hdgst:-false}, 00:09:43.677 "ddgst": ${ddgst:-false} 00:09:43.677 }, 00:09:43.677 "method": "bdev_nvme_attach_controller" 00:09:43.677 } 00:09:43.677 EOF 00:09:43.677 )") 00:09:43.677 01:20:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:43.677 01:20:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:43.677 01:20:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1501444 00:09:43.677 01:20:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:09:43.677 01:20:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:09:43.677 01:20:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:43.677 01:20:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:43.677 { 00:09:43.677 "params": { 00:09:43.677 "name": "Nvme$subsystem", 00:09:43.677 "trtype": "$TEST_TRANSPORT", 00:09:43.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:43.677 "adrfam": "ipv4", 00:09:43.677 "trsvcid": "$NVMF_PORT", 00:09:43.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:43.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:43.677 "hdgst": ${hdgst:-false}, 00:09:43.677 "ddgst": ${ddgst:-false} 00:09:43.677 }, 00:09:43.677 "method": "bdev_nvme_attach_controller" 00:09:43.677 } 00:09:43.677 EOF 00:09:43.677 )") 00:09:43.677 01:20:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:09:43.677 01:20:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:43.677 01:20:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:43.677 01:20:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1501447 00:09:43.677 01:20:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:09:43.677 01:20:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:43.677 01:20:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:09:43.677 01:20:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:43.677 01:20:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:43.677 { 00:09:43.677 "params": { 00:09:43.677 "name": "Nvme$subsystem", 00:09:43.677 "trtype": "$TEST_TRANSPORT", 00:09:43.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:43.677 "adrfam": "ipv4", 00:09:43.677 "trsvcid": "$NVMF_PORT", 00:09:43.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:43.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:43.677 "hdgst": ${hdgst:-false}, 
00:09:43.677 "ddgst": ${ddgst:-false} 00:09:43.677 }, 00:09:43.677 "method": "bdev_nvme_attach_controller" 00:09:43.677 } 00:09:43.677 EOF 00:09:43.677 )") 00:09:43.677 01:20:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:43.677 01:20:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:43.677 01:20:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:09:43.677 01:20:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:09:43.677 01:20:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:09:43.677 01:20:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:43.677 01:20:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:43.677 { 00:09:43.677 "params": { 00:09:43.677 "name": "Nvme$subsystem", 00:09:43.677 "trtype": "$TEST_TRANSPORT", 00:09:43.677 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:43.677 "adrfam": "ipv4", 00:09:43.677 "trsvcid": "$NVMF_PORT", 00:09:43.677 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:43.677 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:43.677 "hdgst": ${hdgst:-false}, 00:09:43.677 "ddgst": ${ddgst:-false} 00:09:43.677 }, 00:09:43.677 "method": "bdev_nvme_attach_controller" 00:09:43.677 } 00:09:43.677 EOF 00:09:43.677 )") 00:09:43.677 01:20:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:09:43.677 01:20:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1501440 00:09:43.677 01:20:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:09:43.677 01:20:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:09:43.677 01:20:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:09:43.677 01:20:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:09:43.677 01:20:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:09:43.677 01:20:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:43.677 "params": { 00:09:43.677 "name": "Nvme1", 00:09:43.677 "trtype": "tcp", 00:09:43.677 "traddr": "10.0.0.2", 00:09:43.677 "adrfam": "ipv4", 00:09:43.677 "trsvcid": "4420", 00:09:43.677 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:43.677 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:43.677 "hdgst": false, 00:09:43.677 "ddgst": false 00:09:43.677 }, 00:09:43.677 "method": "bdev_nvme_attach_controller" 00:09:43.677 }' 00:09:43.677 01:20:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:09:43.677 01:20:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
00:09:43.677 01:20:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:43.677 "params": { 00:09:43.677 "name": "Nvme1", 00:09:43.677 "trtype": "tcp", 00:09:43.677 "traddr": "10.0.0.2", 00:09:43.677 "adrfam": "ipv4", 00:09:43.677 "trsvcid": "4420", 00:09:43.677 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:43.677 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:43.677 "hdgst": false, 00:09:43.677 "ddgst": false 00:09:43.677 }, 00:09:43.677 "method": "bdev_nvme_attach_controller" 00:09:43.677 }' 00:09:43.677 01:20:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:09:43.677 01:20:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:43.677 "params": { 00:09:43.677 "name": "Nvme1", 00:09:43.677 "trtype": "tcp", 00:09:43.677 "traddr": "10.0.0.2", 00:09:43.677 "adrfam": "ipv4", 00:09:43.677 "trsvcid": "4420", 00:09:43.677 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:43.677 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:43.677 "hdgst": false, 00:09:43.677 "ddgst": false 00:09:43.677 }, 00:09:43.677 "method": "bdev_nvme_attach_controller" 00:09:43.677 }' 00:09:43.677 01:20:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:09:43.677 01:20:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:43.677 "params": { 00:09:43.677 "name": "Nvme1", 00:09:43.677 "trtype": "tcp", 00:09:43.677 "traddr": "10.0.0.2", 00:09:43.677 "adrfam": "ipv4", 00:09:43.677 "trsvcid": "4420", 00:09:43.677 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:43.677 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:43.677 "hdgst": false, 00:09:43.677 "ddgst": false 00:09:43.677 }, 00:09:43.677 "method": "bdev_nvme_attach_controller" 00:09:43.677 }' 00:09:43.677 [2024-10-13 01:20:29.062061] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:09:43.677 [2024-10-13 01:20:29.062061] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:09:43.677 [2024-10-13 01:20:29.062146] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:43.677 [2024-10-13 01:20:29.062146] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:43.677 [2024-10-13 01:20:29.062946] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:09:43.677 [2024-10-13 01:20:29.062952] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
00:09:43.677 [2024-10-13 01:20:29.063023] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:43.677 [2024-10-13 01:20:29.063023] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:43.677 [2024-10-13 01:20:29.237975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.935 [2024-10-13 01:20:29.281153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:43.935 [2024-10-13 01:20:29.343515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.935 [2024-10-13 01:20:29.384806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:43.935 [2024-10-13 01:20:29.412686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.936 [2024-10-13 01:20:29.449378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:43.936 [2024-10-13 01:20:29.482213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.194 [2024-10-13 01:20:29.520483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:44.194 Running I/O for 1 seconds... 00:09:44.194 Running I/O for 1 seconds... 00:09:44.194 Running I/O for 1 seconds... 00:09:44.452 Running I/O for 1 seconds... 00:09:45.387 11131.00 IOPS, 43.48 MiB/s 00:09:45.387 Latency(us) 00:09:45.387 [2024-10-12T23:20:30.965Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:45.387 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:45.387 Nvme1n1 : 1.01 11194.46 43.73 0.00 0.00 11393.05 4951.61 18252.99 00:09:45.387 [2024-10-12T23:20:30.965Z] =================================================================================================================== 00:09:45.387 [2024-10-12T23:20:30.965Z] Total : 11194.46 43.73 0.00 0.00 11393.05 4951.61 18252.99 00:09:45.387 8146.00 IOPS, 31.82 MiB/s 00:09:45.387 Latency(us) 00:09:45.387 [2024-10-12T23:20:30.965Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:45.387 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:45.387 Nvme1n1 : 1.01 8191.85 32.00 0.00 0.00 15538.13 8883.77 24563.86 00:09:45.387 [2024-10-12T23:20:30.965Z] =================================================================================================================== 00:09:45.387 [2024-10-12T23:20:30.965Z] Total : 8191.85 32.00 0.00 0.00 15538.13 8883.77 24563.86 00:09:45.387 8592.00 IOPS, 33.56 MiB/s 00:09:45.387 Latency(us) 00:09:45.387 [2024-10-12T23:20:30.965Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:45.387 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:45.387 Nvme1n1 : 1.01 8660.66 33.83 0.00 0.00 14717.71 5825.42 25631.86 00:09:45.387 [2024-10-12T23:20:30.965Z] =================================================================================================================== 00:09:45.387 [2024-10-12T23:20:30.965Z] Total : 8660.66 33.83 0.00 0.00 14717.71 5825.42 25631.86 00:09:45.387 187848.00 IOPS, 733.78 MiB/s 00:09:45.387 Latency(us) 00:09:45.387 [2024-10-12T23:20:30.965Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:09:45.387 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:45.387 Nvme1n1 : 1.00 187496.94 732.41 0.00 0.00 679.05 285.20 1844.72 00:09:45.387 [2024-10-12T23:20:30.965Z] =================================================================================================================== 00:09:45.387 [2024-10-12T23:20:30.965Z] Total : 187496.94 732.41 0.00 0.00 679.05 285.20 1844.72 00:09:45.387 01:20:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1501442 00:09:45.387 01:20:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1501444 00:09:45.645 01:20:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1501447 00:09:45.645 01:20:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:45.645 01:20:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.645 01:20:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:45.645 01:20:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.645 01:20:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:45.645 01:20:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:45.645 01:20:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:45.645 01:20:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:45.645 01:20:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:45.645 01:20:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:45.645 01:20:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:45.645 01:20:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:45.645 rmmod nvme_tcp 00:09:45.645 rmmod nvme_fabrics 00:09:45.645 rmmod nvme_keyring 00:09:45.645 01:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:45.645 01:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:45.645 01:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:45.645 01:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 1501414 ']' 00:09:45.645 01:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 1501414 00:09:45.645 01:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 1501414 ']' 00:09:45.645 01:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 1501414 00:09:45.645 01:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:09:45.645 01:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:45.645 01:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1501414 00:09:45.645 01:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:45.645 01:20:31 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:45.645 01:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1501414' 00:09:45.645 killing process with pid 1501414 00:09:45.645 01:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 1501414 00:09:45.645 01:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 1501414 00:09:45.904 01:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:45.904 01:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:45.904 01:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:45.904 01:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:45.904 01:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:09:45.904 01:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:45.904 01:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:09:45.904 01:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:45.904 01:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:45.904 01:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.904 01:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:45.904 01:20:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.807 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:47.807 00:09:47.807 real 0m7.010s 00:09:47.807 user 0m15.413s 00:09:47.807 sys 0m3.605s 00:09:47.807 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:47.807 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:47.807 ************************************ 00:09:47.807 END TEST nvmf_bdev_io_wait 00:09:47.807 ************************************ 00:09:47.807 01:20:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:47.807 01:20:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:47.807 01:20:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:47.807 01:20:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:47.807 ************************************ 00:09:47.807 START TEST nvmf_queue_depth 00:09:47.807 ************************************ 00:09:47.807 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:48.066 * Looking for test storage... 
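(Condensed for reference, the nvmf_bdev_io_wait run that finished above amounts to one NVMe/TCP subsystem plus four single-core bdevperf jobs, one per I/O type. The sketch below is assembled from the traced commands and is not the verbatim test script: paths are shortened, scripts/rpc.py stands in for the harness rpc_cmd wrapper, and gen_nvmf_target_json is the harness helper whose JSON output was printed above and piped over /dev/fd/63. The TCP transport itself was created one step earlier in the trace.)

  # target side: export a 64 MB malloc bdev (512 B blocks) over NVMe/TCP on 10.0.0.2:4420
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # initiator side: write/read/flush/unmap bdevperf instances run concurrently, 1 second each
  for spec in "0x10 1 write" "0x20 2 read" "0x40 3 flush" "0x80 4 unmap"; do
    set -- $spec
    build/examples/bdevperf -m "$1" -i "$2" --json <(gen_nvmf_target_json) \
      -q 128 -o 4096 -w "$3" -t 1 -s 256 &
  done
  wait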
00:09:48.066 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:48.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.066 --rc genhtml_branch_coverage=1 00:09:48.066 --rc genhtml_function_coverage=1 00:09:48.066 --rc genhtml_legend=1 00:09:48.066 --rc geninfo_all_blocks=1 00:09:48.066 --rc geninfo_unexecuted_blocks=1 00:09:48.066 00:09:48.066 ' 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:48.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.066 --rc genhtml_branch_coverage=1 00:09:48.066 --rc genhtml_function_coverage=1 00:09:48.066 --rc genhtml_legend=1 00:09:48.066 --rc geninfo_all_blocks=1 00:09:48.066 --rc geninfo_unexecuted_blocks=1 00:09:48.066 00:09:48.066 ' 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:48.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.066 --rc genhtml_branch_coverage=1 00:09:48.066 --rc genhtml_function_coverage=1 00:09:48.066 --rc genhtml_legend=1 00:09:48.066 --rc geninfo_all_blocks=1 00:09:48.066 --rc geninfo_unexecuted_blocks=1 00:09:48.066 00:09:48.066 ' 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:48.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.066 --rc genhtml_branch_coverage=1 00:09:48.066 --rc genhtml_function_coverage=1 00:09:48.066 --rc genhtml_legend=1 00:09:48.066 --rc geninfo_all_blocks=1 00:09:48.066 --rc geninfo_unexecuted_blocks=1 00:09:48.066 00:09:48.066 ' 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:48.066 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:48.067 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.067 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.067 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.067 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:48.067 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:48.067 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:48.067 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:48.067 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:48.067 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:48.067 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:48.067 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:48.067 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:48.067 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:48.067 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:48.067 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:48.067 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:48.067 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:48.067 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:09:48.067 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:48.067 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:48.067 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:48.067 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:48.067 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:48.067 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:48.067 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:48.067 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:48.067 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:48.067 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:48.067 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:48.067 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:48.067 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:48.067 01:20:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:49.970 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:49.970 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:49.970 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:49.970 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:49.970 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:49.971 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:49.971 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:49.971 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:49.971 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:49.971 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:49.971 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:49.971 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:49.971 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:50.229 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:50.229 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:50.229 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:50.229 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:50.229 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:50.229 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:50.229 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:50.229 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:50.229 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:50.229 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:09:50.229 00:09:50.229 --- 10.0.0.2 ping statistics --- 00:09:50.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.229 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:09:50.229 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:50.229 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:50.229 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:09:50.229 00:09:50.229 --- 10.0.0.1 ping statistics --- 00:09:50.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.229 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:09:50.229 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:50.229 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:09:50.229 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:50.229 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:50.229 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:50.229 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:50.229 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:50.229 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:50.229 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:50.229 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:50.229 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:50.229 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:50.229 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:50.229 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=1503674 00:09:50.229 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:50.229 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 1503674 00:09:50.229 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1503674 ']' 00:09:50.229 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.229 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:50.229 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.229 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:50.229 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:50.229 [2024-10-13 01:20:35.709781] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
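(The nvmftestinit block above builds the phy test bed by parking one of the two detected ports, cvl_0_0, in a private network namespace, so the target at 10.0.0.2 and the initiator at 10.0.0.1 reach each other over the wire. Stripped of the xtrace noise, the plumbing and target launch reduce to roughly the following; interface and namespace names are the ones detected on this host, and the nvmf_tgt path is shortened.)

  ip netns add cvl_0_0_ns_spdk                                   # namespace that will own the target port
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in from the initiator side

  # the target then runs inside the namespace on core 1 (-m 0x2), as launched just above
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &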
00:09:50.229 [2024-10-13 01:20:35.709876] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:50.229 [2024-10-13 01:20:35.781891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.488 [2024-10-13 01:20:35.830486] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:50.488 [2024-10-13 01:20:35.830548] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:50.488 [2024-10-13 01:20:35.830565] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:50.488 [2024-10-13 01:20:35.830578] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:50.488 [2024-10-13 01:20:35.830589] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:50.488 [2024-10-13 01:20:35.831213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:50.488 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:50.488 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:50.488 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:50.488 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:50.488 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:50.488 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:50.488 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:50.488 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.488 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:50.488 [2024-10-13 01:20:35.977748] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:50.488 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.488 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:50.488 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.488 01:20:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:50.488 Malloc0 00:09:50.488 01:20:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.488 01:20:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:50.488 01:20:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.488 01:20:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:50.488 01:20:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.488 01:20:36 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:50.488 01:20:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.488 01:20:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:50.488 01:20:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.488 01:20:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:50.488 01:20:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.488 01:20:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:50.488 [2024-10-13 01:20:36.027445] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:50.488 01:20:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.488 01:20:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1503700 00:09:50.488 01:20:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:50.488 01:20:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:50.488 01:20:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1503700 /var/tmp/bdevperf.sock 00:09:50.488 01:20:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1503700 ']' 00:09:50.488 01:20:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:50.488 01:20:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:50.488 01:20:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:50.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:50.488 01:20:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:50.488 01:20:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:50.745 [2024-10-13 01:20:36.078392] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
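(The queue_depth case itself then points a single bdevperf instance at that target with a very deep queue. The traced steps reduce to the sketch below; bdevperf is started with -z so it idles on its own RPC socket until the controller is attached and perform_tests is issued, both of which follow in the trace. As above, scripts/rpc.py stands in for rpc_cmd and paths are shortened.)

  # target side: create the TCP transport (same -t tcp -o -u 8192 options as the trace) and export one malloc namespace
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # host side: 10 s verify workload at queue depth 1024 against the attached controller
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests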
00:09:50.745 [2024-10-13 01:20:36.078467] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1503700 ] 00:09:50.746 [2024-10-13 01:20:36.141551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.746 [2024-10-13 01:20:36.191350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.746 01:20:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:50.746 01:20:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:50.746 01:20:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:50.746 01:20:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.746 01:20:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:51.003 NVMe0n1 00:09:51.003 01:20:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.003 01:20:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:51.261 Running I/O for 10 seconds... 00:09:53.128 8181.00 IOPS, 31.96 MiB/s [2024-10-12T23:20:40.081Z] 8192.00 IOPS, 32.00 MiB/s [2024-10-12T23:20:41.013Z] 8334.00 IOPS, 32.55 MiB/s [2024-10-12T23:20:41.947Z] 8400.50 IOPS, 32.81 MiB/s [2024-10-12T23:20:42.881Z] 8392.80 IOPS, 32.78 MiB/s [2024-10-12T23:20:43.815Z] 8437.50 IOPS, 32.96 MiB/s [2024-10-12T23:20:44.748Z] 8455.14 IOPS, 33.03 MiB/s [2024-10-12T23:20:45.682Z] 8444.00 IOPS, 32.98 MiB/s [2024-10-12T23:20:47.055Z] 8483.67 IOPS, 33.14 MiB/s [2024-10-12T23:20:47.055Z] 8490.80 IOPS, 33.17 MiB/s 00:10:01.477 Latency(us) 00:10:01.477 [2024-10-12T23:20:47.055Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:01.477 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:01.477 Verification LBA range: start 0x0 length 0x4000 00:10:01.477 NVMe0n1 : 10.09 8514.87 33.26 0.00 0.00 119755.63 23981.32 71846.87 00:10:01.477 [2024-10-12T23:20:47.055Z] =================================================================================================================== 00:10:01.477 [2024-10-12T23:20:47.055Z] Total : 8514.87 33.26 0.00 0.00 119755.63 23981.32 71846.87 00:10:01.477 { 00:10:01.477 "results": [ 00:10:01.477 { 00:10:01.477 "job": "NVMe0n1", 00:10:01.477 "core_mask": "0x1", 00:10:01.477 "workload": "verify", 00:10:01.477 "status": "finished", 00:10:01.477 "verify_range": { 00:10:01.477 "start": 0, 00:10:01.477 "length": 16384 00:10:01.477 }, 00:10:01.477 "queue_depth": 1024, 00:10:01.477 "io_size": 4096, 00:10:01.477 "runtime": 10.092581, 00:10:01.477 "iops": 8514.868495977391, 00:10:01.477 "mibps": 33.26120506241168, 00:10:01.477 "io_failed": 0, 00:10:01.477 "io_timeout": 0, 00:10:01.477 "avg_latency_us": 119755.62570720412, 00:10:01.477 "min_latency_us": 23981.321481481482, 00:10:01.477 "max_latency_us": 71846.87407407408 00:10:01.477 } 00:10:01.477 ], 00:10:01.477 "core_count": 1 00:10:01.477 } 00:10:01.477 01:20:46 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1503700 00:10:01.477 01:20:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1503700 ']' 00:10:01.477 01:20:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1503700 00:10:01.477 01:20:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:01.477 01:20:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:01.477 01:20:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1503700 00:10:01.477 01:20:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:01.477 01:20:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:01.477 01:20:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1503700' 00:10:01.477 killing process with pid 1503700 00:10:01.477 01:20:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1503700 00:10:01.477 Received shutdown signal, test time was about 10.000000 seconds 00:10:01.477 00:10:01.477 Latency(us) 00:10:01.477 [2024-10-12T23:20:47.055Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:01.477 [2024-10-12T23:20:47.055Z] =================================================================================================================== 00:10:01.477 [2024-10-12T23:20:47.055Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:01.477 01:20:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1503700 00:10:01.477 01:20:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:01.477 01:20:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:01.477 01:20:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:01.477 01:20:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:01.477 01:20:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:01.477 01:20:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:01.477 01:20:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:01.477 01:20:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:01.477 rmmod nvme_tcp 00:10:01.477 rmmod nvme_fabrics 00:10:01.735 rmmod nvme_keyring 00:10:01.735 01:20:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:01.735 01:20:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:01.735 01:20:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:01.735 01:20:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 1503674 ']' 00:10:01.735 01:20:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 1503674 00:10:01.735 01:20:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1503674 ']' 00:10:01.735 01:20:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@954 -- # kill -0 1503674 00:10:01.735 01:20:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:01.735 01:20:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:01.735 01:20:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1503674 00:10:01.735 01:20:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:01.735 01:20:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:01.735 01:20:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1503674' 00:10:01.735 killing process with pid 1503674 00:10:01.735 01:20:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1503674 00:10:01.735 01:20:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1503674 00:10:01.993 01:20:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:01.993 01:20:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:01.993 01:20:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:01.993 01:20:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:10:01.993 01:20:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:10:01.993 01:20:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:01.993 01:20:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:10:01.993 01:20:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:01.993 01:20:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:01.993 01:20:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:01.993 01:20:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:01.993 01:20:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:03.897 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:03.897 00:10:03.897 real 0m16.024s 00:10:03.897 user 0m22.683s 00:10:03.897 sys 0m3.000s 00:10:03.897 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:03.897 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:03.897 ************************************ 00:10:03.897 END TEST nvmf_queue_depth 00:10:03.897 ************************************ 00:10:03.897 01:20:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:03.897 01:20:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:03.897 01:20:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:03.897 01:20:49 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:10:03.897 ************************************ 00:10:03.897 START TEST nvmf_target_multipath 00:10:03.897 ************************************ 00:10:03.897 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:04.156 * Looking for test storage... 00:10:04.156 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:04.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.156 --rc genhtml_branch_coverage=1 00:10:04.156 --rc genhtml_function_coverage=1 00:10:04.156 --rc genhtml_legend=1 00:10:04.156 --rc geninfo_all_blocks=1 00:10:04.156 --rc geninfo_unexecuted_blocks=1 00:10:04.156 00:10:04.156 ' 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:04.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.156 --rc genhtml_branch_coverage=1 00:10:04.156 --rc genhtml_function_coverage=1 00:10:04.156 --rc genhtml_legend=1 00:10:04.156 --rc geninfo_all_blocks=1 00:10:04.156 --rc geninfo_unexecuted_blocks=1 00:10:04.156 00:10:04.156 ' 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:04.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.156 --rc genhtml_branch_coverage=1 00:10:04.156 --rc genhtml_function_coverage=1 00:10:04.156 --rc genhtml_legend=1 00:10:04.156 --rc geninfo_all_blocks=1 00:10:04.156 --rc geninfo_unexecuted_blocks=1 00:10:04.156 00:10:04.156 ' 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:04.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.156 --rc genhtml_branch_coverage=1 00:10:04.156 --rc genhtml_function_coverage=1 00:10:04.156 --rc genhtml_legend=1 00:10:04.156 --rc geninfo_all_blocks=1 00:10:04.156 --rc geninfo_unexecuted_blocks=1 00:10:04.156 00:10:04.156 ' 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:04.156 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:04.157 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:04.157 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.157 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:04.157 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:04.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:04.157 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:04.157 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:04.157 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:04.157 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:04.157 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:04.157 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:04.157 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:04.157 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:04.157 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:04.157 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:04.157 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:04.157 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:04.157 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:04.157 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.157 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:04.157 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.157 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:04.157 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:04.157 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:10:04.157 01:20:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:06.688 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:06.688 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:10:06.688 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:06.688 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:06.688 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:06.689 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:06.689 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:06.689 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.689 01:20:51 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:06.689 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:06.689 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:06.690 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:06.690 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:06.690 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:06.690 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:06.690 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:10:06.690 00:10:06.690 --- 10.0.0.2 ping statistics --- 00:10:06.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.690 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:10:06.690 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:06.690 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:06.690 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:10:06.690 00:10:06.690 --- 10.0.0.1 ping statistics --- 00:10:06.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.690 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:10:06.690 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:06.690 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:10:06.690 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:06.690 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:06.690 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:06.690 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:06.690 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:06.690 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:06.690 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:06.690 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:06.690 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:06.690 only one NIC for nvmf test 00:10:06.690 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:06.690 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:06.690 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:06.690 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:06.690 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
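For readability, the nvmf_tcp_init steps traced above reduce to roughly the following shell sequence. The interface names (cvl_0_0, cvl_0_1), namespace name (cvl_0_0_ns_spdk), and addresses are simply the values used in this run, and the authoritative logic lives in test/nvmf/common.sh, so treat this as a sketch rather than the exact implementation:

  # move the target-side port into its own namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # allow NVMe/TCP traffic in, tagged so teardown can strip the rule later
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # sanity-check both directions before the test proper starts
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmftestfini path continuing below is the mirror image: unload nvme-tcp/nvme-fabrics/nvme-keyring, restore iptables minus the SPDK_NVMF-tagged rules, flush the interface addresses, and remove the namespace.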
00:10:06.690 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:06.690 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:06.690 rmmod nvme_tcp 00:10:06.690 rmmod nvme_fabrics 00:10:06.690 rmmod nvme_keyring 00:10:06.690 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:06.690 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:06.690 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:06.690 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:10:06.690 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:06.690 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:06.690 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:06.690 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:06.690 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:10:06.690 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:06.690 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:10:06.690 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:06.690 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:06.690 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.690 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:06.690 01:20:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.648 01:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:08.648 01:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:08.648 01:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:08.648 01:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:08.648 01:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:08.648 01:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:08.648 01:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:08.648 01:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:08.648 01:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:08.648 01:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:08.648 01:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:08.648 01:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:10:08.648 01:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:10:08.648 01:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:08.648 01:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:08.648 01:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:08.648 01:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:08.648 01:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:10:08.648 01:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:08.648 01:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:10:08.648 01:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:08.648 01:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:08.648 01:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.648 01:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:08.648 01:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.648 01:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:08.648 00:10:08.648 real 0m4.490s 00:10:08.648 user 0m0.963s 00:10:08.648 sys 0m1.545s 00:10:08.648 01:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:08.648 01:20:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:08.648 ************************************ 00:10:08.648 END TEST nvmf_target_multipath 00:10:08.648 ************************************ 00:10:08.648 01:20:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:08.648 01:20:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:08.648 01:20:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:08.648 01:20:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:08.648 ************************************ 00:10:08.648 START TEST nvmf_zcopy 00:10:08.648 ************************************ 00:10:08.648 01:20:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:08.648 * Looking for test storage... 
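The target tests in this run (queue_depth.sh, multipath.sh, zcopy.sh) share the same skeleton: each script is launched through run_test, sources the shared nvmf helpers, brings the TCP test topology up, and relies on an exit trap for cleanup. A rough sketch, with paths and helper names taken from the traces and the per-test body elided:

  # shared shape of the target test scripts
  source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nvmftestinit    # builds the netns/IP/iptables topology traced below and traps nvmftestfini on EXIT
  # ... per-test body, e.g. the queue-depth run above: attach the target over TCP
  #     with bdev_nvme_attach_controller, then drive it via bdevperf.py perform_tests ...
  nvmftestfini    # unloads nvme-tcp, restores iptables, removes the namespace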
00:10:08.648 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:08.648 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:08.648 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:10:08.648 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:08.648 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:08.648 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:08.648 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:08.648 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:08.648 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:08.648 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:08.648 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:10:08.648 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:08.648 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:08.648 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:08.648 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:08.648 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:08.648 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:08.648 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:08.648 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:08.648 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:08.648 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:08.648 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:08.648 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:08.648 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:08.648 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:08.648 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:08.648 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:08.648 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:08.648 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:08.648 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:08.648 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:08.648 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:08.648 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:08.648 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:08.648 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:08.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.648 --rc genhtml_branch_coverage=1 00:10:08.648 --rc genhtml_function_coverage=1 00:10:08.648 --rc genhtml_legend=1 00:10:08.648 --rc geninfo_all_blocks=1 00:10:08.648 --rc geninfo_unexecuted_blocks=1 00:10:08.648 00:10:08.648 ' 00:10:08.648 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:08.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.648 --rc genhtml_branch_coverage=1 00:10:08.648 --rc genhtml_function_coverage=1 00:10:08.648 --rc genhtml_legend=1 00:10:08.648 --rc geninfo_all_blocks=1 00:10:08.649 --rc geninfo_unexecuted_blocks=1 00:10:08.649 00:10:08.649 ' 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:08.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.649 --rc genhtml_branch_coverage=1 00:10:08.649 --rc genhtml_function_coverage=1 00:10:08.649 --rc genhtml_legend=1 00:10:08.649 --rc geninfo_all_blocks=1 00:10:08.649 --rc geninfo_unexecuted_blocks=1 00:10:08.649 00:10:08.649 ' 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:08.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.649 --rc genhtml_branch_coverage=1 00:10:08.649 --rc genhtml_function_coverage=1 00:10:08.649 --rc genhtml_legend=1 00:10:08.649 --rc geninfo_all_blocks=1 00:10:08.649 --rc geninfo_unexecuted_blocks=1 00:10:08.649 00:10:08.649 ' 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:08.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:10:08.649 01:20:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:11.180 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:11.181 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:11.181 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:11.181 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:11.181 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:11.181 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:11.181 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.334 ms 00:10:11.181 00:10:11.181 --- 10.0.0.2 ping statistics --- 00:10:11.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.181 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:11.181 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:11.181 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:10:11.181 00:10:11.181 --- 10.0.0.1 ping statistics --- 00:10:11.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.181 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:11.181 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=1508906 00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 1508906 00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 1508906 ']' 00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:11.182 [2024-10-13 01:20:56.394194] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
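The trace above moves one ice port (cvl_0_0) into a dedicated network namespace, addresses both ends, opens TCP port 4420 through iptables, and verifies reachability in both directions with ping before nvmf_tgt is launched inside that namespace. A condensed sketch of that bring-up, with the interface names and addresses taken directly from the trace (this is a readable summary of what nvmf/common.sh does here, not a verbatim copy of it):

# Sketch of the NVMe/TCP test-network bring-up traced above.
# Assumes two ports named cvl_0_0 / cvl_0_1 already exist on the host.
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                 # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2                              # root namespace -> target namespace
ip netns exec "$NS" ping -c 1 10.0.0.1          # target namespace -> root namespace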
00:10:11.182 [2024-10-13 01:20:56.394294] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:11.182 [2024-10-13 01:20:56.462980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.182 [2024-10-13 01:20:56.509716] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:11.182 [2024-10-13 01:20:56.509798] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:11.182 [2024-10-13 01:20:56.509811] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:11.182 [2024-10-13 01:20:56.509837] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:11.182 [2024-10-13 01:20:56.509846] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:11.182 [2024-10-13 01:20:56.510450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:11.182 [2024-10-13 01:20:56.657839] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:11.182 [2024-10-13 01:20:56.674078] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:11.182 malloc0 00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:10:11.182 { 00:10:11.182 "params": { 00:10:11.182 "name": "Nvme$subsystem", 00:10:11.182 "trtype": "$TEST_TRANSPORT", 00:10:11.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:11.182 "adrfam": "ipv4", 00:10:11.182 "trsvcid": "$NVMF_PORT", 00:10:11.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:11.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:11.182 "hdgst": ${hdgst:-false}, 00:10:11.182 "ddgst": ${ddgst:-false} 00:10:11.182 }, 00:10:11.182 "method": "bdev_nvme_attach_controller" 00:10:11.182 } 00:10:11.182 EOF 00:10:11.182 )") 00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
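With the target listening on 10.0.0.2:4420, the test provisions it over RPC and then points bdevperf at it using the JSON document produced by gen_nvmf_target_json (the trace feeds it over /dev/fd/62 via process substitution; the rendered fragment is printed just below). A hedged sketch of the same provisioning and invocation: the RPCs are exactly the ones traced above, while the outer "subsystems"/"config" wrapper in the JSON is an assumption, since the trace only shows the bdev_nvme_attach_controller entry.

# Provision the zcopy target (rpc_cmd is the test helper that forwards to SPDK's JSON-RPC socket).
rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc_cmd bdev_malloc_create 32 4096 -b malloc0
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

# Simplified stand-in for the generated bdevperf config (outer wrapper assumed).
cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false, "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# Drive 10 s of verify I/O at 8 KiB, queue depth 128, as in the first run below.
./build/examples/bdevperf --json /tmp/bdevperf_nvme.json -t 10 -q 128 -w verify -o 8192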
00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:10:11.182 01:20:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:10:11.182 "params": { 00:10:11.182 "name": "Nvme1", 00:10:11.182 "trtype": "tcp", 00:10:11.182 "traddr": "10.0.0.2", 00:10:11.182 "adrfam": "ipv4", 00:10:11.182 "trsvcid": "4420", 00:10:11.182 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:11.182 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:11.182 "hdgst": false, 00:10:11.182 "ddgst": false 00:10:11.182 }, 00:10:11.182 "method": "bdev_nvme_attach_controller" 00:10:11.182 }' 00:10:11.440 [2024-10-13 01:20:56.765095] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:10:11.440 [2024-10-13 01:20:56.765184] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1509051 ] 00:10:11.440 [2024-10-13 01:20:56.833171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.440 [2024-10-13 01:20:56.884274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.698 Running I/O for 10 seconds... 00:10:13.563 5447.00 IOPS, 42.55 MiB/s [2024-10-12T23:21:00.515Z] 5518.50 IOPS, 43.11 MiB/s [2024-10-12T23:21:01.447Z] 5567.67 IOPS, 43.50 MiB/s [2024-10-12T23:21:02.381Z] 5562.50 IOPS, 43.46 MiB/s [2024-10-12T23:21:03.316Z] 5560.40 IOPS, 43.44 MiB/s [2024-10-12T23:21:04.249Z] 5567.50 IOPS, 43.50 MiB/s [2024-10-12T23:21:05.183Z] 5573.43 IOPS, 43.54 MiB/s [2024-10-12T23:21:06.554Z] 5586.50 IOPS, 43.64 MiB/s [2024-10-12T23:21:07.489Z] 5588.56 IOPS, 43.66 MiB/s [2024-10-12T23:21:07.489Z] 5579.40 IOPS, 43.59 MiB/s 00:10:21.911 Latency(us) 00:10:21.911 [2024-10-12T23:21:07.489Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:21.911 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:21.911 Verification LBA range: start 0x0 length 0x1000 00:10:21.911 Nvme1n1 : 10.01 5583.38 43.62 0.00 0.00 22863.94 3325.35 31457.28 00:10:21.911 [2024-10-12T23:21:07.489Z] =================================================================================================================== 00:10:21.911 [2024-10-12T23:21:07.489Z] Total : 5583.38 43.62 0.00 0.00 22863.94 3325.35 31457.28 00:10:21.911 01:21:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1510434 00:10:21.911 01:21:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:21.911 01:21:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:21.911 01:21:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:21.911 01:21:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:21.911 01:21:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:10:21.911 01:21:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:10:21.911 01:21:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:10:21.911 01:21:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:10:21.911 { 00:10:21.911 "params": { 00:10:21.911 "name": 
"Nvme$subsystem", 00:10:21.911 "trtype": "$TEST_TRANSPORT", 00:10:21.911 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:21.911 "adrfam": "ipv4", 00:10:21.911 "trsvcid": "$NVMF_PORT", 00:10:21.911 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:21.911 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:21.911 "hdgst": ${hdgst:-false}, 00:10:21.911 "ddgst": ${ddgst:-false} 00:10:21.911 }, 00:10:21.911 "method": "bdev_nvme_attach_controller" 00:10:21.911 } 00:10:21.911 EOF 00:10:21.911 )") 00:10:21.911 01:21:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:10:21.911 [2024-10-13 01:21:07.348905] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.911 [2024-10-13 01:21:07.348947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.911 01:21:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:10:21.911 01:21:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:10:21.911 01:21:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:10:21.911 "params": { 00:10:21.911 "name": "Nvme1", 00:10:21.911 "trtype": "tcp", 00:10:21.911 "traddr": "10.0.0.2", 00:10:21.911 "adrfam": "ipv4", 00:10:21.911 "trsvcid": "4420", 00:10:21.911 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:21.911 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:21.911 "hdgst": false, 00:10:21.911 "ddgst": false 00:10:21.911 }, 00:10:21.911 "method": "bdev_nvme_attach_controller" 00:10:21.911 }' 00:10:21.911 [2024-10-13 01:21:07.356863] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.911 [2024-10-13 01:21:07.356887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.911 [2024-10-13 01:21:07.364882] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.911 [2024-10-13 01:21:07.364903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.911 [2024-10-13 01:21:07.372901] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.911 [2024-10-13 01:21:07.372921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.911 [2024-10-13 01:21:07.380922] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.911 [2024-10-13 01:21:07.380943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.911 [2024-10-13 01:21:07.388933] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.911 [2024-10-13 01:21:07.388963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.911 [2024-10-13 01:21:07.390629] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
00:10:21.911 [2024-10-13 01:21:07.390712] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1510434 ] 00:10:21.911 [2024-10-13 01:21:07.396955] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.911 [2024-10-13 01:21:07.396976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.911 [2024-10-13 01:21:07.404977] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.911 [2024-10-13 01:21:07.404998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.911 [2024-10-13 01:21:07.412998] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.911 [2024-10-13 01:21:07.413028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.911 [2024-10-13 01:21:07.421047] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.911 [2024-10-13 01:21:07.421072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.911 [2024-10-13 01:21:07.429061] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.911 [2024-10-13 01:21:07.429087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.911 [2024-10-13 01:21:07.437084] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.911 [2024-10-13 01:21:07.437109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.911 [2024-10-13 01:21:07.445106] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.911 [2024-10-13 01:21:07.445131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.911 [2024-10-13 01:21:07.453130] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.911 [2024-10-13 01:21:07.453156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.911 [2024-10-13 01:21:07.458402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.911 [2024-10-13 01:21:07.461152] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.911 [2024-10-13 01:21:07.461178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.911 [2024-10-13 01:21:07.469205] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.911 [2024-10-13 01:21:07.469246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.911 [2024-10-13 01:21:07.477208] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.911 [2024-10-13 01:21:07.477240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.911 [2024-10-13 01:21:07.485217] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.911 [2024-10-13 01:21:07.485242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.170 [2024-10-13 01:21:07.493238] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.170 [2024-10-13 01:21:07.493263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:10:22.170 [2024-10-13 01:21:07.501260] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.170 [2024-10-13 01:21:07.501284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.170 [2024-10-13 01:21:07.509282] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.170 [2024-10-13 01:21:07.509307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.170 [2024-10-13 01:21:07.509527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.170 [2024-10-13 01:21:07.517303] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.170 [2024-10-13 01:21:07.517328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.170 [2024-10-13 01:21:07.525339] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.170 [2024-10-13 01:21:07.525370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.170 [2024-10-13 01:21:07.533372] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.170 [2024-10-13 01:21:07.533410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.170 [2024-10-13 01:21:07.541392] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.170 [2024-10-13 01:21:07.541433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.170 [2024-10-13 01:21:07.549413] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.170 [2024-10-13 01:21:07.549451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.170 [2024-10-13 01:21:07.557438] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.170 [2024-10-13 01:21:07.557486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.170 [2024-10-13 01:21:07.565463] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.170 [2024-10-13 01:21:07.565522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.170 [2024-10-13 01:21:07.573494] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.170 [2024-10-13 01:21:07.573545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.170 [2024-10-13 01:21:07.581491] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.170 [2024-10-13 01:21:07.581530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.170 [2024-10-13 01:21:07.589545] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.170 [2024-10-13 01:21:07.589578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.170 [2024-10-13 01:21:07.597569] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.170 [2024-10-13 01:21:07.597602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.170 [2024-10-13 01:21:07.605590] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.170 [2024-10-13 01:21:07.605634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.170 [2024-10-13 
01:21:07.613582] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.170 [2024-10-13 01:21:07.613604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.170 [2024-10-13 01:21:07.621599] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.170 [2024-10-13 01:21:07.621621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.170 [2024-10-13 01:21:07.629621] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.170 [2024-10-13 01:21:07.629647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.170 [2024-10-13 01:21:07.637661] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.170 [2024-10-13 01:21:07.637686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.170 [2024-10-13 01:21:07.645665] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.170 [2024-10-13 01:21:07.645690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.170 [2024-10-13 01:21:07.653685] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.170 [2024-10-13 01:21:07.653709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.170 [2024-10-13 01:21:07.661706] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.170 [2024-10-13 01:21:07.661728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.170 [2024-10-13 01:21:07.669729] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.170 [2024-10-13 01:21:07.669750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.170 [2024-10-13 01:21:07.677766] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.170 [2024-10-13 01:21:07.677787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.170 [2024-10-13 01:21:07.685773] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.170 [2024-10-13 01:21:07.685794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.170 [2024-10-13 01:21:07.693858] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.170 [2024-10-13 01:21:07.693882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.170 [2024-10-13 01:21:07.701850] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.170 [2024-10-13 01:21:07.701878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.170 [2024-10-13 01:21:07.709870] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.170 [2024-10-13 01:21:07.709892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.170 [2024-10-13 01:21:07.717893] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.170 [2024-10-13 01:21:07.717914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.170 [2024-10-13 01:21:07.725915] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.170 [2024-10-13 01:21:07.725936] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.170 [2024-10-13 01:21:07.733938] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.170 [2024-10-13 01:21:07.733958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.170 [2024-10-13 01:21:07.741959] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.170 [2024-10-13 01:21:07.741979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.428 [2024-10-13 01:21:07.749976] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.428 [2024-10-13 01:21:07.750004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.428 [2024-10-13 01:21:07.758005] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.429 [2024-10-13 01:21:07.758026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.429 [2024-10-13 01:21:07.766017] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.429 [2024-10-13 01:21:07.766042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.429 [2024-10-13 01:21:07.774039] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.429 [2024-10-13 01:21:07.774063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.429 [2024-10-13 01:21:07.782063] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.429 [2024-10-13 01:21:07.782088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.429 [2024-10-13 01:21:07.790132] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.429 [2024-10-13 01:21:07.790156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.429 [2024-10-13 01:21:07.798110] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.429 [2024-10-13 01:21:07.798136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.429 [2024-10-13 01:21:07.806133] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.429 [2024-10-13 01:21:07.806159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.429 [2024-10-13 01:21:07.814157] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.429 [2024-10-13 01:21:07.814183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.429 [2024-10-13 01:21:07.822177] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.429 [2024-10-13 01:21:07.822202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.429 [2024-10-13 01:21:07.830199] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.429 [2024-10-13 01:21:07.830224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.429 [2024-10-13 01:21:07.838223] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.429 [2024-10-13 01:21:07.838255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.429 [2024-10-13 01:21:07.846244] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.429 [2024-10-13 01:21:07.846269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.429 [2024-10-13 01:21:07.887669] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.429 [2024-10-13 01:21:07.887698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.429 [2024-10-13 01:21:07.894387] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.429 [2024-10-13 01:21:07.894415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.429 Running I/O for 5 seconds... 00:10:22.429 [2024-10-13 01:21:07.902404] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.429 [2024-10-13 01:21:07.902429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.429 [2024-10-13 01:21:07.917018] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.429 [2024-10-13 01:21:07.917050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.429 [2024-10-13 01:21:07.929128] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.429 [2024-10-13 01:21:07.929160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.429 [2024-10-13 01:21:07.941031] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.429 [2024-10-13 01:21:07.941063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.429 [2024-10-13 01:21:07.953063] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.429 [2024-10-13 01:21:07.953106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.429 [2024-10-13 01:21:07.964937] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.429 [2024-10-13 01:21:07.964969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.429 [2024-10-13 01:21:07.976887] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.429 [2024-10-13 01:21:07.976918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.429 [2024-10-13 01:21:07.988464] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.429 [2024-10-13 01:21:07.988521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.429 [2024-10-13 01:21:08.000168] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.429 [2024-10-13 01:21:08.000199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.686 [2024-10-13 01:21:08.012467] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.686 [2024-10-13 01:21:08.012530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.686 [2024-10-13 01:21:08.024114] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.686 [2024-10-13 01:21:08.024146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.686 [2024-10-13 01:21:08.036091] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.686 
[2024-10-13 01:21:08.036122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.686 [2024-10-13 01:21:08.048033] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.686 [2024-10-13 01:21:08.048064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.686 [2024-10-13 01:21:08.059895] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.686 [2024-10-13 01:21:08.059926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.687 [2024-10-13 01:21:08.071593] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.687 [2024-10-13 01:21:08.071621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.687 [2024-10-13 01:21:08.083353] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.687 [2024-10-13 01:21:08.083396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.687 [2024-10-13 01:21:08.095222] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.687 [2024-10-13 01:21:08.095252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.687 [2024-10-13 01:21:08.106973] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.687 [2024-10-13 01:21:08.107003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.687 [2024-10-13 01:21:08.118746] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.687 [2024-10-13 01:21:08.118792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.687 [2024-10-13 01:21:08.130222] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.687 [2024-10-13 01:21:08.130254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.687 [2024-10-13 01:21:08.141296] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.687 [2024-10-13 01:21:08.141327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.687 [2024-10-13 01:21:08.152911] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.687 [2024-10-13 01:21:08.152944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.687 [2024-10-13 01:21:08.165101] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.687 [2024-10-13 01:21:08.165131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.687 [2024-10-13 01:21:08.178681] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.687 [2024-10-13 01:21:08.178709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.687 [2024-10-13 01:21:08.189683] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.687 [2024-10-13 01:21:08.189710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.687 [2024-10-13 01:21:08.201378] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.687 [2024-10-13 01:21:08.201408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.687 [2024-10-13 01:21:08.214890] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.687 [2024-10-13 01:21:08.214921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.687 [2024-10-13 01:21:08.225859] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.687 [2024-10-13 01:21:08.225890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.687 [2024-10-13 01:21:08.237725] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.687 [2024-10-13 01:21:08.237768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.687 [2024-10-13 01:21:08.251166] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.687 [2024-10-13 01:21:08.251196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.687 [2024-10-13 01:21:08.262060] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.687 [2024-10-13 01:21:08.262091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.945 [2024-10-13 01:21:08.273423] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.945 [2024-10-13 01:21:08.273454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.945 [2024-10-13 01:21:08.284548] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.945 [2024-10-13 01:21:08.284577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.945 [2024-10-13 01:21:08.296559] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.945 [2024-10-13 01:21:08.296587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.945 [2024-10-13 01:21:08.308272] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.945 [2024-10-13 01:21:08.308315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.945 [2024-10-13 01:21:08.321610] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.945 [2024-10-13 01:21:08.321638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.945 [2024-10-13 01:21:08.331900] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.945 [2024-10-13 01:21:08.331931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.945 [2024-10-13 01:21:08.344292] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.945 [2024-10-13 01:21:08.344324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.945 [2024-10-13 01:21:08.355921] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.945 [2024-10-13 01:21:08.355951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.945 [2024-10-13 01:21:08.368035] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.945 [2024-10-13 01:21:08.368066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.945 [2024-10-13 01:21:08.379624] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.945 [2024-10-13 01:21:08.379652] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.945 [2024-10-13 01:21:08.392953] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.945 [2024-10-13 01:21:08.392984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.945 [2024-10-13 01:21:08.404211] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.945 [2024-10-13 01:21:08.404241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.945 [2024-10-13 01:21:08.415989] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.945 [2024-10-13 01:21:08.416020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.945 [2024-10-13 01:21:08.427037] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.945 [2024-10-13 01:21:08.427069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.945 [2024-10-13 01:21:08.438228] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.945 [2024-10-13 01:21:08.438256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.945 [2024-10-13 01:21:08.448700] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.945 [2024-10-13 01:21:08.448728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.945 [2024-10-13 01:21:08.459924] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.945 [2024-10-13 01:21:08.459952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.945 [2024-10-13 01:21:08.470677] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.945 [2024-10-13 01:21:08.470705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.945 [2024-10-13 01:21:08.481723] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.945 [2024-10-13 01:21:08.481751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.945 [2024-10-13 01:21:08.494534] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.945 [2024-10-13 01:21:08.494562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.945 [2024-10-13 01:21:08.504884] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.945 [2024-10-13 01:21:08.504913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.945 [2024-10-13 01:21:08.515498] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.945 [2024-10-13 01:21:08.515526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.204 [2024-10-13 01:21:08.526540] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.204 [2024-10-13 01:21:08.526567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.204 [2024-10-13 01:21:08.537294] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.204 [2024-10-13 01:21:08.537321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.204 [2024-10-13 01:21:08.549869] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.204 [2024-10-13 01:21:08.549897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.204 [2024-10-13 01:21:08.560466] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.204 [2024-10-13 01:21:08.560502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.204 [2024-10-13 01:21:08.571403] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.204 [2024-10-13 01:21:08.571431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.204 [2024-10-13 01:21:08.582578] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.204 [2024-10-13 01:21:08.582616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.204 [2024-10-13 01:21:08.593152] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.204 [2024-10-13 01:21:08.593180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.204 [2024-10-13 01:21:08.605688] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.204 [2024-10-13 01:21:08.605717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.204 [2024-10-13 01:21:08.615822] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.204 [2024-10-13 01:21:08.615849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.204 [2024-10-13 01:21:08.626643] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.204 [2024-10-13 01:21:08.626671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.204 [2024-10-13 01:21:08.639668] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.204 [2024-10-13 01:21:08.639696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.204 [2024-10-13 01:21:08.649922] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.204 [2024-10-13 01:21:08.649950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.204 [2024-10-13 01:21:08.660678] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.204 [2024-10-13 01:21:08.660706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.204 [2024-10-13 01:21:08.672320] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.204 [2024-10-13 01:21:08.672352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.204 [2024-10-13 01:21:08.684145] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.204 [2024-10-13 01:21:08.684176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.204 [2024-10-13 01:21:08.696026] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.204 [2024-10-13 01:21:08.696056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.204 [2024-10-13 01:21:08.708039] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.204 [2024-10-13 01:21:08.708070] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.204 [2024-10-13 01:21:08.719806] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.204 [2024-10-13 01:21:08.719838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.204 [2024-10-13 01:21:08.731619] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.204 [2024-10-13 01:21:08.731649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.204 [2024-10-13 01:21:08.745207] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.204 [2024-10-13 01:21:08.745238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.204 [2024-10-13 01:21:08.756530] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.204 [2024-10-13 01:21:08.756558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.204 [2024-10-13 01:21:08.768057] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.204 [2024-10-13 01:21:08.768088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.205 [2024-10-13 01:21:08.779573] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.205 [2024-10-13 01:21:08.779605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.463 [2024-10-13 01:21:08.791165] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.463 [2024-10-13 01:21:08.791197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.463 [2024-10-13 01:21:08.803126] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.463 [2024-10-13 01:21:08.803157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.463 [2024-10-13 01:21:08.815046] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.463 [2024-10-13 01:21:08.815077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.463 [2024-10-13 01:21:08.827342] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.463 [2024-10-13 01:21:08.827372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.463 [2024-10-13 01:21:08.839690] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.463 [2024-10-13 01:21:08.839718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.463 [2024-10-13 01:21:08.851554] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.463 [2024-10-13 01:21:08.851584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.463 [2024-10-13 01:21:08.863431] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.463 [2024-10-13 01:21:08.863463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.463 [2024-10-13 01:21:08.875542] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.463 [2024-10-13 01:21:08.875571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.463 [2024-10-13 01:21:08.886970] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.463 [2024-10-13 01:21:08.887017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.463 [2024-10-13 01:21:08.898542] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.463 [2024-10-13 01:21:08.898585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.463 10964.00 IOPS, 85.66 MiB/s [2024-10-12T23:21:09.041Z] [2024-10-13 01:21:08.909951] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.463 [2024-10-13 01:21:08.909981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.463 [2024-10-13 01:21:08.923142] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.463 [2024-10-13 01:21:08.923174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.463 [2024-10-13 01:21:08.934014] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.463 [2024-10-13 01:21:08.934046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.463 [2024-10-13 01:21:08.945639] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.463 [2024-10-13 01:21:08.945667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.463 [2024-10-13 01:21:08.957109] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.463 [2024-10-13 01:21:08.957151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.463 [2024-10-13 01:21:08.968696] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.463 [2024-10-13 01:21:08.968740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.463 [2024-10-13 01:21:08.982101] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.463 [2024-10-13 01:21:08.982133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.463 [2024-10-13 01:21:08.993067] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.463 [2024-10-13 01:21:08.993099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.463 [2024-10-13 01:21:09.004681] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.463 [2024-10-13 01:21:09.004711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.463 [2024-10-13 01:21:09.016139] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.463 [2024-10-13 01:21:09.016170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.463 [2024-10-13 01:21:09.027866] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.463 [2024-10-13 01:21:09.027898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.463 [2024-10-13 01:21:09.039371] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.463 [2024-10-13 01:21:09.039399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.722 [2024-10-13 01:21:09.051186] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:23.722 [2024-10-13 01:21:09.051217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.722 [2024-10-13 01:21:09.062955] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.722 [2024-10-13 01:21:09.062986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.722 [2024-10-13 01:21:09.074418] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.722 [2024-10-13 01:21:09.074449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.722 [2024-10-13 01:21:09.085693] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.722 [2024-10-13 01:21:09.085732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.722 [2024-10-13 01:21:09.097584] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.722 [2024-10-13 01:21:09.097612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.722 [2024-10-13 01:21:09.109398] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.722 [2024-10-13 01:21:09.109429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.722 [2024-10-13 01:21:09.122972] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.722 [2024-10-13 01:21:09.123003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.722 [2024-10-13 01:21:09.134488] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.722 [2024-10-13 01:21:09.134531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.722 [2024-10-13 01:21:09.146024] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.722 [2024-10-13 01:21:09.146055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.722 [2024-10-13 01:21:09.157706] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.722 [2024-10-13 01:21:09.157734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.722 [2024-10-13 01:21:09.168822] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.722 [2024-10-13 01:21:09.168852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.722 [2024-10-13 01:21:09.180194] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.722 [2024-10-13 01:21:09.180235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.722 [2024-10-13 01:21:09.191902] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.722 [2024-10-13 01:21:09.191932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.722 [2024-10-13 01:21:09.203694] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.722 [2024-10-13 01:21:09.203722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.722 [2024-10-13 01:21:09.215011] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.722 [2024-10-13 01:21:09.215041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.722 [2024-10-13 01:21:09.226841] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.722 [2024-10-13 01:21:09.226872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.722 [2024-10-13 01:21:09.238551] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.722 [2024-10-13 01:21:09.238578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.722 [2024-10-13 01:21:09.250661] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.722 [2024-10-13 01:21:09.250689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.722 [2024-10-13 01:21:09.262262] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.722 [2024-10-13 01:21:09.262292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.722 [2024-10-13 01:21:09.274281] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.722 [2024-10-13 01:21:09.274313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.722 [2024-10-13 01:21:09.286115] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.722 [2024-10-13 01:21:09.286146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.722 [2024-10-13 01:21:09.298047] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.722 [2024-10-13 01:21:09.298079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.980 [2024-10-13 01:21:09.309684] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.980 [2024-10-13 01:21:09.309711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.980 [2024-10-13 01:21:09.321780] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.980 [2024-10-13 01:21:09.321812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.980 [2024-10-13 01:21:09.335309] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.980 [2024-10-13 01:21:09.335341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.980 [2024-10-13 01:21:09.346376] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.980 [2024-10-13 01:21:09.346407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.980 [2024-10-13 01:21:09.357885] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.980 [2024-10-13 01:21:09.357916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.980 [2024-10-13 01:21:09.369356] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.980 [2024-10-13 01:21:09.369388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.980 [2024-10-13 01:21:09.380575] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.980 [2024-10-13 01:21:09.380603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.980 [2024-10-13 01:21:09.391705] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.980 [2024-10-13 01:21:09.391733] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.980 [2024-10-13 01:21:09.404951] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.980 [2024-10-13 01:21:09.404990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.980 [2024-10-13 01:21:09.415333] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.980 [2024-10-13 01:21:09.415364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.980 [2024-10-13 01:21:09.427251] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.980 [2024-10-13 01:21:09.427282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.980 [2024-10-13 01:21:09.438626] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.980 [2024-10-13 01:21:09.438654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.980 [2024-10-13 01:21:09.450226] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.980 [2024-10-13 01:21:09.450257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.980 [2024-10-13 01:21:09.462070] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.980 [2024-10-13 01:21:09.462101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.980 [2024-10-13 01:21:09.473667] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.980 [2024-10-13 01:21:09.473698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.980 [2024-10-13 01:21:09.485066] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.980 [2024-10-13 01:21:09.485097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.980 [2024-10-13 01:21:09.496728] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.980 [2024-10-13 01:21:09.496756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.980 [2024-10-13 01:21:09.507866] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.980 [2024-10-13 01:21:09.507897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.980 [2024-10-13 01:21:09.519230] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.980 [2024-10-13 01:21:09.519260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.980 [2024-10-13 01:21:09.530839] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.980 [2024-10-13 01:21:09.530870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.980 [2024-10-13 01:21:09.542489] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.980 [2024-10-13 01:21:09.542536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.980 [2024-10-13 01:21:09.554162] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.980 [2024-10-13 01:21:09.554192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.238 [2024-10-13 01:21:09.565817] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.238 [2024-10-13 01:21:09.565848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.238 [2024-10-13 01:21:09.577261] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.238 [2024-10-13 01:21:09.577292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.238 [2024-10-13 01:21:09.588810] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.238 [2024-10-13 01:21:09.588853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.238 [2024-10-13 01:21:09.600388] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.238 [2024-10-13 01:21:09.600418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.238 [2024-10-13 01:21:09.612409] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.238 [2024-10-13 01:21:09.612439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.238 [2024-10-13 01:21:09.624273] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.238 [2024-10-13 01:21:09.624313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.238 [2024-10-13 01:21:09.635791] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.238 [2024-10-13 01:21:09.635822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.238 [2024-10-13 01:21:09.647226] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.238 [2024-10-13 01:21:09.647256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.238 [2024-10-13 01:21:09.658644] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.238 [2024-10-13 01:21:09.658672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.238 [2024-10-13 01:21:09.669823] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.238 [2024-10-13 01:21:09.669851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.238 [2024-10-13 01:21:09.682711] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.238 [2024-10-13 01:21:09.682740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.238 [2024-10-13 01:21:09.693589] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.238 [2024-10-13 01:21:09.693616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.238 [2024-10-13 01:21:09.704846] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.238 [2024-10-13 01:21:09.704877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.238 [2024-10-13 01:21:09.717007] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.238 [2024-10-13 01:21:09.717038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.238 [2024-10-13 01:21:09.728690] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.238 [2024-10-13 01:21:09.728718] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.238 [2024-10-13 01:21:09.740305] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.238 [2024-10-13 01:21:09.740336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.238 [2024-10-13 01:21:09.752165] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.238 [2024-10-13 01:21:09.752196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.238 [2024-10-13 01:21:09.763822] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.238 [2024-10-13 01:21:09.763852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.238 [2024-10-13 01:21:09.775758] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.238 [2024-10-13 01:21:09.775803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.238 [2024-10-13 01:21:09.787330] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.238 [2024-10-13 01:21:09.787361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.238 [2024-10-13 01:21:09.799416] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.239 [2024-10-13 01:21:09.799447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.239 [2024-10-13 01:21:09.811980] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.239 [2024-10-13 01:21:09.812009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.497 [2024-10-13 01:21:09.823524] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.497 [2024-10-13 01:21:09.823552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.497 [2024-10-13 01:21:09.834587] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.497 [2024-10-13 01:21:09.834615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.497 [2024-10-13 01:21:09.846255] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.497 [2024-10-13 01:21:09.846294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.497 [2024-10-13 01:21:09.858011] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.497 [2024-10-13 01:21:09.858041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.497 [2024-10-13 01:21:09.869748] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.497 [2024-10-13 01:21:09.869797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.497 [2024-10-13 01:21:09.881332] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.497 [2024-10-13 01:21:09.881363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.497 [2024-10-13 01:21:09.892990] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.497 [2024-10-13 01:21:09.893020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.497 [2024-10-13 01:21:09.904550] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.497 [2024-10-13 01:21:09.904578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.497 10951.00 IOPS, 85.55 MiB/s [2024-10-12T23:21:10.075Z] [2024-10-13 01:21:09.916169] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.497 [2024-10-13 01:21:09.916200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.497 [2024-10-13 01:21:09.928124] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.497 [2024-10-13 01:21:09.928154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.497 [2024-10-13 01:21:09.940220] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.497 [2024-10-13 01:21:09.940250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.497 [2024-10-13 01:21:09.951992] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.497 [2024-10-13 01:21:09.952023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.497 [2024-10-13 01:21:09.963697] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.497 [2024-10-13 01:21:09.963724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.497 [2024-10-13 01:21:09.975653] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.497 [2024-10-13 01:21:09.975680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.497 [2024-10-13 01:21:09.987163] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.497 [2024-10-13 01:21:09.987194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.497 [2024-10-13 01:21:09.998478] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.497 [2024-10-13 01:21:09.998523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.498 [2024-10-13 01:21:10.009643] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.498 [2024-10-13 01:21:10.009676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.498 [2024-10-13 01:21:10.021738] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.498 [2024-10-13 01:21:10.021774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.498 [2024-10-13 01:21:10.033578] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.498 [2024-10-13 01:21:10.033609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.498 [2024-10-13 01:21:10.045584] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.498 [2024-10-13 01:21:10.045614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.498 [2024-10-13 01:21:10.057864] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.498 [2024-10-13 01:21:10.057896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.498 [2024-10-13 01:21:10.069875] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:24.498 [2024-10-13 01:21:10.069906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.756 [2024-10-13 01:21:10.081693] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.756 [2024-10-13 01:21:10.081721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.756 [2024-10-13 01:21:10.093437] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.756 [2024-10-13 01:21:10.093467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.756 [2024-10-13 01:21:10.106876] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.756 [2024-10-13 01:21:10.106907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.756 [2024-10-13 01:21:10.117278] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.756 [2024-10-13 01:21:10.117309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.756 [2024-10-13 01:21:10.129612] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.756 [2024-10-13 01:21:10.129640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.756 [2024-10-13 01:21:10.141468] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.757 [2024-10-13 01:21:10.141509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.757 [2024-10-13 01:21:10.153334] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.757 [2024-10-13 01:21:10.153365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.757 [2024-10-13 01:21:10.164890] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.757 [2024-10-13 01:21:10.164921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.757 [2024-10-13 01:21:10.177070] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.757 [2024-10-13 01:21:10.177102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.757 [2024-10-13 01:21:10.189266] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.757 [2024-10-13 01:21:10.189298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.757 [2024-10-13 01:21:10.200953] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.757 [2024-10-13 01:21:10.200985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.757 [2024-10-13 01:21:10.212811] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.757 [2024-10-13 01:21:10.212842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.757 [2024-10-13 01:21:10.225144] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.757 [2024-10-13 01:21:10.225174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.757 [2024-10-13 01:21:10.237426] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.757 [2024-10-13 01:21:10.237458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.757 [2024-10-13 01:21:10.249179] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.757 [2024-10-13 01:21:10.249210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.757 [2024-10-13 01:21:10.263019] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.757 [2024-10-13 01:21:10.263051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.757 [2024-10-13 01:21:10.274525] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.757 [2024-10-13 01:21:10.274569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.757 [2024-10-13 01:21:10.286056] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.757 [2024-10-13 01:21:10.286087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.757 [2024-10-13 01:21:10.297612] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.757 [2024-10-13 01:21:10.297639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.757 [2024-10-13 01:21:10.308809] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.757 [2024-10-13 01:21:10.308841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.757 [2024-10-13 01:21:10.320233] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.757 [2024-10-13 01:21:10.320265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.757 [2024-10-13 01:21:10.331854] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.757 [2024-10-13 01:21:10.331885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.015 [2024-10-13 01:21:10.343828] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.015 [2024-10-13 01:21:10.343859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.015 [2024-10-13 01:21:10.355810] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.015 [2024-10-13 01:21:10.355840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.015 [2024-10-13 01:21:10.367533] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.015 [2024-10-13 01:21:10.367560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.015 [2024-10-13 01:21:10.379514] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.015 [2024-10-13 01:21:10.379558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.015 [2024-10-13 01:21:10.391631] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.015 [2024-10-13 01:21:10.391658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.015 [2024-10-13 01:21:10.403706] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.015 [2024-10-13 01:21:10.403734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.015 [2024-10-13 01:21:10.415587] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.015 [2024-10-13 01:21:10.415615] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.015 [2024-10-13 01:21:10.427303] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.015 [2024-10-13 01:21:10.427333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.015 [2024-10-13 01:21:10.439114] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.015 [2024-10-13 01:21:10.439144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.015 [2024-10-13 01:21:10.450963] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.015 [2024-10-13 01:21:10.450994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.015 [2024-10-13 01:21:10.463159] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.015 [2024-10-13 01:21:10.463189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.015 [2024-10-13 01:21:10.474814] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.015 [2024-10-13 01:21:10.474846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.015 [2024-10-13 01:21:10.486775] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.015 [2024-10-13 01:21:10.486821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.015 [2024-10-13 01:21:10.498374] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.015 [2024-10-13 01:21:10.498404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.015 [2024-10-13 01:21:10.509449] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.015 [2024-10-13 01:21:10.509498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.015 [2024-10-13 01:21:10.521568] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.015 [2024-10-13 01:21:10.521596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.015 [2024-10-13 01:21:10.533589] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.015 [2024-10-13 01:21:10.533621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.015 [2024-10-13 01:21:10.545448] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.015 [2024-10-13 01:21:10.545489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.015 [2024-10-13 01:21:10.557714] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.015 [2024-10-13 01:21:10.557741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.015 [2024-10-13 01:21:10.569372] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.015 [2024-10-13 01:21:10.569402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.015 [2024-10-13 01:21:10.581399] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.015 [2024-10-13 01:21:10.581429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.015 [2024-10-13 01:21:10.593407] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.015 [2024-10-13 01:21:10.593439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.274 [2024-10-13 01:21:10.605409] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.274 [2024-10-13 01:21:10.605440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.274 [2024-10-13 01:21:10.619213] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.274 [2024-10-13 01:21:10.619244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.274 [2024-10-13 01:21:10.630626] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.274 [2024-10-13 01:21:10.630654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.274 [2024-10-13 01:21:10.641843] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.274 [2024-10-13 01:21:10.641874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.274 [2024-10-13 01:21:10.653461] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.274 [2024-10-13 01:21:10.653503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.274 [2024-10-13 01:21:10.665635] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.274 [2024-10-13 01:21:10.665662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.274 [2024-10-13 01:21:10.677460] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.274 [2024-10-13 01:21:10.677501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.274 [2024-10-13 01:21:10.689142] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.274 [2024-10-13 01:21:10.689172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.274 [2024-10-13 01:21:10.702563] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.274 [2024-10-13 01:21:10.702590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.274 [2024-10-13 01:21:10.713533] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.274 [2024-10-13 01:21:10.713561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.274 [2024-10-13 01:21:10.725302] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.274 [2024-10-13 01:21:10.725332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.274 [2024-10-13 01:21:10.737074] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.274 [2024-10-13 01:21:10.737114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.274 [2024-10-13 01:21:10.748730] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.274 [2024-10-13 01:21:10.748757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.274 [2024-10-13 01:21:10.760841] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.274 [2024-10-13 01:21:10.760872] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.274 [2024-10-13 01:21:10.772699] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.274 [2024-10-13 01:21:10.772727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.274 [2024-10-13 01:21:10.784700] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.274 [2024-10-13 01:21:10.784727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.274 [2024-10-13 01:21:10.796466] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.274 [2024-10-13 01:21:10.796506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.274 [2024-10-13 01:21:10.808147] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.274 [2024-10-13 01:21:10.808178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.274 [2024-10-13 01:21:10.819805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.274 [2024-10-13 01:21:10.819837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.274 [2024-10-13 01:21:10.831284] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.274 [2024-10-13 01:21:10.831316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.274 [2024-10-13 01:21:10.843220] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.274 [2024-10-13 01:21:10.843250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.533 [2024-10-13 01:21:10.856732] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.533 [2024-10-13 01:21:10.856760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.533 [2024-10-13 01:21:10.867537] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.533 [2024-10-13 01:21:10.867565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.533 [2024-10-13 01:21:10.879677] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.533 [2024-10-13 01:21:10.879706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.533 [2024-10-13 01:21:10.891312] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.533 [2024-10-13 01:21:10.891355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.533 [2024-10-13 01:21:10.903611] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.533 [2024-10-13 01:21:10.903639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.533 10902.00 IOPS, 85.17 MiB/s [2024-10-12T23:21:11.111Z] [2024-10-13 01:21:10.915269] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.533 [2024-10-13 01:21:10.915300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.533 [2024-10-13 01:21:10.927112] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.533 [2024-10-13 01:21:10.927143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.533 [2024-10-13 
01:21:10.938383] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.533 [2024-10-13 01:21:10.938412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.533 [2024-10-13 01:21:10.950952] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.533 [2024-10-13 01:21:10.950980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.533 [2024-10-13 01:21:10.961235] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.533 [2024-10-13 01:21:10.961272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.533 [2024-10-13 01:21:10.972104] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.533 [2024-10-13 01:21:10.972132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.533 [2024-10-13 01:21:10.982956] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.533 [2024-10-13 01:21:10.982984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.533 [2024-10-13 01:21:10.993588] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.533 [2024-10-13 01:21:10.993616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.533 [2024-10-13 01:21:11.006046] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.533 [2024-10-13 01:21:11.006075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.533 [2024-10-13 01:21:11.016694] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.533 [2024-10-13 01:21:11.016721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.533 [2024-10-13 01:21:11.027810] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.533 [2024-10-13 01:21:11.027837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.533 [2024-10-13 01:21:11.040574] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.533 [2024-10-13 01:21:11.040602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.533 [2024-10-13 01:21:11.051112] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.533 [2024-10-13 01:21:11.051140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.533 [2024-10-13 01:21:11.062132] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.533 [2024-10-13 01:21:11.062160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.533 [2024-10-13 01:21:11.075108] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.533 [2024-10-13 01:21:11.075136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.533 [2024-10-13 01:21:11.085194] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.533 [2024-10-13 01:21:11.085222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.533 [2024-10-13 01:21:11.095597] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.533 [2024-10-13 01:21:11.095625] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.533 [2024-10-13 01:21:11.106201] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.533 [2024-10-13 01:21:11.106229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.791 [2024-10-13 01:21:11.116846] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.791 [2024-10-13 01:21:11.116874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.791 [2024-10-13 01:21:11.128605] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.791 [2024-10-13 01:21:11.128633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.791 [2024-10-13 01:21:11.140267] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.791 [2024-10-13 01:21:11.140298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.791 [2024-10-13 01:21:11.151906] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.791 [2024-10-13 01:21:11.151937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.791 [2024-10-13 01:21:11.163356] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.791 [2024-10-13 01:21:11.163384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.791 [2024-10-13 01:21:11.175186] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.791 [2024-10-13 01:21:11.175217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.791 [2024-10-13 01:21:11.187092] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.791 [2024-10-13 01:21:11.187124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.791 [2024-10-13 01:21:11.199082] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.791 [2024-10-13 01:21:11.199113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.791 [2024-10-13 01:21:11.210361] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.791 [2024-10-13 01:21:11.210391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.791 [2024-10-13 01:21:11.222246] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.791 [2024-10-13 01:21:11.222277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.791 [2024-10-13 01:21:11.233857] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.791 [2024-10-13 01:21:11.233900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.791 [2024-10-13 01:21:11.245714] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.791 [2024-10-13 01:21:11.245741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.791 [2024-10-13 01:21:11.257584] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.791 [2024-10-13 01:21:11.257613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.791 [2024-10-13 01:21:11.269759] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.791 [2024-10-13 01:21:11.269807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.791 [2024-10-13 01:21:11.281663] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.791 [2024-10-13 01:21:11.281692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.791 [2024-10-13 01:21:11.293458] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.792 [2024-10-13 01:21:11.293501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.792 [2024-10-13 01:21:11.305046] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.792 [2024-10-13 01:21:11.305077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.792 [2024-10-13 01:21:11.317282] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.792 [2024-10-13 01:21:11.317313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.792 [2024-10-13 01:21:11.329021] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.792 [2024-10-13 01:21:11.329052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.792 [2024-10-13 01:21:11.340710] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.792 [2024-10-13 01:21:11.340739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.792 [2024-10-13 01:21:11.353924] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.792 [2024-10-13 01:21:11.353955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.792 [2024-10-13 01:21:11.364707] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.792 [2024-10-13 01:21:11.364735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.050 [2024-10-13 01:21:11.376681] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.050 [2024-10-13 01:21:11.376710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.050 [2024-10-13 01:21:11.388386] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.050 [2024-10-13 01:21:11.388417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.050 [2024-10-13 01:21:11.400058] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.050 [2024-10-13 01:21:11.400090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.050 [2024-10-13 01:21:11.412210] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.050 [2024-10-13 01:21:11.412241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.050 [2024-10-13 01:21:11.423810] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.050 [2024-10-13 01:21:11.423841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.050 [2024-10-13 01:21:11.435121] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.050 [2024-10-13 01:21:11.435151] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.050 [2024-10-13 01:21:11.446968] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.050 [2024-10-13 01:21:11.446998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.050 [2024-10-13 01:21:11.458552] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.050 [2024-10-13 01:21:11.458579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.050 [2024-10-13 01:21:11.470050] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.050 [2024-10-13 01:21:11.470082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.050 [2024-10-13 01:21:11.481160] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.050 [2024-10-13 01:21:11.481190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.050 [2024-10-13 01:21:11.492639] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.050 [2024-10-13 01:21:11.492667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.050 [2024-10-13 01:21:11.504520] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.050 [2024-10-13 01:21:11.504547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.050 [2024-10-13 01:21:11.516070] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.050 [2024-10-13 01:21:11.516101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.050 [2024-10-13 01:21:11.527586] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.050 [2024-10-13 01:21:11.527614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.050 [2024-10-13 01:21:11.539319] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.050 [2024-10-13 01:21:11.539350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.050 [2024-10-13 01:21:11.550462] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.050 [2024-10-13 01:21:11.550504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.050 [2024-10-13 01:21:11.562184] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.050 [2024-10-13 01:21:11.562214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.050 [2024-10-13 01:21:11.574076] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.050 [2024-10-13 01:21:11.574107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.050 [2024-10-13 01:21:11.585592] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.050 [2024-10-13 01:21:11.585619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.050 [2024-10-13 01:21:11.597342] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.050 [2024-10-13 01:21:11.597373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.050 [2024-10-13 01:21:11.608851] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.050 [2024-10-13 01:21:11.608882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.050 [2024-10-13 01:21:11.620855] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.050 [2024-10-13 01:21:11.620887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.309 [2024-10-13 01:21:11.634807] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.309 [2024-10-13 01:21:11.634838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.309 [2024-10-13 01:21:11.645757] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.309 [2024-10-13 01:21:11.645804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.309 [2024-10-13 01:21:11.657295] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.309 [2024-10-13 01:21:11.657327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.309 [2024-10-13 01:21:11.671229] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.309 [2024-10-13 01:21:11.671260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.309 [2024-10-13 01:21:11.682141] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.309 [2024-10-13 01:21:11.682173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.309 [2024-10-13 01:21:11.694075] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.309 [2024-10-13 01:21:11.694106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.309 [2024-10-13 01:21:11.705681] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.309 [2024-10-13 01:21:11.705712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.309 [2024-10-13 01:21:11.717423] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.309 [2024-10-13 01:21:11.717453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.309 [2024-10-13 01:21:11.729347] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.309 [2024-10-13 01:21:11.729377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.309 [2024-10-13 01:21:11.741111] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.309 [2024-10-13 01:21:11.741142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.309 [2024-10-13 01:21:11.752961] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.309 [2024-10-13 01:21:11.752992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.309 [2024-10-13 01:21:11.764605] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.309 [2024-10-13 01:21:11.764633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.309 [2024-10-13 01:21:11.775842] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.309 [2024-10-13 01:21:11.775874] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.309 [2024-10-13 01:21:11.787734] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.309 [2024-10-13 01:21:11.787778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.309 [2024-10-13 01:21:11.799178] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.309 [2024-10-13 01:21:11.799209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.309 [2024-10-13 01:21:11.810839] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.309 [2024-10-13 01:21:11.810870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.309 [2024-10-13 01:21:11.822393] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.309 [2024-10-13 01:21:11.822424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.309 [2024-10-13 01:21:11.833975] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.309 [2024-10-13 01:21:11.834006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.309 [2024-10-13 01:21:11.845722] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.309 [2024-10-13 01:21:11.845766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.309 [2024-10-13 01:21:11.857287] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.309 [2024-10-13 01:21:11.857319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.309 [2024-10-13 01:21:11.868915] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.309 [2024-10-13 01:21:11.868946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.309 [2024-10-13 01:21:11.880443] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.309 [2024-10-13 01:21:11.880484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.567 [2024-10-13 01:21:11.892314] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.567 [2024-10-13 01:21:11.892345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.567 [2024-10-13 01:21:11.903615] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.567 [2024-10-13 01:21:11.903643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.567 10941.50 IOPS, 85.48 MiB/s [2024-10-12T23:21:12.145Z] [2024-10-13 01:21:11.915552] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.567 [2024-10-13 01:21:11.915580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.567 [2024-10-13 01:21:11.927702] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.567 [2024-10-13 01:21:11.927730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.567 [2024-10-13 01:21:11.939341] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.567 [2024-10-13 01:21:11.939371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.567 [2024-10-13 
01:21:11.950569] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.567 [2024-10-13 01:21:11.950604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.567 [2024-10-13 01:21:11.961980] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.567 [2024-10-13 01:21:11.962011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.567 [2024-10-13 01:21:11.973741] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.567 [2024-10-13 01:21:11.973769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.567 [2024-10-13 01:21:11.984881] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.567 [2024-10-13 01:21:11.984912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.567 [2024-10-13 01:21:11.998318] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.567 [2024-10-13 01:21:11.998349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.567 [2024-10-13 01:21:12.009124] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.567 [2024-10-13 01:21:12.009155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.567 [2024-10-13 01:21:12.020872] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.567 [2024-10-13 01:21:12.020903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.567 [2024-10-13 01:21:12.032884] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.567 [2024-10-13 01:21:12.032914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.567 [2024-10-13 01:21:12.044836] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.567 [2024-10-13 01:21:12.044867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.567 [2024-10-13 01:21:12.056741] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.567 [2024-10-13 01:21:12.056776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.567 [2024-10-13 01:21:12.068355] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.567 [2024-10-13 01:21:12.068386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.567 [2024-10-13 01:21:12.079876] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.567 [2024-10-13 01:21:12.079907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.567 [2024-10-13 01:21:12.091947] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.567 [2024-10-13 01:21:12.091977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.567 [2024-10-13 01:21:12.103298] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.567 [2024-10-13 01:21:12.103329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.567 [2024-10-13 01:21:12.114885] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.567 [2024-10-13 01:21:12.114916] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.567 [2024-10-13 01:21:12.126047] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.567 [2024-10-13 01:21:12.126075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.567 [2024-10-13 01:21:12.137507] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.567 [2024-10-13 01:21:12.137543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.826 [2024-10-13 01:21:12.150368] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.826 [2024-10-13 01:21:12.150395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.826 [2024-10-13 01:21:12.161290] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.826 [2024-10-13 01:21:12.161317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.826 [2024-10-13 01:21:12.172516] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.826 [2024-10-13 01:21:12.172544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.826 [2024-10-13 01:21:12.183431] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.826 [2024-10-13 01:21:12.183459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.826 [2024-10-13 01:21:12.194129] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.826 [2024-10-13 01:21:12.194156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.826 [2024-10-13 01:21:12.206971] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.826 [2024-10-13 01:21:12.206999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.826 [2024-10-13 01:21:12.216956] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.826 [2024-10-13 01:21:12.216984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.826 [2024-10-13 01:21:12.228309] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.826 [2024-10-13 01:21:12.228336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.826 [2024-10-13 01:21:12.240827] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.826 [2024-10-13 01:21:12.240854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.826 [2024-10-13 01:21:12.250761] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.826 [2024-10-13 01:21:12.250790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.826 [2024-10-13 01:21:12.261460] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.826 [2024-10-13 01:21:12.261496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.826 [2024-10-13 01:21:12.273975] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.826 [2024-10-13 01:21:12.274009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.826 [2024-10-13 01:21:12.283829] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.826 [2024-10-13 01:21:12.283856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.826 [2024-10-13 01:21:12.294554] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.826 [2024-10-13 01:21:12.294581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.826 [2024-10-13 01:21:12.307454] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.826 [2024-10-13 01:21:12.307491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.826 [2024-10-13 01:21:12.318028] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.826 [2024-10-13 01:21:12.318056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.826 [2024-10-13 01:21:12.328618] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.826 [2024-10-13 01:21:12.328646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.826 [2024-10-13 01:21:12.339489] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.826 [2024-10-13 01:21:12.339516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.826 [2024-10-13 01:21:12.349951] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.826 [2024-10-13 01:21:12.349980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.826 [2024-10-13 01:21:12.360785] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.826 [2024-10-13 01:21:12.360813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.826 [2024-10-13 01:21:12.371789] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.826 [2024-10-13 01:21:12.371816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.826 [2024-10-13 01:21:12.385485] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.826 [2024-10-13 01:21:12.385525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.826 [2024-10-13 01:21:12.396452] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.826 [2024-10-13 01:21:12.396493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.084 [2024-10-13 01:21:12.407307] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.084 [2024-10-13 01:21:12.407339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.084 [2024-10-13 01:21:12.419067] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.084 [2024-10-13 01:21:12.419099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.084 [2024-10-13 01:21:12.430268] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.084 [2024-10-13 01:21:12.430299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.084 [2024-10-13 01:21:12.441986] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.084 [2024-10-13 01:21:12.442017] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.084 [2024-10-13 01:21:12.453965] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.084 [2024-10-13 01:21:12.453996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.084 [2024-10-13 01:21:12.465818] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.084 [2024-10-13 01:21:12.465849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.084 [2024-10-13 01:21:12.477655] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.084 [2024-10-13 01:21:12.477684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.084 [2024-10-13 01:21:12.489961] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.084 [2024-10-13 01:21:12.490004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.084 [2024-10-13 01:21:12.501657] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.084 [2024-10-13 01:21:12.501685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.084 [2024-10-13 01:21:12.513727] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.084 [2024-10-13 01:21:12.513755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.084 [2024-10-13 01:21:12.525338] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.084 [2024-10-13 01:21:12.525370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.084 [2024-10-13 01:21:12.536905] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.084 [2024-10-13 01:21:12.536938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.084 [2024-10-13 01:21:12.548389] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.084 [2024-10-13 01:21:12.548420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.084 [2024-10-13 01:21:12.560167] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.084 [2024-10-13 01:21:12.560197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.084 [2024-10-13 01:21:12.571649] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.084 [2024-10-13 01:21:12.571676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.085 [2024-10-13 01:21:12.583011] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.085 [2024-10-13 01:21:12.583042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.085 [2024-10-13 01:21:12.594658] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.085 [2024-10-13 01:21:12.594685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.085 [2024-10-13 01:21:12.606345] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.085 [2024-10-13 01:21:12.606376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.085 [2024-10-13 01:21:12.617997] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.085 [2024-10-13 01:21:12.618027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.085 [2024-10-13 01:21:12.629664] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.085 [2024-10-13 01:21:12.629692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.085 [2024-10-13 01:21:12.641447] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.085 [2024-10-13 01:21:12.641486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.085 [2024-10-13 01:21:12.653253] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.085 [2024-10-13 01:21:12.653284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.343 [2024-10-13 01:21:12.665258] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.343 [2024-10-13 01:21:12.665289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.343 [2024-10-13 01:21:12.676846] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.343 [2024-10-13 01:21:12.676877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.343 [2024-10-13 01:21:12.688391] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.343 [2024-10-13 01:21:12.688421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.343 [2024-10-13 01:21:12.700097] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.343 [2024-10-13 01:21:12.700127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.343 [2024-10-13 01:21:12.712388] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.343 [2024-10-13 01:21:12.712419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.343 [2024-10-13 01:21:12.723932] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.343 [2024-10-13 01:21:12.723963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.343 [2024-10-13 01:21:12.734949] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.343 [2024-10-13 01:21:12.734980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.343 [2024-10-13 01:21:12.746454] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.343 [2024-10-13 01:21:12.746493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.343 [2024-10-13 01:21:12.757822] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.343 [2024-10-13 01:21:12.757853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.343 [2024-10-13 01:21:12.771057] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.343 [2024-10-13 01:21:12.771088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.343 [2024-10-13 01:21:12.782246] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.343 [2024-10-13 01:21:12.782277] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.343 [2024-10-13 01:21:12.793695] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.343 [2024-10-13 01:21:12.793724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.343 [2024-10-13 01:21:12.805929] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.343 [2024-10-13 01:21:12.805961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.343 [2024-10-13 01:21:12.817062] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.343 [2024-10-13 01:21:12.817108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.343 [2024-10-13 01:21:12.828887] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.343 [2024-10-13 01:21:12.828918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.343 [2024-10-13 01:21:12.841014] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.343 [2024-10-13 01:21:12.841044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.343 [2024-10-13 01:21:12.852994] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.343 [2024-10-13 01:21:12.853026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.343 [2024-10-13 01:21:12.864699] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.343 [2024-10-13 01:21:12.864728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.343 [2024-10-13 01:21:12.886135] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.343 [2024-10-13 01:21:12.886167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.343 [2024-10-13 01:21:12.897652] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.343 [2024-10-13 01:21:12.897680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.343 [2024-10-13 01:21:12.909983] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.343 [2024-10-13 01:21:12.910015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.343 10966.80 IOPS, 85.68 MiB/s [2024-10-12T23:21:12.921Z] [2024-10-13 01:21:12.921330] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.343 [2024-10-13 01:21:12.921358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.601 00:10:27.601 Latency(us) 00:10:27.601 [2024-10-12T23:21:13.179Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:27.601 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:27.601 Nvme1n1 : 5.01 10971.18 85.71 0.00 0.00 11650.71 4951.61 22330.79 00:10:27.601 [2024-10-12T23:21:13.179Z] =================================================================================================================== 00:10:27.602 [2024-10-12T23:21:13.180Z] Total : 10971.18 85.71 0.00 0.00 11650.71 4951.61 22330.79 00:10:27.602 [2024-10-13 01:21:12.928584] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.602 [2024-10-13 
01:21:12.928610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.602 [2024-10-13 01:21:12.936607] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.602 [2024-10-13 01:21:12.936633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.602 [2024-10-13 01:21:12.944646] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.602 [2024-10-13 01:21:12.944679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.602 [2024-10-13 01:21:12.952695] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.602 [2024-10-13 01:21:12.952741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.602 [2024-10-13 01:21:12.960711] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.602 [2024-10-13 01:21:12.960755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.602 [2024-10-13 01:21:12.968767] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.602 [2024-10-13 01:21:12.968816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.602 [2024-10-13 01:21:12.976759] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.602 [2024-10-13 01:21:12.976806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.602 [2024-10-13 01:21:12.984778] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.602 [2024-10-13 01:21:12.984823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.602 [2024-10-13 01:21:12.992799] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.602 [2024-10-13 01:21:12.992847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.602 [2024-10-13 01:21:13.000828] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.602 [2024-10-13 01:21:13.000875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.602 [2024-10-13 01:21:13.008842] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.602 [2024-10-13 01:21:13.008889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.602 [2024-10-13 01:21:13.016871] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.602 [2024-10-13 01:21:13.016917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.602 [2024-10-13 01:21:13.024891] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.602 [2024-10-13 01:21:13.024938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.602 [2024-10-13 01:21:13.032909] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.602 [2024-10-13 01:21:13.032959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.602 [2024-10-13 01:21:13.040931] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.602 [2024-10-13 01:21:13.040977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.602 [2024-10-13 01:21:13.048954] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.602 [2024-10-13 01:21:13.049001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.602 [2024-10-13 01:21:13.056984] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.602 [2024-10-13 01:21:13.057047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.602 [2024-10-13 01:21:13.064990] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.602 [2024-10-13 01:21:13.065034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.602 [2024-10-13 01:21:13.072978] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.602 [2024-10-13 01:21:13.073004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.602 [2024-10-13 01:21:13.081023] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.602 [2024-10-13 01:21:13.081062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.602 [2024-10-13 01:21:13.089061] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.602 [2024-10-13 01:21:13.089112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.602 [2024-10-13 01:21:13.097087] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.602 [2024-10-13 01:21:13.097135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.602 [2024-10-13 01:21:13.105069] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.602 [2024-10-13 01:21:13.105094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.602 [2024-10-13 01:21:13.113090] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.602 [2024-10-13 01:21:13.113116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.602 [2024-10-13 01:21:13.121111] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.602 [2024-10-13 01:21:13.121135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1510434) - No such process 00:10:27.602 01:21:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1510434 00:10:27.602 01:21:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.602 01:21:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.602 01:21:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:27.602 01:21:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.602 01:21:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:27.602 01:21:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.602 01:21:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:27.602 delay0 00:10:27.602 01:21:13 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.602 01:21:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:27.602 01:21:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.602 01:21:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:27.602 01:21:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.602 01:21:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:27.860 [2024-10-13 01:21:13.206595] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:35.967 Initializing NVMe Controllers 00:10:35.967 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:35.967 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:35.967 Initialization complete. Launching workers. 00:10:35.967 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 316, failed: 5138 00:10:35.967 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 5421, failed to submit 33 00:10:35.967 success 5254, unsuccessful 167, failed 0 00:10:35.967 01:21:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:35.967 01:21:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:35.967 01:21:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:35.967 01:21:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:35.967 01:21:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:35.967 01:21:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:35.967 01:21:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:35.967 01:21:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:35.967 rmmod nvme_tcp 00:10:35.967 rmmod nvme_fabrics 00:10:35.967 rmmod nvme_keyring 00:10:35.967 01:21:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:35.967 01:21:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:35.967 01:21:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:35.967 01:21:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 1508906 ']' 00:10:35.967 01:21:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 1508906 00:10:35.967 01:21:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 1508906 ']' 00:10:35.967 01:21:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 1508906 00:10:35.967 01:21:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:10:35.967 01:21:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:35.967 01:21:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # 
ps --no-headers -o comm= 1508906 00:10:35.967 01:21:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:35.967 01:21:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:35.967 01:21:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1508906' 00:10:35.967 killing process with pid 1508906 00:10:35.967 01:21:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 1508906 00:10:35.967 01:21:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 1508906 00:10:35.967 01:21:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:35.967 01:21:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:35.967 01:21:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:35.967 01:21:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:35.967 01:21:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:10:35.967 01:21:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:35.967 01:21:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:10:35.967 01:21:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:35.967 01:21:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:35.967 01:21:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.967 01:21:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:35.967 01:21:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.342 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:37.342 00:10:37.342 real 0m28.679s 00:10:37.342 user 0m42.135s 00:10:37.342 sys 0m8.742s 00:10:37.342 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:37.342 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:37.342 ************************************ 00:10:37.342 END TEST nvmf_zcopy 00:10:37.342 ************************************ 00:10:37.342 01:21:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:37.342 01:21:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:37.342 01:21:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:37.342 01:21:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:37.342 ************************************ 00:10:37.342 START TEST nvmf_nmic 00:10:37.342 ************************************ 00:10:37.342 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:37.342 * Looking for test storage... 
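The teardown traced a few lines above is the zcopy test's nvmftestfini: the kernel NVMe/TCP initiator modules are unloaded, the nvmf target process started by nvmftestinit is killed, the SPDK-tagged iptables rules are dropped, and the test interface address is flushed. A minimal manual equivalent of those steps, assuming the pid (1508906), namespace (cvl_0_0_ns_spdk) and interface names (cvl_0_0/cvl_0_1) seen in this run; this is a sketch of what the trace does, not the literal nvmf/common.sh source:

  sync
  modprobe -v -r nvme-tcp       # the trace shows this also pulls out nvme_fabrics and nvme_keyring
  modprobe -v -r nvme-fabrics
  kill 1508906                  # nvmf target launched by nvmftestinit in this run
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK_NVMF-tagged rules
  ip netns delete cvl_0_0_ns_spdk   # assumption: what _remove_spdk_ns amounts to here (its body is xtrace-disabled)
  ip -4 addr flush cvl_0_1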
00:10:37.342 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:37.342 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:37.342 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:10:37.342 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:37.342 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:37.342 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:37.342 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:37.342 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:37.342 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:37.342 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:37.342 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:37.342 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:37.342 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:37.342 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:37.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.343 --rc genhtml_branch_coverage=1 00:10:37.343 --rc genhtml_function_coverage=1 00:10:37.343 --rc genhtml_legend=1 00:10:37.343 --rc geninfo_all_blocks=1 00:10:37.343 --rc geninfo_unexecuted_blocks=1 00:10:37.343 00:10:37.343 ' 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:37.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.343 --rc genhtml_branch_coverage=1 00:10:37.343 --rc genhtml_function_coverage=1 00:10:37.343 --rc genhtml_legend=1 00:10:37.343 --rc geninfo_all_blocks=1 00:10:37.343 --rc geninfo_unexecuted_blocks=1 00:10:37.343 00:10:37.343 ' 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:37.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.343 --rc genhtml_branch_coverage=1 00:10:37.343 --rc genhtml_function_coverage=1 00:10:37.343 --rc genhtml_legend=1 00:10:37.343 --rc geninfo_all_blocks=1 00:10:37.343 --rc geninfo_unexecuted_blocks=1 00:10:37.343 00:10:37.343 ' 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:37.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.343 --rc genhtml_branch_coverage=1 00:10:37.343 --rc genhtml_function_coverage=1 00:10:37.343 --rc genhtml_legend=1 00:10:37.343 --rc geninfo_all_blocks=1 00:10:37.343 --rc geninfo_unexecuted_blocks=1 00:10:37.343 00:10:37.343 ' 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
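The version gate traced just above (lt 1.15 2, i.e. cmp_versions 1.15 '<' 2) checks whether the installed lcov is older than 2.x by splitting both version strings on '.', '-' and ':' and comparing them component by component. A rough reconstruction of that helper, inferred from the traced commands rather than copied from scripts/common.sh:

  cmp_versions() {              # usage: cmp_versions 1.15 '<' 2
      local IFS=.-:
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      local op=$2
      read -ra ver2 <<< "$3"
      local v a b
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          a=${ver1[v]:-0}; b=${ver2[v]:-0}
          (( a > b )) && { [[ $op == '>' || $op == '>=' ]]; return; }
          (( a < b )) && { [[ $op == '<' || $op == '<=' ]]; return; }
      done
      [[ $op == '<=' || $op == '>=' || $op == '==' ]]    # all components equal
  }
  lt() { cmp_versions "$1" '<' "$2"; }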
00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:37.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:37.343 
01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:37.343 01:21:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:39.874 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:39.874 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:39.874 01:21:25 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:39.874 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:39.874 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
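Note: the device discovery above reduces to a sysfs lookup. For each whitelisted PCI function, nvmf/common.sh lists the net/ directory under the device node and strips the paths down to interface names. A minimal sketch of that lookup, using the 0000:0a:00.0 address and cvl_0_0 name seen in this run:

# enumerate the kernel network interfaces bound to a PCI function (standard sysfs layout assumed)
pci=0000:0a:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"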
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:39.874 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:39.875 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:39.875 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:39.875 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:39.875 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:39.875 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:39.875 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:39.875 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:39.875 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:39.875 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:39.875 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:39.875 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:39.875 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.331 ms 00:10:39.875 00:10:39.875 --- 10.0.0.2 ping statistics --- 00:10:39.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.875 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:10:39.875 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:39.875 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:39.875 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:10:39.875 00:10:39.875 --- 10.0.0.1 ping statistics --- 00:10:39.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.875 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:10:39.875 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:39.875 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:10:39.875 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:39.875 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:39.875 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:39.875 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:39.875 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:39.875 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:39.875 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:39.875 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:39.875 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:39.875 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:39.875 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:39.875 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=1514395 00:10:39.875 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:39.875 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 1514395 00:10:39.875 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 1514395 ']' 00:10:39.875 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.875 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:39.875 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.875 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:39.875 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:39.875 [2024-10-13 01:21:25.261010] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
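Note: before the target starts, nvmf_tcp_init carves the two e810 ports into a point-to-point test topology: cvl_0_0 is moved into a private namespace and addressed as 10.0.0.2 (target side), cvl_0_1 stays in the root namespace as 10.0.0.1 (initiator side), an iptables rule opens the NVMe/TCP port, and a ping in each direction confirms the path. Condensed from the trace above; interface names and addresses are specific to this rig:

ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP on the discovery/IO port
ping -c 1 10.0.0.2                                                  # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target namespace -> initiator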
00:10:39.875 [2024-10-13 01:21:25.261111] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:39.875 [2024-10-13 01:21:25.325581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:39.875 [2024-10-13 01:21:25.376112] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:39.875 [2024-10-13 01:21:25.376167] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:39.875 [2024-10-13 01:21:25.376194] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:39.875 [2024-10-13 01:21:25.376206] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:39.875 [2024-10-13 01:21:25.376216] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:39.875 [2024-10-13 01:21:25.377957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:39.875 [2024-10-13 01:21:25.378023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:39.875 [2024-10-13 01:21:25.378089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:39.875 [2024-10-13 01:21:25.378092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.141 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:40.141 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:10:40.141 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:40.141 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:40.141 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:40.141 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:40.141 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:40.141 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.141 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:40.141 [2024-10-13 01:21:25.531019] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:40.141 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.141 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:40.141 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.141 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:40.141 Malloc0 00:10:40.141 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.141 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:40.141 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.141 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:10:40.141 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.141 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:40.141 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.141 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:40.141 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.141 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:40.141 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.141 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:40.141 [2024-10-13 01:21:25.602251] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:40.141 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.141 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:40.141 test case1: single bdev can't be used in multiple subsystems 00:10:40.141 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:40.141 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.141 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:40.141 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.141 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:40.141 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.141 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:40.141 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.141 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:40.141 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:40.141 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.141 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:40.141 [2024-10-13 01:21:25.626060] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:40.141 [2024-10-13 01:21:25.626089] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:40.141 [2024-10-13 01:21:25.626121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.141 request: 00:10:40.141 { 00:10:40.141 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:40.141 "namespace": { 00:10:40.141 "bdev_name": "Malloc0", 00:10:40.141 "no_auto_visible": false 
00:10:40.141 }, 00:10:40.141 "method": "nvmf_subsystem_add_ns", 00:10:40.141 "req_id": 1 00:10:40.141 } 00:10:40.141 Got JSON-RPC error response 00:10:40.141 response: 00:10:40.141 { 00:10:40.141 "code": -32602, 00:10:40.141 "message": "Invalid parameters" 00:10:40.141 } 00:10:40.141 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:40.141 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:40.141 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:40.141 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:40.141 Adding namespace failed - expected result. 00:10:40.141 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:40.141 test case2: host connect to nvmf target in multiple paths 00:10:40.141 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:40.141 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.141 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:40.141 [2024-10-13 01:21:25.634172] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:40.141 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.142 01:21:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:40.769 01:21:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:41.335 01:21:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:41.335 01:21:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:10:41.335 01:21:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:41.335 01:21:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:41.335 01:21:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:10:43.858 01:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:43.858 01:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:43.858 01:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:43.858 01:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:43.858 01:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:43.858 01:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:43.858 01:21:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
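Note: test case 1 verifies that a malloc bdev already claimed as a namespace of cnode1 cannot be added to a second subsystem, and test case 2 exposes cnode1 on a second port so the host can connect over two paths. Stripped of the xtrace plumbing, the RPC/connect sequence is roughly as follows (rpc.py path shortened for readability; host NQN/ID come from the NVME_HOST variables set up earlier):

rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# test case 1: adding the same bdev to a second subsystem must be rejected
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0      # fails: Malloc0 already claimed (expected)

# test case 2: second listener, then connect to the same subsystem over both paths
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421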
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:43.858 [global] 00:10:43.858 thread=1 00:10:43.858 invalidate=1 00:10:43.858 rw=write 00:10:43.858 time_based=1 00:10:43.858 runtime=1 00:10:43.858 ioengine=libaio 00:10:43.858 direct=1 00:10:43.858 bs=4096 00:10:43.858 iodepth=1 00:10:43.858 norandommap=0 00:10:43.858 numjobs=1 00:10:43.858 00:10:43.858 verify_dump=1 00:10:43.858 verify_backlog=512 00:10:43.858 verify_state_save=0 00:10:43.858 do_verify=1 00:10:43.858 verify=crc32c-intel 00:10:43.858 [job0] 00:10:43.858 filename=/dev/nvme0n1 00:10:43.858 Could not set queue depth (nvme0n1) 00:10:43.858 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:43.858 fio-3.35 00:10:43.858 Starting 1 thread 00:10:44.790 00:10:44.790 job0: (groupid=0, jobs=1): err= 0: pid=1514911: Sun Oct 13 01:21:30 2024 00:10:44.790 read: IOPS=1994, BW=7976KiB/s (8167kB/s)(7984KiB/1001msec) 00:10:44.790 slat (nsec): min=6694, max=70064, avg=12973.47, stdev=5462.75 00:10:44.790 clat (usec): min=173, max=657, avg=252.91, stdev=40.59 00:10:44.790 lat (usec): min=182, max=673, avg=265.88, stdev=41.93 00:10:44.790 clat percentiles (usec): 00:10:44.790 | 1.00th=[ 206], 5.00th=[ 221], 10.00th=[ 225], 20.00th=[ 231], 00:10:44.790 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 245], 60.00th=[ 249], 00:10:44.790 | 70.00th=[ 255], 80.00th=[ 265], 90.00th=[ 289], 95.00th=[ 306], 00:10:44.790 | 99.00th=[ 469], 99.50th=[ 515], 99.90th=[ 627], 99.95th=[ 660], 00:10:44.790 | 99.99th=[ 660] 00:10:44.790 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:44.790 slat (usec): min=8, max=27879, avg=28.83, stdev=615.76 00:10:44.790 clat (usec): min=125, max=1178, avg=192.40, stdev=48.97 00:10:44.790 lat (usec): min=134, max=28107, avg=221.24, stdev=618.57 00:10:44.790 clat percentiles (usec): 00:10:44.790 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 153], 00:10:44.790 | 30.00th=[ 163], 40.00th=[ 174], 50.00th=[ 180], 60.00th=[ 188], 00:10:44.790 | 70.00th=[ 210], 80.00th=[ 235], 90.00th=[ 253], 95.00th=[ 277], 00:10:44.790 | 99.00th=[ 314], 99.50th=[ 334], 99.90th=[ 355], 99.95th=[ 388], 00:10:44.790 | 99.99th=[ 1172] 00:10:44.790 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:10:44.790 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:44.790 lat (usec) : 250=76.24%, 500=23.42%, 750=0.32% 00:10:44.790 lat (msec) : 2=0.02% 00:10:44.790 cpu : usr=4.80%, sys=7.20%, ctx=4047, majf=0, minf=1 00:10:44.790 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:44.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.790 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.790 issued rwts: total=1996,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.790 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:44.790 00:10:44.790 Run status group 0 (all jobs): 00:10:44.790 READ: bw=7976KiB/s (8167kB/s), 7976KiB/s-7976KiB/s (8167kB/s-8167kB/s), io=7984KiB (8176kB), run=1001-1001msec 00:10:44.790 WRITE: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:10:44.790 00:10:44.790 Disk stats (read/write): 00:10:44.790 nvme0n1: ios=1667/2048, merge=0/0, ticks=1357/388, in_queue=1745, util=98.50% 00:10:44.790 01:21:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
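Note: the fio-wrapper run above is a one-second, single-job, queue-depth-1 verified write test against the freshly connected namespace. A roughly equivalent stand-alone invocation of the same job file is sketched below; the /dev/nvme0n1 node matches this run but depends on enumeration order elsewhere:

fio --name=job0 --filename=/dev/nvme0n1 \
    --ioengine=libaio --direct=1 --thread=1 \
    --rw=write --bs=4096 --iodepth=1 --numjobs=1 \
    --time_based=1 --runtime=1 \
    --do_verify=1 --verify=crc32c-intel --verify_dump=1 --verify_backlog=512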
target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:44.790 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:44.790 01:21:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:44.790 01:21:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:44.790 01:21:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:44.790 01:21:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:45.048 01:21:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:45.048 01:21:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:45.048 01:21:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:45.048 01:21:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:45.048 01:21:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:45.048 01:21:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:45.048 01:21:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:45.048 01:21:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:45.048 01:21:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:45.048 01:21:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:45.048 01:21:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:45.048 rmmod nvme_tcp 00:10:45.048 rmmod nvme_fabrics 00:10:45.048 rmmod nvme_keyring 00:10:45.048 01:21:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:45.048 01:21:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:45.048 01:21:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:45.048 01:21:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 1514395 ']' 00:10:45.048 01:21:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 1514395 00:10:45.048 01:21:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 1514395 ']' 00:10:45.048 01:21:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 1514395 00:10:45.048 01:21:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:10:45.048 01:21:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:45.048 01:21:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1514395 00:10:45.048 01:21:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:45.048 01:21:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:45.048 01:21:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1514395' 00:10:45.048 killing process with pid 1514395 00:10:45.048 01:21:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 1514395 00:10:45.048 01:21:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
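Note: nvmftestfini then undoes the setup in reverse: disconnect the host, unload the initiator modules, stop the target, and tear down the firewall rule and namespace. A condensed sketch of the teardown steps visible in this block and the next; the explicit namespace deletion is an approximation of what the _remove_spdk_ns helper amounts to here:

nvme disconnect -n nqn.2016-06.io.spdk:cnode1            # drops both paths (4420 and 4421)
modprobe -v -r nvme-tcp                                  # also unloads nvme_fabrics / nvme_keyring
kill "$nvmfpid" && wait "$nvmfpid"                       # stop the nvmf_tgt reactor process
iptables-save | grep -v SPDK_NVMF | iptables-restore     # remove the SPDK_NVMF accept rule
ip netns delete cvl_0_0_ns_spdk                          # approximation of _remove_spdk_ns
ip -4 addr flush cvl_0_1                                 # clear the initiator address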
common/autotest_common.sh@974 -- # wait 1514395 00:10:45.307 01:21:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:45.307 01:21:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:45.307 01:21:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:45.307 01:21:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:45.307 01:21:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:10:45.307 01:21:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:45.307 01:21:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:10:45.307 01:21:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:45.307 01:21:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:45.307 01:21:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:45.307 01:21:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:45.307 01:21:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:47.210 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:47.210 00:10:47.210 real 0m10.024s 00:10:47.210 user 0m22.403s 00:10:47.210 sys 0m2.543s 00:10:47.210 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:47.210 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:47.210 ************************************ 00:10:47.210 END TEST nvmf_nmic 00:10:47.210 ************************************ 00:10:47.210 01:21:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:47.210 01:21:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:47.210 01:21:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:47.210 01:21:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:47.469 ************************************ 00:10:47.469 START TEST nvmf_fio_target 00:10:47.469 ************************************ 00:10:47.469 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:47.469 * Looking for test storage... 
00:10:47.469 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:47.469 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:47.469 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:10:47.469 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:47.469 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:47.469 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:47.469 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:47.469 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:47.469 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:47.469 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:47.469 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:47.469 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:47.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.470 --rc genhtml_branch_coverage=1 00:10:47.470 --rc genhtml_function_coverage=1 00:10:47.470 --rc genhtml_legend=1 00:10:47.470 --rc geninfo_all_blocks=1 00:10:47.470 --rc geninfo_unexecuted_blocks=1 00:10:47.470 00:10:47.470 ' 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:47.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.470 --rc genhtml_branch_coverage=1 00:10:47.470 --rc genhtml_function_coverage=1 00:10:47.470 --rc genhtml_legend=1 00:10:47.470 --rc geninfo_all_blocks=1 00:10:47.470 --rc geninfo_unexecuted_blocks=1 00:10:47.470 00:10:47.470 ' 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:47.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.470 --rc genhtml_branch_coverage=1 00:10:47.470 --rc genhtml_function_coverage=1 00:10:47.470 --rc genhtml_legend=1 00:10:47.470 --rc geninfo_all_blocks=1 00:10:47.470 --rc geninfo_unexecuted_blocks=1 00:10:47.470 00:10:47.470 ' 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:47.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.470 --rc genhtml_branch_coverage=1 00:10:47.470 --rc genhtml_function_coverage=1 00:10:47.470 --rc genhtml_legend=1 00:10:47.470 --rc geninfo_all_blocks=1 00:10:47.470 --rc geninfo_unexecuted_blocks=1 00:10:47.470 00:10:47.470 ' 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
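Note: the lcov check above uses the lt/cmp_versions helpers from scripts/common.sh: both version strings are split on dots/dashes/colons and compared field by field, so 1.15 sorts below 2. A simplified sketch of that comparison (version_lt is a placeholder name; the real helper also handles '>', '>=', '<=' operators):

# return success if $1 < $2, comparing dotted version fields numerically
version_lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v
    for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal, so not less-than
}

version_lt 1.15 2 && echo "lcov 1.15 is older than 2"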
uname -s 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:47.470 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:47.470 01:21:32 
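Note: the "[: : integer expression expected" message above appears benign for this run: build_nvmf_app_args tests an unset variable with -eq, so test sees an empty string where it expects a number and the branch simply falls through. An illustration of the failure and a null-safe variant; SOME_FLAG is a placeholder, not the actual variable name in nvmf/common.sh:

SOME_FLAG=''                                          # hypothetical stand-in for the unset variable
if [ "$SOME_FLAG" -eq 1 ]; then echo on; fi           # -> "[: : integer expression expected"
if [ "${SOME_FLAG:-0}" -eq 1 ]; then echo on; fi      # null-safe: empty value defaults to 0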
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:47.470 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:47.471 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:47.471 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:47.471 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:47.471 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:47.471 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:47.471 01:21:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:50.002 01:21:34 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:50.002 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:50.002 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:50.002 01:21:34 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:50.002 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:50.002 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:50.002 01:21:34 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:50.002 01:21:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:50.002 01:21:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:50.002 01:21:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:50.002 01:21:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:50.002 01:21:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:50.002 01:21:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:50.002 01:21:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:50.002 01:21:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:50.002 01:21:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:50.002 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:50.002 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:10:50.002 00:10:50.002 --- 10.0.0.2 ping statistics --- 00:10:50.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.003 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:10:50.003 01:21:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:50.003 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:50.003 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:10:50.003 00:10:50.003 --- 10.0.0.1 ping statistics --- 00:10:50.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.003 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:10:50.003 01:21:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:50.003 01:21:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:10:50.003 01:21:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:50.003 01:21:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:50.003 01:21:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:50.003 01:21:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:50.003 01:21:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:50.003 01:21:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:50.003 01:21:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:50.003 01:21:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:50.003 01:21:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:50.003 01:21:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:50.003 01:21:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.003 01:21:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=1517007 00:10:50.003 01:21:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:50.003 01:21:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 1517007 00:10:50.003 01:21:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 1517007 ']' 00:10:50.003 01:21:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.003 01:21:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:50.003 01:21:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.003 01:21:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:50.003 01:21:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.003 [2024-10-13 01:21:35.177421] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
00:10:50.003 [2024-10-13 01:21:35.177520] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:50.003 [2024-10-13 01:21:35.247235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:50.003 [2024-10-13 01:21:35.296701] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:50.003 [2024-10-13 01:21:35.296764] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:50.003 [2024-10-13 01:21:35.296791] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:50.003 [2024-10-13 01:21:35.296805] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:50.003 [2024-10-13 01:21:35.296817] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:50.003 [2024-10-13 01:21:35.298497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:50.003 [2024-10-13 01:21:35.298541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:50.003 [2024-10-13 01:21:35.298634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:50.003 [2024-10-13 01:21:35.298637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.003 01:21:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:50.003 01:21:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:10:50.003 01:21:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:50.003 01:21:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:50.003 01:21:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.003 01:21:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:50.003 01:21:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:50.260 [2024-10-13 01:21:35.748749] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:50.260 01:21:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:50.518 01:21:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:50.518 01:21:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:51.083 01:21:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:51.083 01:21:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:51.341 01:21:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:51.341 01:21:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:51.599 01:21:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:51.599 01:21:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:51.856 01:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:52.114 01:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:52.114 01:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:52.372 01:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:52.372 01:21:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:52.629 01:21:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:52.630 01:21:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:52.887 01:21:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:53.145 01:21:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:53.145 01:21:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:53.402 01:21:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:53.402 01:21:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:53.660 01:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:53.918 [2024-10-13 01:21:39.415407] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:53.918 01:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:54.175 01:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:54.433 01:21:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:55.367 01:21:40 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:55.367 01:21:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:10:55.367 01:21:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:55.367 01:21:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:55.367 01:21:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:10:55.367 01:21:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:57.264 01:21:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:57.264 01:21:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:57.264 01:21:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:57.264 01:21:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:57.264 01:21:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:57.264 01:21:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:57.264 01:21:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:57.264 [global] 00:10:57.264 thread=1 00:10:57.264 invalidate=1 00:10:57.264 rw=write 00:10:57.264 time_based=1 00:10:57.264 runtime=1 00:10:57.264 ioengine=libaio 00:10:57.264 direct=1 00:10:57.264 bs=4096 00:10:57.264 iodepth=1 00:10:57.264 norandommap=0 00:10:57.264 numjobs=1 00:10:57.264 00:10:57.264 verify_dump=1 00:10:57.264 verify_backlog=512 00:10:57.264 verify_state_save=0 00:10:57.264 do_verify=1 00:10:57.264 verify=crc32c-intel 00:10:57.264 [job0] 00:10:57.264 filename=/dev/nvme0n1 00:10:57.264 [job1] 00:10:57.264 filename=/dev/nvme0n2 00:10:57.264 [job2] 00:10:57.264 filename=/dev/nvme0n3 00:10:57.264 [job3] 00:10:57.264 filename=/dev/nvme0n4 00:10:57.264 Could not set queue depth (nvme0n1) 00:10:57.264 Could not set queue depth (nvme0n2) 00:10:57.264 Could not set queue depth (nvme0n3) 00:10:57.264 Could not set queue depth (nvme0n4) 00:10:57.522 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:57.522 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:57.522 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:57.522 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:57.522 fio-3.35 00:10:57.522 Starting 4 threads 00:10:58.893 00:10:58.893 job0: (groupid=0, jobs=1): err= 0: pid=1518080: Sun Oct 13 01:21:44 2024 00:10:58.893 read: IOPS=21, BW=87.3KiB/s (89.4kB/s)(88.0KiB/1008msec) 00:10:58.893 slat (nsec): min=7041, max=33312, avg=23287.77, stdev=10024.24 00:10:58.893 clat (usec): min=376, max=43002, avg=39382.21, stdev=8730.48 00:10:58.893 lat (usec): min=391, max=43019, avg=39405.50, stdev=8732.37 00:10:58.893 clat percentiles (usec): 00:10:58.893 | 1.00th=[ 375], 5.00th=[40633], 10.00th=[41157], 
20.00th=[41157], 00:10:58.893 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:58.893 | 70.00th=[41157], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:58.893 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:10:58.893 | 99.99th=[43254] 00:10:58.893 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:10:58.893 slat (nsec): min=7183, max=68000, avg=17729.42, stdev=7804.57 00:10:58.893 clat (usec): min=175, max=487, avg=253.14, stdev=41.21 00:10:58.893 lat (usec): min=188, max=511, avg=270.87, stdev=39.56 00:10:58.893 clat percentiles (usec): 00:10:58.893 | 1.00th=[ 184], 5.00th=[ 196], 10.00th=[ 204], 20.00th=[ 223], 00:10:58.893 | 30.00th=[ 233], 40.00th=[ 239], 50.00th=[ 247], 60.00th=[ 260], 00:10:58.893 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 302], 95.00th=[ 322], 00:10:58.893 | 99.00th=[ 388], 99.50th=[ 453], 99.90th=[ 486], 99.95th=[ 486], 00:10:58.893 | 99.99th=[ 486] 00:10:58.893 bw ( KiB/s): min= 4096, max= 4096, per=25.50%, avg=4096.00, stdev= 0.00, samples=1 00:10:58.893 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:58.893 lat (usec) : 250=50.19%, 500=45.88% 00:10:58.893 lat (msec) : 50=3.93% 00:10:58.893 cpu : usr=0.60%, sys=1.19%, ctx=534, majf=0, minf=1 00:10:58.893 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:58.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.893 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.893 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.893 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:58.893 job1: (groupid=0, jobs=1): err= 0: pid=1518090: Sun Oct 13 01:21:44 2024 00:10:58.893 read: IOPS=22, BW=91.9KiB/s (94.1kB/s)(92.0KiB/1001msec) 00:10:58.893 slat (nsec): min=7224, max=35744, avg=24705.91, stdev=10775.95 00:10:58.893 clat (usec): min=291, max=41990, avg=39492.96, stdev=8558.91 00:10:58.893 lat (usec): min=326, max=42003, avg=39517.67, stdev=8556.64 00:10:58.893 clat percentiles (usec): 00:10:58.893 | 1.00th=[ 293], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:58.893 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:58.893 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:58.893 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:58.893 | 99.99th=[42206] 00:10:58.893 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:10:58.893 slat (nsec): min=6261, max=41102, avg=14040.75, stdev=5869.77 00:10:58.893 clat (usec): min=138, max=248, avg=162.48, stdev=11.51 00:10:58.893 lat (usec): min=145, max=256, avg=176.52, stdev=12.55 00:10:58.893 clat percentiles (usec): 00:10:58.893 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:10:58.893 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 163], 00:10:58.893 | 70.00th=[ 167], 80.00th=[ 169], 90.00th=[ 178], 95.00th=[ 184], 00:10:58.893 | 99.00th=[ 196], 99.50th=[ 200], 99.90th=[ 249], 99.95th=[ 249], 00:10:58.893 | 99.99th=[ 249] 00:10:58.893 bw ( KiB/s): min= 4096, max= 4096, per=25.50%, avg=4096.00, stdev= 0.00, samples=1 00:10:58.893 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:58.893 lat (usec) : 250=95.70%, 500=0.19% 00:10:58.893 lat (msec) : 50=4.11% 00:10:58.893 cpu : usr=0.60%, sys=0.40%, ctx=536, majf=0, minf=1 00:10:58.893 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:10:58.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.893 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.893 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.893 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:58.893 job2: (groupid=0, jobs=1): err= 0: pid=1518122: Sun Oct 13 01:21:44 2024 00:10:58.893 read: IOPS=21, BW=86.3KiB/s (88.3kB/s)(88.0KiB/1020msec) 00:10:58.893 slat (nsec): min=12966, max=36587, avg=25341.32, stdev=10516.68 00:10:58.893 clat (usec): min=40688, max=44996, avg=41134.06, stdev=865.17 00:10:58.893 lat (usec): min=40707, max=45019, avg=41159.40, stdev=864.49 00:10:58.893 clat percentiles (usec): 00:10:58.893 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:58.893 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:58.893 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:58.893 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:10:58.893 | 99.99th=[44827] 00:10:58.893 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:10:58.893 slat (nsec): min=6469, max=39931, avg=16933.54, stdev=6945.79 00:10:58.893 clat (usec): min=156, max=274, avg=202.40, stdev=17.10 00:10:58.893 lat (usec): min=164, max=287, avg=219.34, stdev=21.06 00:10:58.893 clat percentiles (usec): 00:10:58.893 | 1.00th=[ 163], 5.00th=[ 172], 10.00th=[ 178], 20.00th=[ 190], 00:10:58.893 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 208], 00:10:58.893 | 70.00th=[ 212], 80.00th=[ 217], 90.00th=[ 221], 95.00th=[ 225], 00:10:58.893 | 99.00th=[ 251], 99.50th=[ 262], 99.90th=[ 277], 99.95th=[ 277], 00:10:58.893 | 99.99th=[ 277] 00:10:58.893 bw ( KiB/s): min= 4096, max= 4096, per=25.50%, avg=4096.00, stdev= 0.00, samples=1 00:10:58.893 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:58.893 lat (usec) : 250=94.76%, 500=1.12% 00:10:58.893 lat (msec) : 50=4.12% 00:10:58.893 cpu : usr=0.00%, sys=1.67%, ctx=535, majf=0, minf=1 00:10:58.893 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:58.893 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.893 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.893 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.893 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:58.893 job3: (groupid=0, jobs=1): err= 0: pid=1518135: Sun Oct 13 01:21:44 2024 00:10:58.893 read: IOPS=2295, BW=9183KiB/s (9403kB/s)(9192KiB/1001msec) 00:10:58.893 slat (nsec): min=4173, max=54383, avg=10370.85, stdev=6280.15 00:10:58.893 clat (usec): min=170, max=435, avg=205.37, stdev=22.68 00:10:58.893 lat (usec): min=175, max=457, avg=215.74, stdev=26.26 00:10:58.893 clat percentiles (usec): 00:10:58.893 | 1.00th=[ 178], 5.00th=[ 186], 10.00th=[ 188], 20.00th=[ 192], 00:10:58.893 | 30.00th=[ 196], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 204], 00:10:58.893 | 70.00th=[ 208], 80.00th=[ 212], 90.00th=[ 221], 95.00th=[ 255], 00:10:58.893 | 99.00th=[ 293], 99.50th=[ 322], 99.90th=[ 408], 99.95th=[ 424], 00:10:58.893 | 99.99th=[ 437] 00:10:58.893 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:58.893 slat (nsec): min=5363, max=68705, avg=13573.68, stdev=6387.38 00:10:58.893 clat (usec): min=128, max=449, avg=177.31, stdev=47.79 00:10:58.893 lat (usec): min=135, max=489, 
avg=190.88, stdev=49.52 00:10:58.894 clat percentiles (usec): 00:10:58.894 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 145], 00:10:58.894 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 165], 00:10:58.894 | 70.00th=[ 190], 80.00th=[ 202], 90.00th=[ 245], 95.00th=[ 269], 00:10:58.894 | 99.00th=[ 375], 99.50th=[ 388], 99.90th=[ 412], 99.95th=[ 424], 00:10:58.894 | 99.99th=[ 449] 00:10:58.894 bw ( KiB/s): min=11528, max=11528, per=71.77%, avg=11528.00, stdev= 0.00, samples=1 00:10:58.894 iops : min= 2882, max= 2882, avg=2882.00, stdev= 0.00, samples=1 00:10:58.894 lat (usec) : 250=92.61%, 500=7.39% 00:10:58.894 cpu : usr=3.40%, sys=5.70%, ctx=4858, majf=0, minf=1 00:10:58.894 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:58.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.894 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.894 issued rwts: total=2298,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.894 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:58.894 00:10:58.894 Run status group 0 (all jobs): 00:10:58.894 READ: bw=9275KiB/s (9497kB/s), 86.3KiB/s-9183KiB/s (88.3kB/s-9403kB/s), io=9460KiB (9687kB), run=1001-1020msec 00:10:58.894 WRITE: bw=15.7MiB/s (16.4MB/s), 2008KiB/s-9.99MiB/s (2056kB/s-10.5MB/s), io=16.0MiB (16.8MB), run=1001-1020msec 00:10:58.894 00:10:58.894 Disk stats (read/write): 00:10:58.894 nvme0n1: ios=68/512, merge=0/0, ticks=724/116, in_queue=840, util=86.27% 00:10:58.894 nvme0n2: ios=42/512, merge=0/0, ticks=1735/81, in_queue=1816, util=97.86% 00:10:58.894 nvme0n3: ios=41/512, merge=0/0, ticks=1688/107, in_queue=1795, util=97.80% 00:10:58.894 nvme0n4: ios=2048/2083, merge=0/0, ticks=402/347, in_queue=749, util=89.55% 00:10:58.894 01:21:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:58.894 [global] 00:10:58.894 thread=1 00:10:58.894 invalidate=1 00:10:58.894 rw=randwrite 00:10:58.894 time_based=1 00:10:58.894 runtime=1 00:10:58.894 ioengine=libaio 00:10:58.894 direct=1 00:10:58.894 bs=4096 00:10:58.894 iodepth=1 00:10:58.894 norandommap=0 00:10:58.894 numjobs=1 00:10:58.894 00:10:58.894 verify_dump=1 00:10:58.894 verify_backlog=512 00:10:58.894 verify_state_save=0 00:10:58.894 do_verify=1 00:10:58.894 verify=crc32c-intel 00:10:58.894 [job0] 00:10:58.894 filename=/dev/nvme0n1 00:10:58.894 [job1] 00:10:58.894 filename=/dev/nvme0n2 00:10:58.894 [job2] 00:10:58.894 filename=/dev/nvme0n3 00:10:58.894 [job3] 00:10:58.894 filename=/dev/nvme0n4 00:10:58.894 Could not set queue depth (nvme0n1) 00:10:58.894 Could not set queue depth (nvme0n2) 00:10:58.894 Could not set queue depth (nvme0n3) 00:10:58.894 Could not set queue depth (nvme0n4) 00:10:58.894 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:58.894 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:58.894 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:58.894 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:58.894 fio-3.35 00:10:58.894 Starting 4 threads 00:11:00.265 00:11:00.265 job0: (groupid=0, jobs=1): err= 0: pid=1518430: Sun Oct 13 01:21:45 2024 00:11:00.265 read: IOPS=511, 
BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:11:00.265 slat (nsec): min=6975, max=64704, avg=16488.71, stdev=8485.90 00:11:00.265 clat (usec): min=201, max=42011, avg=1567.27, stdev=7096.73 00:11:00.265 lat (usec): min=209, max=42029, avg=1583.76, stdev=7098.77 00:11:00.265 clat percentiles (usec): 00:11:00.265 | 1.00th=[ 208], 5.00th=[ 217], 10.00th=[ 223], 20.00th=[ 235], 00:11:00.265 | 30.00th=[ 239], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 265], 00:11:00.265 | 70.00th=[ 281], 80.00th=[ 404], 90.00th=[ 529], 95.00th=[ 578], 00:11:00.265 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:00.265 | 99.99th=[42206] 00:11:00.265 write: IOPS=873, BW=3493KiB/s (3576kB/s)(3496KiB/1001msec); 0 zone resets 00:11:00.265 slat (nsec): min=7927, max=70623, avg=16348.98, stdev=7854.44 00:11:00.265 clat (usec): min=143, max=383, avg=192.77, stdev=39.17 00:11:00.265 lat (usec): min=151, max=406, avg=209.12, stdev=44.10 00:11:00.265 clat percentiles (usec): 00:11:00.265 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 167], 00:11:00.265 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 182], 60.00th=[ 186], 00:11:00.265 | 70.00th=[ 192], 80.00th=[ 208], 90.00th=[ 241], 95.00th=[ 293], 00:11:00.265 | 99.00th=[ 314], 99.50th=[ 330], 99.90th=[ 383], 99.95th=[ 383], 00:11:00.265 | 99.99th=[ 383] 00:11:00.265 bw ( KiB/s): min= 4096, max= 4096, per=21.45%, avg=4096.00, stdev= 0.00, samples=1 00:11:00.265 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:00.265 lat (usec) : 250=74.75%, 500=20.85%, 750=3.03%, 1000=0.07% 00:11:00.265 lat (msec) : 2=0.14%, 50=1.15% 00:11:00.265 cpu : usr=2.00%, sys=2.60%, ctx=1388, majf=0, minf=1 00:11:00.265 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:00.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.265 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.265 issued rwts: total=512,874,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.265 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:00.265 job1: (groupid=0, jobs=1): err= 0: pid=1518431: Sun Oct 13 01:21:45 2024 00:11:00.265 read: IOPS=1534, BW=6139KiB/s (6287kB/s)(6164KiB/1004msec) 00:11:00.265 slat (nsec): min=5488, max=49408, avg=11941.77, stdev=5814.71 00:11:00.265 clat (usec): min=178, max=41034, avg=360.98, stdev=2319.63 00:11:00.265 lat (usec): min=185, max=41068, avg=372.92, stdev=2320.44 00:11:00.265 clat percentiles (usec): 00:11:00.265 | 1.00th=[ 184], 5.00th=[ 188], 10.00th=[ 194], 20.00th=[ 198], 00:11:00.265 | 30.00th=[ 206], 40.00th=[ 219], 50.00th=[ 227], 60.00th=[ 233], 00:11:00.265 | 70.00th=[ 237], 80.00th=[ 243], 90.00th=[ 251], 95.00th=[ 269], 00:11:00.265 | 99.00th=[ 537], 99.50th=[ 603], 99.90th=[41157], 99.95th=[41157], 00:11:00.265 | 99.99th=[41157] 00:11:00.265 write: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec); 0 zone resets 00:11:00.265 slat (nsec): min=7008, max=54591, avg=14775.96, stdev=6701.79 00:11:00.265 clat (usec): min=131, max=727, avg=187.77, stdev=31.87 00:11:00.265 lat (usec): min=138, max=737, avg=202.55, stdev=32.46 00:11:00.265 clat percentiles (usec): 00:11:00.265 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 161], 00:11:00.265 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 184], 60.00th=[ 190], 00:11:00.265 | 70.00th=[ 198], 80.00th=[ 215], 90.00th=[ 229], 95.00th=[ 239], 00:11:00.265 | 99.00th=[ 262], 99.50th=[ 293], 99.90th=[ 343], 99.95th=[ 351], 00:11:00.265 | 99.99th=[ 725] 00:11:00.265 bw ( KiB/s): 
min= 6928, max= 9456, per=42.90%, avg=8192.00, stdev=1787.57, samples=2 00:11:00.265 iops : min= 1732, max= 2364, avg=2048.00, stdev=446.89, samples=2 00:11:00.265 lat (usec) : 250=93.93%, 500=5.46%, 750=0.45%, 1000=0.03% 00:11:00.265 lat (msec) : 50=0.14% 00:11:00.265 cpu : usr=3.59%, sys=6.58%, ctx=3590, majf=0, minf=1 00:11:00.265 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:00.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.265 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.265 issued rwts: total=1541,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.265 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:00.265 job2: (groupid=0, jobs=1): err= 0: pid=1518432: Sun Oct 13 01:21:45 2024 00:11:00.265 read: IOPS=21, BW=85.4KiB/s (87.5kB/s)(88.0KiB/1030msec) 00:11:00.265 slat (nsec): min=6720, max=36082, avg=26467.41, stdev=10148.34 00:11:00.265 clat (usec): min=40482, max=42010, avg=41169.90, stdev=469.69 00:11:00.265 lat (usec): min=40489, max=42034, avg=41196.37, stdev=471.62 00:11:00.265 clat percentiles (usec): 00:11:00.265 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:11:00.265 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:00.265 | 70.00th=[41157], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:00.265 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:00.265 | 99.99th=[42206] 00:11:00.265 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:11:00.265 slat (nsec): min=7039, max=41960, avg=9985.42, stdev=4049.16 00:11:00.265 clat (usec): min=143, max=383, avg=227.39, stdev=25.09 00:11:00.265 lat (usec): min=153, max=391, avg=237.37, stdev=24.65 00:11:00.265 clat percentiles (usec): 00:11:00.265 | 1.00th=[ 169], 5.00th=[ 192], 10.00th=[ 202], 20.00th=[ 212], 00:11:00.265 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 231], 00:11:00.265 | 70.00th=[ 235], 80.00th=[ 243], 90.00th=[ 249], 95.00th=[ 277], 00:11:00.265 | 99.00th=[ 297], 99.50th=[ 347], 99.90th=[ 383], 99.95th=[ 383], 00:11:00.265 | 99.99th=[ 383] 00:11:00.265 bw ( KiB/s): min= 4096, max= 4096, per=21.45%, avg=4096.00, stdev= 0.00, samples=1 00:11:00.265 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:00.265 lat (usec) : 250=87.08%, 500=8.80% 00:11:00.265 lat (msec) : 50=4.12% 00:11:00.265 cpu : usr=0.19%, sys=0.58%, ctx=536, majf=0, minf=1 00:11:00.265 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:00.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.265 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.265 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.265 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:00.265 job3: (groupid=0, jobs=1): err= 0: pid=1518433: Sun Oct 13 01:21:45 2024 00:11:00.265 read: IOPS=1205, BW=4822KiB/s (4938kB/s)(5020KiB/1041msec) 00:11:00.265 slat (nsec): min=5499, max=40729, avg=11584.01, stdev=5545.31 00:11:00.265 clat (usec): min=197, max=42018, avg=537.34, stdev=3256.23 00:11:00.265 lat (usec): min=205, max=42035, avg=548.93, stdev=3256.93 00:11:00.265 clat percentiles (usec): 00:11:00.265 | 1.00th=[ 206], 5.00th=[ 219], 10.00th=[ 229], 20.00th=[ 245], 00:11:00.265 | 30.00th=[ 255], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 273], 00:11:00.265 | 70.00th=[ 281], 80.00th=[ 285], 90.00th=[ 302], 95.00th=[ 
400], 00:11:00.265 | 99.00th=[ 594], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:11:00.265 | 99.99th=[42206] 00:11:00.265 write: IOPS=1475, BW=5902KiB/s (6044kB/s)(6144KiB/1041msec); 0 zone resets 00:11:00.265 slat (nsec): min=7141, max=99751, avg=16799.38, stdev=6211.73 00:11:00.265 clat (usec): min=150, max=518, avg=204.61, stdev=22.94 00:11:00.265 lat (usec): min=159, max=538, avg=221.41, stdev=23.24 00:11:00.265 clat percentiles (usec): 00:11:00.266 | 1.00th=[ 165], 5.00th=[ 178], 10.00th=[ 184], 20.00th=[ 190], 00:11:00.266 | 30.00th=[ 194], 40.00th=[ 196], 50.00th=[ 202], 60.00th=[ 206], 00:11:00.266 | 70.00th=[ 210], 80.00th=[ 219], 90.00th=[ 233], 95.00th=[ 243], 00:11:00.266 | 99.00th=[ 285], 99.50th=[ 306], 99.90th=[ 326], 99.95th=[ 519], 00:11:00.266 | 99.99th=[ 519] 00:11:00.266 bw ( KiB/s): min= 4096, max= 8192, per=32.17%, avg=6144.00, stdev=2896.31, samples=2 00:11:00.266 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:11:00.266 lat (usec) : 250=64.56%, 500=34.25%, 750=0.86% 00:11:00.266 lat (msec) : 4=0.04%, 50=0.29% 00:11:00.266 cpu : usr=2.88%, sys=5.19%, ctx=2792, majf=0, minf=1 00:11:00.266 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:00.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.266 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.266 issued rwts: total=1255,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.266 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:00.266 00:11:00.266 Run status group 0 (all jobs): 00:11:00.266 READ: bw=12.5MiB/s (13.1MB/s), 85.4KiB/s-6139KiB/s (87.5kB/s-6287kB/s), io=13.0MiB (13.6MB), run=1001-1041msec 00:11:00.266 WRITE: bw=18.6MiB/s (19.6MB/s), 1988KiB/s-8159KiB/s (2036kB/s-8355kB/s), io=19.4MiB (20.4MB), run=1001-1041msec 00:11:00.266 00:11:00.266 Disk stats (read/write): 00:11:00.266 nvme0n1: ios=359/512, merge=0/0, ticks=1593/85, in_queue=1678, util=98.20% 00:11:00.266 nvme0n2: ios=1560/2048, merge=0/0, ticks=380/360, in_queue=740, util=87.21% 00:11:00.266 nvme0n3: ios=41/512, merge=0/0, ticks=1684/111, in_queue=1795, util=98.44% 00:11:00.266 nvme0n4: ios=1274/1536, merge=0/0, ticks=635/288, in_queue=923, util=90.77% 00:11:00.266 01:21:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:00.266 [global] 00:11:00.266 thread=1 00:11:00.266 invalidate=1 00:11:00.266 rw=write 00:11:00.266 time_based=1 00:11:00.266 runtime=1 00:11:00.266 ioengine=libaio 00:11:00.266 direct=1 00:11:00.266 bs=4096 00:11:00.266 iodepth=128 00:11:00.266 norandommap=0 00:11:00.266 numjobs=1 00:11:00.266 00:11:00.266 verify_dump=1 00:11:00.266 verify_backlog=512 00:11:00.266 verify_state_save=0 00:11:00.266 do_verify=1 00:11:00.266 verify=crc32c-intel 00:11:00.266 [job0] 00:11:00.266 filename=/dev/nvme0n1 00:11:00.266 [job1] 00:11:00.266 filename=/dev/nvme0n2 00:11:00.266 [job2] 00:11:00.266 filename=/dev/nvme0n3 00:11:00.266 [job3] 00:11:00.266 filename=/dev/nvme0n4 00:11:00.266 Could not set queue depth (nvme0n1) 00:11:00.266 Could not set queue depth (nvme0n2) 00:11:00.266 Could not set queue depth (nvme0n3) 00:11:00.266 Could not set queue depth (nvme0n4) 00:11:00.524 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:00.524 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:11:00.524 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:00.524 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:00.524 fio-3.35 00:11:00.524 Starting 4 threads 00:11:01.897 00:11:01.897 job0: (groupid=0, jobs=1): err= 0: pid=1518661: Sun Oct 13 01:21:47 2024 00:11:01.897 read: IOPS=4043, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1013msec) 00:11:01.897 slat (nsec): min=1922, max=18355k, avg=119436.17, stdev=892136.11 00:11:01.897 clat (usec): min=2792, max=59451, avg=16109.33, stdev=8311.46 00:11:01.897 lat (usec): min=2805, max=59458, avg=16228.77, stdev=8396.74 00:11:01.897 clat percentiles (usec): 00:11:01.897 | 1.00th=[ 5407], 5.00th=[ 9110], 10.00th=[10028], 20.00th=[10552], 00:11:01.897 | 30.00th=[10945], 40.00th=[12518], 50.00th=[13304], 60.00th=[13960], 00:11:01.897 | 70.00th=[17171], 80.00th=[21365], 90.00th=[24773], 95.00th=[33817], 00:11:01.897 | 99.00th=[55313], 99.50th=[58983], 99.90th=[59507], 99.95th=[59507], 00:11:01.897 | 99.99th=[59507] 00:11:01.897 write: IOPS=4112, BW=16.1MiB/s (16.8MB/s)(16.3MiB/1013msec); 0 zone resets 00:11:01.897 slat (usec): min=2, max=13615, avg=89.03, stdev=704.41 00:11:01.897 clat (usec): min=346, max=78559, avg=15046.66, stdev=11801.58 00:11:01.897 lat (usec): min=554, max=78565, avg=15135.69, stdev=11850.03 00:11:01.897 clat percentiles (usec): 00:11:01.897 | 1.00th=[ 725], 5.00th=[ 2868], 10.00th=[ 4359], 20.00th=[ 8029], 00:11:01.897 | 30.00th=[10028], 40.00th=[10814], 50.00th=[11863], 60.00th=[13698], 00:11:01.897 | 70.00th=[16057], 80.00th=[20317], 90.00th=[24511], 95.00th=[41681], 00:11:01.897 | 99.00th=[62129], 99.50th=[78119], 99.90th=[78119], 99.95th=[78119], 00:11:01.897 | 99.99th=[78119] 00:11:01.897 bw ( KiB/s): min=12288, max=20480, per=29.01%, avg=16384.00, stdev=5792.62, samples=2 00:11:01.897 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:11:01.897 lat (usec) : 500=0.01%, 750=0.87%, 1000=0.96% 00:11:01.897 lat (msec) : 4=3.06%, 10=14.77%, 20=58.41%, 50=19.89%, 100=2.03% 00:11:01.897 cpu : usr=4.64%, sys=6.62%, ctx=303, majf=0, minf=1 00:11:01.897 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:01.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.897 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:01.897 issued rwts: total=4096,4166,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.897 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:01.897 job1: (groupid=0, jobs=1): err= 0: pid=1518662: Sun Oct 13 01:21:47 2024 00:11:01.897 read: IOPS=3545, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1011msec) 00:11:01.897 slat (usec): min=2, max=18417, avg=111.12, stdev=822.08 00:11:01.897 clat (usec): min=3966, max=41280, avg=13774.97, stdev=5477.04 00:11:01.897 lat (usec): min=3970, max=41322, avg=13886.09, stdev=5549.25 00:11:01.897 clat percentiles (usec): 00:11:01.897 | 1.00th=[ 6325], 5.00th=[ 8848], 10.00th=[ 9241], 20.00th=[10159], 00:11:01.897 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11469], 60.00th=[12387], 00:11:01.897 | 70.00th=[13960], 80.00th=[19006], 90.00th=[23200], 95.00th=[24511], 00:11:01.897 | 99.00th=[32113], 99.50th=[34341], 99.90th=[34341], 99.95th=[34866], 00:11:01.897 | 99.99th=[41157] 00:11:01.897 write: IOPS=4001, BW=15.6MiB/s (16.4MB/s)(15.8MiB/1011msec); 0 zone resets 00:11:01.897 slat (usec): min=3, max=11816, avg=129.32, stdev=690.76 00:11:01.897 clat (usec): 
min=924, max=73469, avg=19494.77, stdev=15281.92 00:11:01.897 lat (usec): min=933, max=73481, avg=19624.09, stdev=15369.71 00:11:01.897 clat percentiles (usec): 00:11:01.897 | 1.00th=[ 4146], 5.00th=[ 6128], 10.00th=[ 8586], 20.00th=[10421], 00:11:01.897 | 30.00th=[10683], 40.00th=[11469], 50.00th=[11731], 60.00th=[12911], 00:11:01.897 | 70.00th=[21890], 80.00th=[26870], 90.00th=[40109], 95.00th=[57934], 00:11:01.897 | 99.00th=[70779], 99.50th=[71828], 99.90th=[73925], 99.95th=[73925], 00:11:01.897 | 99.99th=[73925] 00:11:01.897 bw ( KiB/s): min=10872, max=20480, per=27.76%, avg=15676.00, stdev=6793.88, samples=2 00:11:01.897 iops : min= 2718, max= 5120, avg=3919.00, stdev=1698.47, samples=2 00:11:01.897 lat (usec) : 1000=0.05% 00:11:01.897 lat (msec) : 4=0.50%, 10=16.41%, 20=56.32%, 50=22.80%, 100=3.92% 00:11:01.897 cpu : usr=4.55%, sys=7.92%, ctx=409, majf=0, minf=1 00:11:01.897 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:01.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.897 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:01.897 issued rwts: total=3584,4046,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.897 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:01.897 job2: (groupid=0, jobs=1): err= 0: pid=1518663: Sun Oct 13 01:21:47 2024 00:11:01.897 read: IOPS=2575, BW=10.1MiB/s (10.5MB/s)(10.5MiB/1043msec) 00:11:01.897 slat (usec): min=2, max=46430, avg=149.38, stdev=1271.51 00:11:01.897 clat (msec): min=9, max=103, avg=22.02, stdev=16.10 00:11:01.897 lat (msec): min=9, max=103, avg=22.17, stdev=16.19 00:11:01.897 clat percentiles (msec): 00:11:01.897 | 1.00th=[ 10], 5.00th=[ 13], 10.00th=[ 14], 20.00th=[ 14], 00:11:01.897 | 30.00th=[ 15], 40.00th=[ 16], 50.00th=[ 16], 60.00th=[ 17], 00:11:01.897 | 70.00th=[ 18], 80.00th=[ 22], 90.00th=[ 51], 95.00th=[ 65], 00:11:01.897 | 99.00th=[ 81], 99.50th=[ 81], 99.90th=[ 81], 99.95th=[ 83], 00:11:01.897 | 99.99th=[ 104] 00:11:01.897 write: IOPS=2945, BW=11.5MiB/s (12.1MB/s)(12.0MiB/1043msec); 0 zone resets 00:11:01.897 slat (usec): min=4, max=22463, avg=182.70, stdev=1003.21 00:11:01.897 clat (usec): min=7329, max=65779, avg=23581.96, stdev=10987.49 00:11:01.897 lat (usec): min=7346, max=65820, avg=23764.66, stdev=11058.54 00:11:01.897 clat percentiles (usec): 00:11:01.897 | 1.00th=[12780], 5.00th=[13304], 10.00th=[13829], 20.00th=[14746], 00:11:01.897 | 30.00th=[15664], 40.00th=[18744], 50.00th=[21103], 60.00th=[22152], 00:11:01.897 | 70.00th=[25297], 80.00th=[29230], 90.00th=[40633], 95.00th=[52691], 00:11:01.897 | 99.00th=[56886], 99.50th=[56886], 99.90th=[56886], 99.95th=[56886], 00:11:01.897 | 99.99th=[65799] 00:11:01.897 bw ( KiB/s): min=12280, max=12288, per=21.75%, avg=12284.00, stdev= 5.66, samples=2 00:11:01.897 iops : min= 3070, max= 3072, avg=3071.00, stdev= 1.41, samples=2 00:11:01.897 lat (msec) : 10=0.76%, 20=58.96%, 50=32.56%, 100=7.69%, 250=0.02% 00:11:01.897 cpu : usr=4.61%, sys=7.77%, ctx=322, majf=0, minf=1 00:11:01.897 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:11:01.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.897 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:01.897 issued rwts: total=2686,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.897 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:01.897 job3: (groupid=0, jobs=1): err= 0: pid=1518664: Sun Oct 13 01:21:47 2024 00:11:01.897 read: 
IOPS=3041, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1010msec) 00:11:01.897 slat (usec): min=2, max=18575, avg=148.81, stdev=1023.33 00:11:01.897 clat (usec): min=5246, max=63650, avg=18280.76, stdev=11473.27 00:11:01.897 lat (usec): min=5266, max=63659, avg=18429.56, stdev=11550.73 00:11:01.897 clat percentiles (usec): 00:11:01.897 | 1.00th=[ 6718], 5.00th=[10159], 10.00th=[10552], 20.00th=[11207], 00:11:01.897 | 30.00th=[11731], 40.00th=[12518], 50.00th=[13566], 60.00th=[15008], 00:11:01.897 | 70.00th=[17433], 80.00th=[23462], 90.00th=[35914], 95.00th=[44827], 00:11:01.897 | 99.00th=[63701], 99.50th=[63701], 99.90th=[63701], 99.95th=[63701], 00:11:01.897 | 99.99th=[63701] 00:11:01.897 write: IOPS=3408, BW=13.3MiB/s (14.0MB/s)(13.4MiB/1010msec); 0 zone resets 00:11:01.897 slat (usec): min=3, max=25476, avg=149.10, stdev=931.43 00:11:01.897 clat (usec): min=2493, max=58563, avg=20452.06, stdev=11315.55 00:11:01.897 lat (usec): min=2693, max=58578, avg=20601.16, stdev=11398.69 00:11:01.897 clat percentiles (usec): 00:11:01.897 | 1.00th=[ 4359], 5.00th=[ 6849], 10.00th=[10290], 20.00th=[11207], 00:11:01.897 | 30.00th=[11600], 40.00th=[12649], 50.00th=[16450], 60.00th=[20317], 00:11:01.898 | 70.00th=[27657], 80.00th=[31589], 90.00th=[38011], 95.00th=[43779], 00:11:01.898 | 99.00th=[45351], 99.50th=[45876], 99.90th=[47449], 99.95th=[47449], 00:11:01.898 | 99.99th=[58459] 00:11:01.898 bw ( KiB/s): min=12288, max=14232, per=23.48%, avg=13260.00, stdev=1374.62, samples=2 00:11:01.898 iops : min= 3072, max= 3558, avg=3315.00, stdev=343.65, samples=2 00:11:01.898 lat (msec) : 4=0.23%, 10=6.29%, 20=59.77%, 50=32.16%, 100=1.55% 00:11:01.898 cpu : usr=4.46%, sys=7.43%, ctx=369, majf=0, minf=1 00:11:01.898 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:11:01.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.898 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:01.898 issued rwts: total=3072,3443,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.898 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:01.898 00:11:01.898 Run status group 0 (all jobs): 00:11:01.898 READ: bw=50.3MiB/s (52.8MB/s), 10.1MiB/s-15.8MiB/s (10.5MB/s-16.6MB/s), io=52.5MiB (55.0MB), run=1010-1043msec 00:11:01.898 WRITE: bw=55.2MiB/s (57.8MB/s), 11.5MiB/s-16.1MiB/s (12.1MB/s-16.8MB/s), io=57.5MiB (60.3MB), run=1010-1043msec 00:11:01.898 00:11:01.898 Disk stats (read/write): 00:11:01.898 nvme0n1: ios=3634/3879, merge=0/0, ticks=41408/41904, in_queue=83312, util=86.77% 00:11:01.898 nvme0n2: ios=3111/3399, merge=0/0, ticks=40777/61713, in_queue=102490, util=96.65% 00:11:01.898 nvme0n3: ios=2420/2560, merge=0/0, ticks=23106/27465, in_queue=50571, util=88.95% 00:11:01.898 nvme0n4: ios=2560/2735, merge=0/0, ticks=32142/42694, in_queue=74836, util=88.34% 00:11:01.898 01:21:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:01.898 [global] 00:11:01.898 thread=1 00:11:01.898 invalidate=1 00:11:01.898 rw=randwrite 00:11:01.898 time_based=1 00:11:01.898 runtime=1 00:11:01.898 ioengine=libaio 00:11:01.898 direct=1 00:11:01.898 bs=4096 00:11:01.898 iodepth=128 00:11:01.898 norandommap=0 00:11:01.898 numjobs=1 00:11:01.898 00:11:01.898 verify_dump=1 00:11:01.898 verify_backlog=512 00:11:01.898 verify_state_save=0 00:11:01.898 do_verify=1 00:11:01.898 verify=crc32c-intel 00:11:01.898 [job0] 00:11:01.898 
filename=/dev/nvme0n1 00:11:01.898 [job1] 00:11:01.898 filename=/dev/nvme0n2 00:11:01.898 [job2] 00:11:01.898 filename=/dev/nvme0n3 00:11:01.898 [job3] 00:11:01.898 filename=/dev/nvme0n4 00:11:01.898 Could not set queue depth (nvme0n1) 00:11:01.898 Could not set queue depth (nvme0n2) 00:11:01.898 Could not set queue depth (nvme0n3) 00:11:01.898 Could not set queue depth (nvme0n4) 00:11:01.898 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:01.898 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:01.898 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:01.898 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:01.898 fio-3.35 00:11:01.898 Starting 4 threads 00:11:03.271 00:11:03.271 job0: (groupid=0, jobs=1): err= 0: pid=1518897: Sun Oct 13 01:21:48 2024 00:11:03.271 read: IOPS=3391, BW=13.2MiB/s (13.9MB/s)(13.3MiB/1003msec) 00:11:03.271 slat (usec): min=3, max=19934, avg=127.48, stdev=768.84 00:11:03.271 clat (usec): min=1950, max=57822, avg=15238.64, stdev=5881.11 00:11:03.271 lat (usec): min=1957, max=57841, avg=15366.12, stdev=5952.53 00:11:03.271 clat percentiles (usec): 00:11:03.271 | 1.00th=[ 7046], 5.00th=[ 9110], 10.00th=[10159], 20.00th=[10945], 00:11:03.271 | 30.00th=[12256], 40.00th=[13960], 50.00th=[14746], 60.00th=[15533], 00:11:03.271 | 70.00th=[16909], 80.00th=[18220], 90.00th=[20579], 95.00th=[21627], 00:11:03.271 | 99.00th=[46400], 99.50th=[53216], 99.90th=[57934], 99.95th=[57934], 00:11:03.271 | 99.99th=[57934] 00:11:03.271 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:11:03.271 slat (usec): min=4, max=14663, avg=147.41, stdev=839.84 00:11:03.271 clat (usec): min=6579, max=77515, avg=20834.68, stdev=13162.08 00:11:03.271 lat (usec): min=6593, max=77551, avg=20982.09, stdev=13250.20 00:11:03.271 clat percentiles (usec): 00:11:03.271 | 1.00th=[ 8356], 5.00th=[10683], 10.00th=[11207], 20.00th=[11863], 00:11:03.271 | 30.00th=[13960], 40.00th=[14484], 50.00th=[16909], 60.00th=[20579], 00:11:03.271 | 70.00th=[21890], 80.00th=[23987], 90.00th=[30278], 95.00th=[58459], 00:11:03.271 | 99.00th=[70779], 99.50th=[73925], 99.90th=[77071], 99.95th=[77071], 00:11:03.271 | 99.99th=[77071] 00:11:03.271 bw ( KiB/s): min=12288, max=16384, per=25.21%, avg=14336.00, stdev=2896.31, samples=2 00:11:03.271 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:11:03.271 lat (msec) : 2=0.09%, 4=0.33%, 10=5.63%, 20=67.16%, 50=22.70% 00:11:03.271 lat (msec) : 100=4.09% 00:11:03.271 cpu : usr=4.89%, sys=7.78%, ctx=390, majf=0, minf=1 00:11:03.271 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:03.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.271 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:03.271 issued rwts: total=3402,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.271 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:03.271 job1: (groupid=0, jobs=1): err= 0: pid=1518898: Sun Oct 13 01:21:48 2024 00:11:03.271 read: IOPS=2070, BW=8281KiB/s (8480kB/s)(8364KiB/1010msec) 00:11:03.271 slat (usec): min=2, max=37524, avg=274.83, stdev=2058.06 00:11:03.271 clat (usec): min=1694, max=93049, avg=33021.26, stdev=20988.24 00:11:03.271 lat (usec): min=6355, max=93083, avg=33296.09, 
stdev=21134.01 00:11:03.271 clat percentiles (usec): 00:11:03.271 | 1.00th=[ 6652], 5.00th=[11338], 10.00th=[14091], 20.00th=[15664], 00:11:03.271 | 30.00th=[17171], 40.00th=[18220], 50.00th=[20841], 60.00th=[32637], 00:11:03.271 | 70.00th=[47449], 80.00th=[54264], 90.00th=[65274], 95.00th=[74974], 00:11:03.271 | 99.00th=[76022], 99.50th=[76022], 99.90th=[92799], 99.95th=[92799], 00:11:03.271 | 99.99th=[92799] 00:11:03.271 write: IOPS=2534, BW=9.90MiB/s (10.4MB/s)(10.0MiB/1010msec); 0 zone resets 00:11:03.271 slat (usec): min=3, max=16313, avg=150.63, stdev=1022.67 00:11:03.271 clat (usec): min=6846, max=78169, avg=22818.90, stdev=12203.10 00:11:03.271 lat (usec): min=6857, max=78180, avg=22969.53, stdev=12265.54 00:11:03.271 clat percentiles (usec): 00:11:03.271 | 1.00th=[ 8225], 5.00th=[ 8848], 10.00th=[ 9765], 20.00th=[11994], 00:11:03.271 | 30.00th=[15270], 40.00th=[18220], 50.00th=[19792], 60.00th=[22152], 00:11:03.271 | 70.00th=[25035], 80.00th=[28705], 90.00th=[41157], 95.00th=[55313], 00:11:03.271 | 99.00th=[59507], 99.50th=[59507], 99.90th=[59507], 99.95th=[62129], 00:11:03.271 | 99.99th=[78119] 00:11:03.271 bw ( KiB/s): min= 9488, max=10312, per=17.41%, avg=9900.00, stdev=582.66, samples=2 00:11:03.271 iops : min= 2372, max= 2578, avg=2475.00, stdev=145.66, samples=2 00:11:03.271 lat (msec) : 2=0.02%, 10=7.46%, 20=44.68%, 50=33.05%, 100=14.79% 00:11:03.271 cpu : usr=2.18%, sys=2.78%, ctx=178, majf=0, minf=1 00:11:03.271 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:11:03.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.271 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:03.271 issued rwts: total=2091,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.271 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:03.271 job2: (groupid=0, jobs=1): err= 0: pid=1518899: Sun Oct 13 01:21:48 2024 00:11:03.271 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:11:03.271 slat (usec): min=2, max=16103, avg=121.31, stdev=770.35 00:11:03.271 clat (usec): min=4737, max=39775, avg=15851.32, stdev=5501.06 00:11:03.271 lat (usec): min=4740, max=39786, avg=15972.63, stdev=5545.32 00:11:03.271 clat percentiles (usec): 00:11:03.271 | 1.00th=[ 8291], 5.00th=[10028], 10.00th=[11600], 20.00th=[12649], 00:11:03.271 | 30.00th=[13042], 40.00th=[13435], 50.00th=[13960], 60.00th=[15139], 00:11:03.271 | 70.00th=[16450], 80.00th=[18482], 90.00th=[21627], 95.00th=[23200], 00:11:03.271 | 99.00th=[38536], 99.50th=[38536], 99.90th=[39584], 99.95th=[39584], 00:11:03.271 | 99.99th=[39584] 00:11:03.271 write: IOPS=3599, BW=14.1MiB/s (14.7MB/s)(14.1MiB/1002msec); 0 zone resets 00:11:03.271 slat (usec): min=3, max=13779, avg=138.27, stdev=891.25 00:11:03.271 clat (usec): min=305, max=81471, avg=19042.84, stdev=11875.85 00:11:03.271 lat (usec): min=1711, max=81484, avg=19181.11, stdev=11909.94 00:11:03.271 clat percentiles (usec): 00:11:03.271 | 1.00th=[ 5014], 5.00th=[ 9372], 10.00th=[11731], 20.00th=[12911], 00:11:03.271 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13566], 60.00th=[15401], 00:11:03.271 | 70.00th=[21103], 80.00th=[25035], 90.00th=[32113], 95.00th=[43254], 00:11:03.271 | 99.00th=[71828], 99.50th=[78119], 99.90th=[81265], 99.95th=[81265], 00:11:03.271 | 99.99th=[81265] 00:11:03.271 bw ( KiB/s): min=12288, max=16384, per=25.21%, avg=14336.00, stdev=2896.31, samples=2 00:11:03.271 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:11:03.271 lat (usec) : 500=0.01% 
00:11:03.271 lat (msec) : 2=0.14%, 4=0.01%, 10=5.62%, 20=69.81%, 50=22.89% 00:11:03.271 lat (msec) : 100=1.52% 00:11:03.271 cpu : usr=3.90%, sys=4.60%, ctx=275, majf=0, minf=1 00:11:03.271 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:11:03.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.271 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:03.271 issued rwts: total=3584,3607,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.271 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:03.271 job3: (groupid=0, jobs=1): err= 0: pid=1518900: Sun Oct 13 01:21:48 2024 00:11:03.271 read: IOPS=4526, BW=17.7MiB/s (18.5MB/s)(17.8MiB/1009msec) 00:11:03.271 slat (usec): min=2, max=14770, avg=108.42, stdev=740.16 00:11:03.271 clat (usec): min=3734, max=51311, avg=13985.56, stdev=5522.66 00:11:03.271 lat (usec): min=3746, max=51318, avg=14093.98, stdev=5572.64 00:11:03.271 clat percentiles (usec): 00:11:03.271 | 1.00th=[ 6390], 5.00th=[ 9241], 10.00th=[10290], 20.00th=[10814], 00:11:03.271 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12387], 60.00th=[13042], 00:11:03.271 | 70.00th=[13960], 80.00th=[15270], 90.00th=[19268], 95.00th=[25560], 00:11:03.271 | 99.00th=[36439], 99.50th=[45351], 99.90th=[51119], 99.95th=[51119], 00:11:03.271 | 99.99th=[51119] 00:11:03.271 write: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec); 0 zone resets 00:11:03.271 slat (usec): min=3, max=21639, avg=98.16, stdev=613.20 00:11:03.271 clat (usec): min=1428, max=51312, avg=13804.44, stdev=6642.76 00:11:03.271 lat (usec): min=1448, max=51324, avg=13902.60, stdev=6698.02 00:11:03.271 clat percentiles (usec): 00:11:03.271 | 1.00th=[ 4621], 5.00th=[ 6783], 10.00th=[ 8586], 20.00th=[10028], 00:11:03.271 | 30.00th=[10945], 40.00th=[11469], 50.00th=[11994], 60.00th=[12387], 00:11:03.271 | 70.00th=[12780], 80.00th=[18220], 90.00th=[23200], 95.00th=[28181], 00:11:03.271 | 99.00th=[39060], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:11:03.271 | 99.99th=[51119] 00:11:03.271 bw ( KiB/s): min=16384, max=20480, per=32.41%, avg=18432.00, stdev=2896.31, samples=2 00:11:03.271 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:11:03.271 lat (msec) : 2=0.02%, 4=0.34%, 10=12.81%, 20=75.68%, 50=10.99% 00:11:03.271 lat (msec) : 100=0.16% 00:11:03.271 cpu : usr=6.65%, sys=10.52%, ctx=412, majf=0, minf=1 00:11:03.271 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:11:03.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.271 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:03.271 issued rwts: total=4567,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.271 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:03.271 00:11:03.271 Run status group 0 (all jobs): 00:11:03.271 READ: bw=52.8MiB/s (55.3MB/s), 8281KiB/s-17.7MiB/s (8480kB/s-18.5MB/s), io=53.3MiB (55.9MB), run=1002-1010msec 00:11:03.271 WRITE: bw=55.5MiB/s (58.2MB/s), 9.90MiB/s-17.8MiB/s (10.4MB/s-18.7MB/s), io=56.1MiB (58.8MB), run=1002-1010msec 00:11:03.271 00:11:03.271 Disk stats (read/write): 00:11:03.271 nvme0n1: ios=2595/2879, merge=0/0, ticks=22264/28887, in_queue=51151, util=96.39% 00:11:03.271 nvme0n2: ios=1690/2048, merge=0/0, ticks=23955/16066, in_queue=40021, util=95.74% 00:11:03.271 nvme0n3: ios=2853/3072, merge=0/0, ticks=27816/31625, in_queue=59441, util=99.69% 00:11:03.271 nvme0n4: ios=4154/4223, merge=0/0, ticks=51147/47598, in_queue=98745, 
util=98.01% 00:11:03.271 01:21:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:03.272 01:21:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1519036 00:11:03.272 01:21:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:03.272 01:21:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:03.272 [global] 00:11:03.272 thread=1 00:11:03.272 invalidate=1 00:11:03.272 rw=read 00:11:03.272 time_based=1 00:11:03.272 runtime=10 00:11:03.272 ioengine=libaio 00:11:03.272 direct=1 00:11:03.272 bs=4096 00:11:03.272 iodepth=1 00:11:03.272 norandommap=1 00:11:03.272 numjobs=1 00:11:03.272 00:11:03.272 [job0] 00:11:03.272 filename=/dev/nvme0n1 00:11:03.272 [job1] 00:11:03.272 filename=/dev/nvme0n2 00:11:03.272 [job2] 00:11:03.272 filename=/dev/nvme0n3 00:11:03.272 [job3] 00:11:03.272 filename=/dev/nvme0n4 00:11:03.272 Could not set queue depth (nvme0n1) 00:11:03.272 Could not set queue depth (nvme0n2) 00:11:03.272 Could not set queue depth (nvme0n3) 00:11:03.272 Could not set queue depth (nvme0n4) 00:11:03.272 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:03.272 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:03.272 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:03.272 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:03.272 fio-3.35 00:11:03.272 Starting 4 threads 00:11:06.552 01:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:06.552 01:21:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:06.552 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=19775488, buflen=4096 00:11:06.552 fio: pid=1519250, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:06.809 01:21:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:06.809 01:21:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:06.809 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=1515520, buflen=4096 00:11:06.809 fio: pid=1519249, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:07.068 01:21:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:07.068 01:21:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:07.068 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=54980608, buflen=4096 00:11:07.068 fio: pid=1519247, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:07.326 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=10330112, buflen=4096 00:11:07.326 fio: 
pid=1519248, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:07.326 01:21:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:07.326 01:21:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:07.326 00:11:07.326 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1519247: Sun Oct 13 01:21:52 2024 00:11:07.326 read: IOPS=3769, BW=14.7MiB/s (15.4MB/s)(52.4MiB/3561msec) 00:11:07.326 slat (usec): min=3, max=24228, avg=12.63, stdev=231.22 00:11:07.326 clat (usec): min=160, max=41662, avg=248.39, stdev=1116.33 00:11:07.326 lat (usec): min=164, max=41666, avg=261.02, stdev=1140.43 00:11:07.326 clat percentiles (usec): 00:11:07.326 | 1.00th=[ 172], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 186], 00:11:07.326 | 30.00th=[ 192], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 208], 00:11:07.326 | 70.00th=[ 217], 80.00th=[ 229], 90.00th=[ 281], 95.00th=[ 330], 00:11:07.326 | 99.00th=[ 404], 99.50th=[ 453], 99.90th=[ 898], 99.95th=[41157], 00:11:07.326 | 99.99th=[41157] 00:11:07.326 bw ( KiB/s): min= 9568, max=19320, per=73.45%, avg=16256.00, stdev=3514.58, samples=6 00:11:07.326 iops : min= 2392, max= 4830, avg=4064.00, stdev=878.65, samples=6 00:11:07.326 lat (usec) : 250=84.89%, 500=14.85%, 750=0.16%, 1000=0.01% 00:11:07.326 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 50=0.07% 00:11:07.326 cpu : usr=1.57%, sys=4.21%, ctx=13428, majf=0, minf=2 00:11:07.326 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.326 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.326 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.326 issued rwts: total=13424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.326 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.326 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1519248: Sun Oct 13 01:21:52 2024 00:11:07.326 read: IOPS=660, BW=2640KiB/s (2704kB/s)(9.85MiB/3821msec) 00:11:07.326 slat (usec): min=3, max=15072, avg=26.41, stdev=446.57 00:11:07.326 clat (usec): min=164, max=45922, avg=1475.15, stdev=6983.07 00:11:07.326 lat (usec): min=175, max=50987, avg=1501.56, stdev=7017.79 00:11:07.326 clat percentiles (usec): 00:11:07.326 | 1.00th=[ 178], 5.00th=[ 186], 10.00th=[ 188], 20.00th=[ 194], 00:11:07.326 | 30.00th=[ 198], 40.00th=[ 204], 50.00th=[ 212], 60.00th=[ 245], 00:11:07.326 | 70.00th=[ 262], 80.00th=[ 285], 90.00th=[ 424], 95.00th=[ 510], 00:11:07.326 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:07.326 | 99.99th=[45876] 00:11:07.326 bw ( KiB/s): min= 96, max=13076, per=8.82%, avg=1952.57, stdev=4904.97, samples=7 00:11:07.326 iops : min= 24, max= 3269, avg=488.14, stdev=1226.24, samples=7 00:11:07.326 lat (usec) : 250=62.70%, 500=31.47%, 750=2.73% 00:11:07.326 lat (msec) : 2=0.04%, 20=0.04%, 50=2.97% 00:11:07.326 cpu : usr=0.24%, sys=0.52%, ctx=2528, majf=0, minf=2 00:11:07.326 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.326 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.326 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.326 issued rwts: total=2523,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:11:07.326 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.326 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1519249: Sun Oct 13 01:21:52 2024 00:11:07.326 read: IOPS=113, BW=452KiB/s (462kB/s)(1480KiB/3277msec) 00:11:07.326 slat (usec): min=4, max=7915, avg=35.46, stdev=410.31 00:11:07.326 clat (usec): min=197, max=42098, avg=8756.16, stdev=16793.34 00:11:07.326 lat (usec): min=207, max=49017, avg=8791.66, stdev=16843.62 00:11:07.326 clat percentiles (usec): 00:11:07.326 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 208], 20.00th=[ 212], 00:11:07.326 | 30.00th=[ 217], 40.00th=[ 219], 50.00th=[ 223], 60.00th=[ 229], 00:11:07.326 | 70.00th=[ 239], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:11:07.326 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:07.326 | 99.99th=[42206] 00:11:07.326 bw ( KiB/s): min= 96, max= 2336, per=2.19%, avg=485.33, stdev=906.93, samples=6 00:11:07.326 iops : min= 24, max= 584, avg=121.33, stdev=226.73, samples=6 00:11:07.326 lat (usec) : 250=72.51%, 500=6.47% 00:11:07.326 lat (msec) : 2=0.27%, 50=20.49% 00:11:07.326 cpu : usr=0.06%, sys=0.21%, ctx=372, majf=0, minf=1 00:11:07.326 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.326 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.326 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.326 issued rwts: total=371,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.326 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.327 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1519250: Sun Oct 13 01:21:52 2024 00:11:07.327 read: IOPS=1634, BW=6538KiB/s (6694kB/s)(18.9MiB/2954msec) 00:11:07.327 slat (nsec): min=5473, max=55079, avg=14007.05, stdev=5795.11 00:11:07.327 clat (usec): min=194, max=41379, avg=589.64, stdev=3499.34 00:11:07.327 lat (usec): min=201, max=41398, avg=603.65, stdev=3499.84 00:11:07.327 clat percentiles (usec): 00:11:07.327 | 1.00th=[ 219], 5.00th=[ 243], 10.00th=[ 253], 20.00th=[ 265], 00:11:07.327 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 289], 60.00th=[ 293], 00:11:07.327 | 70.00th=[ 297], 80.00th=[ 306], 90.00th=[ 322], 95.00th=[ 334], 00:11:07.327 | 99.00th=[ 379], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:07.327 | 99.99th=[41157] 00:11:07.327 bw ( KiB/s): min= 192, max=13336, per=34.81%, avg=7705.60, stdev=4924.87, samples=5 00:11:07.327 iops : min= 48, max= 3334, avg=1926.40, stdev=1231.22, samples=5 00:11:07.327 lat (usec) : 250=8.84%, 500=90.39% 00:11:07.327 lat (msec) : 50=0.75% 00:11:07.327 cpu : usr=1.69%, sys=3.35%, ctx=4829, majf=0, minf=1 00:11:07.327 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.327 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.327 issued rwts: total=4829,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.327 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.327 00:11:07.327 Run status group 0 (all jobs): 00:11:07.327 READ: bw=21.6MiB/s (22.7MB/s), 452KiB/s-14.7MiB/s (462kB/s-15.4MB/s), io=82.6MiB (86.6MB), run=2954-3821msec 00:11:07.327 00:11:07.327 Disk stats (read/write): 00:11:07.327 nvme0n1: ios=13418/0, merge=0/0, ticks=3006/0, in_queue=3006, util=94.91% 00:11:07.327 nvme0n2: ios=2042/0, merge=0/0, 
ticks=3536/0, in_queue=3536, util=95.85% 00:11:07.327 nvme0n3: ios=366/0, merge=0/0, ticks=3073/0, in_queue=3073, util=96.79% 00:11:07.327 nvme0n4: ios=4826/0, merge=0/0, ticks=2697/0, in_queue=2697, util=96.72% 00:11:07.585 01:21:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:07.585 01:21:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:07.842 01:21:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:07.842 01:21:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:08.100 01:21:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:08.100 01:21:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:08.358 01:21:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:08.358 01:21:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:08.616 01:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:08.616 01:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1519036 00:11:08.874 01:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:08.874 01:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:08.874 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.874 01:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:08.874 01:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:11:08.874 01:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:08.874 01:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:08.874 01:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:08.874 01:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:08.874 01:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:11:08.874 01:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:08.874 01:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:08.874 nvmf hotplug test: fio failed as expected 00:11:08.874 01:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:09.132 01:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:09.132 01:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:09.132 01:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:09.132 01:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:09.132 01:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:09.132 01:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:09.132 01:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:09.132 01:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:09.132 01:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:09.132 01:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:09.132 01:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:09.132 rmmod nvme_tcp 00:11:09.132 rmmod nvme_fabrics 00:11:09.132 rmmod nvme_keyring 00:11:09.132 01:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:09.132 01:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:09.132 01:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:09.132 01:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 1517007 ']' 00:11:09.132 01:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 1517007 00:11:09.132 01:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 1517007 ']' 00:11:09.132 01:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 1517007 00:11:09.132 01:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:11:09.132 01:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:09.132 01:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1517007 00:11:09.393 01:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:09.393 01:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:09.393 01:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1517007' 00:11:09.393 killing process with pid 1517007 00:11:09.393 01:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 1517007 00:11:09.393 01:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 1517007 00:11:09.393 01:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:09.393 01:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:09.393 01:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:09.393 01:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:09.393 01:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@789 -- # iptables-save 00:11:09.393 01:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:09.393 01:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:11:09.393 01:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:09.393 01:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:09.393 01:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.393 01:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:09.393 01:21:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.980 01:21:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:11.980 00:11:11.980 real 0m24.158s 00:11:11.980 user 1m26.015s 00:11:11.980 sys 0m6.611s 00:11:11.980 01:21:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:11.980 01:21:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.980 ************************************ 00:11:11.980 END TEST nvmf_fio_target 00:11:11.980 ************************************ 00:11:11.980 01:21:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:11.980 01:21:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:11.980 01:21:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:11.980 01:21:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:11.980 ************************************ 00:11:11.980 START TEST nvmf_bdevio 00:11:11.980 ************************************ 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:11.980 * Looking for test storage... 
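For orientation before the storage/coverage probing below: the target-side setup that this bdevio run performs appears later in this log as individual rpc_cmd invocations. A minimal sketch of that sequence, reconstructed from those logged calls (the workspace path, NQN, listen address and malloc sizes are the values printed in this run; this is not the verbatim bdevio.sh code):

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc=$rootdir/scripts/rpc.py

  $rpc nvmf_create_transport -t tcp -o -u 8192                               # transport options exactly as logged below
  $rpc bdev_malloc_create 64 512 -b Malloc0                                  # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rootdir/test/bdev/bdevio/bdevio --json /dev/fd/62                         # bdevio app driven by the generated attach-controller JSON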
00:11:11.980 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:11.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.980 --rc genhtml_branch_coverage=1 00:11:11.980 --rc genhtml_function_coverage=1 00:11:11.980 --rc genhtml_legend=1 00:11:11.980 --rc geninfo_all_blocks=1 00:11:11.980 --rc geninfo_unexecuted_blocks=1 00:11:11.980 00:11:11.980 ' 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:11.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.980 --rc genhtml_branch_coverage=1 00:11:11.980 --rc genhtml_function_coverage=1 00:11:11.980 --rc genhtml_legend=1 00:11:11.980 --rc geninfo_all_blocks=1 00:11:11.980 --rc geninfo_unexecuted_blocks=1 00:11:11.980 00:11:11.980 ' 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:11.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.980 --rc genhtml_branch_coverage=1 00:11:11.980 --rc genhtml_function_coverage=1 00:11:11.980 --rc genhtml_legend=1 00:11:11.980 --rc geninfo_all_blocks=1 00:11:11.980 --rc geninfo_unexecuted_blocks=1 00:11:11.980 00:11:11.980 ' 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:11.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.980 --rc genhtml_branch_coverage=1 00:11:11.980 --rc genhtml_function_coverage=1 00:11:11.980 --rc genhtml_legend=1 00:11:11.980 --rc geninfo_all_blocks=1 00:11:11.980 --rc geninfo_unexecuted_blocks=1 00:11:11.980 00:11:11.980 ' 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.980 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:11.981 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.981 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:11.981 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:11.981 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:11.981 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:11.981 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:11.981 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:11.981 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:11.981 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:11.981 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:11.981 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:11.981 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:11.981 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:11.981 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:11.981 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:11:11.981 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:11.981 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:11.981 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:11.981 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:11.981 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:11.981 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:11.981 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:11.981 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.981 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:11.981 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:11.981 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:11.981 01:21:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:13.899 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:13.899 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:13.899 01:21:59 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:13.899 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:13.899 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:13.899 
01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:13.899 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:14.158 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:14.158 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:14.158 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:14.158 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:14.158 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:14.158 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:11:14.158 00:11:14.158 --- 10.0.0.2 ping statistics --- 00:11:14.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:14.158 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:11:14.158 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:14.158 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:14.158 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:11:14.158 00:11:14.158 --- 10.0.0.1 ping statistics --- 00:11:14.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:14.158 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:11:14.158 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:14.158 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:11:14.158 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:14.158 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:14.158 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:14.158 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:14.158 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:14.158 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:14.158 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:14.158 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:14.158 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:14.158 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:14.158 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:14.158 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=1521894 00:11:14.158 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:14.158 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 1521894 00:11:14.158 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 1521894 ']' 00:11:14.158 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.158 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:14.158 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.158 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:14.158 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:14.158 [2024-10-13 01:21:59.570624] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
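The connectivity check just completed summarizes the nvmf_tcp_init wiring used for the rest of this run: the target-side port cvl_0_0 is moved into a private namespace with 10.0.0.2, the initiator keeps cvl_0_1 with 10.0.0.1, an iptables ACCEPT rule opens TCP/4420, and both directions are pinged. A condensed sketch of those steps, using exactly the interface names and addresses logged above (not the verbatim common.sh code; the iptables comment is abbreviated):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target NIC into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  ping -c 1 10.0.0.2                                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator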
00:11:14.158 [2024-10-13 01:21:59.570705] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:14.158 [2024-10-13 01:21:59.637668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:14.158 [2024-10-13 01:21:59.691071] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:14.158 [2024-10-13 01:21:59.691136] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:14.158 [2024-10-13 01:21:59.691153] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:14.158 [2024-10-13 01:21:59.691168] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:14.158 [2024-10-13 01:21:59.691179] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:14.158 [2024-10-13 01:21:59.692958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:14.158 [2024-10-13 01:21:59.693014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:14.158 [2024-10-13 01:21:59.693072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:14.158 [2024-10-13 01:21:59.693075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:14.417 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:14.417 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:11:14.417 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:14.417 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:14.417 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:14.417 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:14.417 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:14.417 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.417 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:14.417 [2024-10-13 01:21:59.865252] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:14.417 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.417 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:14.417 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.417 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:14.417 Malloc0 00:11:14.417 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.417 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:14.417 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.417 01:21:59 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:14.417 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.417 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:14.417 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.417 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:14.417 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.417 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:14.417 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.417 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:14.417 [2024-10-13 01:21:59.938204] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:14.417 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.417 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:14.417 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:14.417 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:11:14.417 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:11:14.417 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:11:14.417 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:11:14.417 { 00:11:14.417 "params": { 00:11:14.417 "name": "Nvme$subsystem", 00:11:14.417 "trtype": "$TEST_TRANSPORT", 00:11:14.417 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:14.417 "adrfam": "ipv4", 00:11:14.417 "trsvcid": "$NVMF_PORT", 00:11:14.417 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:14.417 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:14.417 "hdgst": ${hdgst:-false}, 00:11:14.417 "ddgst": ${ddgst:-false} 00:11:14.417 }, 00:11:14.417 "method": "bdev_nvme_attach_controller" 00:11:14.417 } 00:11:14.417 EOF 00:11:14.417 )") 00:11:14.417 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:11:14.417 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:11:14.417 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:11:14.417 01:21:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:11:14.417 "params": { 00:11:14.417 "name": "Nvme1", 00:11:14.417 "trtype": "tcp", 00:11:14.417 "traddr": "10.0.0.2", 00:11:14.417 "adrfam": "ipv4", 00:11:14.417 "trsvcid": "4420", 00:11:14.417 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:14.417 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:14.417 "hdgst": false, 00:11:14.417 "ddgst": false 00:11:14.417 }, 00:11:14.417 "method": "bdev_nvme_attach_controller" 00:11:14.417 }' 00:11:14.417 [2024-10-13 01:21:59.983346] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
00:11:14.417 [2024-10-13 01:21:59.983433] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1521927 ] 00:11:14.675 [2024-10-13 01:22:00.045882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:14.675 [2024-10-13 01:22:00.097635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:14.675 [2024-10-13 01:22:00.097699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:14.675 [2024-10-13 01:22:00.097703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.933 I/O targets: 00:11:14.933 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:14.933 00:11:14.933 00:11:14.933 CUnit - A unit testing framework for C - Version 2.1-3 00:11:14.933 http://cunit.sourceforge.net/ 00:11:14.933 00:11:14.933 00:11:14.933 Suite: bdevio tests on: Nvme1n1 00:11:14.933 Test: blockdev write read block ...passed 00:11:14.933 Test: blockdev write zeroes read block ...passed 00:11:14.933 Test: blockdev write zeroes read no split ...passed 00:11:14.933 Test: blockdev write zeroes read split ...passed 00:11:14.933 Test: blockdev write zeroes read split partial ...passed 00:11:14.933 Test: blockdev reset ...[2024-10-13 01:22:00.433062] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:14.933 [2024-10-13 01:22:00.433176] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe1ac30 (9): Bad file descriptor 00:11:15.190 [2024-10-13 01:22:00.529986] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:15.190 passed 00:11:15.190 Test: blockdev write read 8 blocks ...passed 00:11:15.190 Test: blockdev write read size > 128k ...passed 00:11:15.190 Test: blockdev write read invalid size ...passed 00:11:15.190 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:15.190 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:15.190 Test: blockdev write read max offset ...passed 00:11:15.190 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:15.190 Test: blockdev writev readv 8 blocks ...passed 00:11:15.190 Test: blockdev writev readv 30 x 1block ...passed 00:11:15.448 Test: blockdev writev readv block ...passed 00:11:15.448 Test: blockdev writev readv size > 128k ...passed 00:11:15.448 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:15.448 Test: blockdev comparev and writev ...[2024-10-13 01:22:00.865592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:15.448 [2024-10-13 01:22:00.865628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:15.448 [2024-10-13 01:22:00.865652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:15.448 [2024-10-13 01:22:00.865669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:15.448 [2024-10-13 01:22:00.865997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:15.448 [2024-10-13 01:22:00.866020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:15.448 [2024-10-13 01:22:00.866042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:15.448 [2024-10-13 01:22:00.866057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:15.448 [2024-10-13 01:22:00.866373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:15.448 [2024-10-13 01:22:00.866397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:15.448 [2024-10-13 01:22:00.866418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:15.448 [2024-10-13 01:22:00.866434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:15.448 [2024-10-13 01:22:00.866768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:15.448 [2024-10-13 01:22:00.866791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:15.448 [2024-10-13 01:22:00.866812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:15.448 [2024-10-13 01:22:00.866827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:15.448 passed 00:11:15.448 Test: blockdev nvme passthru rw ...passed 00:11:15.448 Test: blockdev nvme passthru vendor specific ...[2024-10-13 01:22:00.948732] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:15.448 [2024-10-13 01:22:00.948760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:15.448 [2024-10-13 01:22:00.948911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:15.448 [2024-10-13 01:22:00.948934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:15.448 [2024-10-13 01:22:00.949080] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:15.448 [2024-10-13 01:22:00.949102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:15.448 [2024-10-13 01:22:00.949245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:15.448 [2024-10-13 01:22:00.949268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:15.448 passed 00:11:15.448 Test: blockdev nvme admin passthru ...passed 00:11:15.448 Test: blockdev copy ...passed 00:11:15.448 00:11:15.448 Run Summary: Type Total Ran Passed Failed Inactive 00:11:15.448 suites 1 1 n/a 0 0 00:11:15.448 tests 23 23 23 0 0 00:11:15.448 asserts 152 152 152 0 n/a 00:11:15.448 00:11:15.448 Elapsed time = 1.366 seconds 00:11:15.706 01:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:15.706 01:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.706 01:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:15.706 01:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.706 01:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:15.706 01:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:15.706 01:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:15.706 01:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:15.706 01:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:15.706 01:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:15.706 01:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:15.706 01:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:15.706 rmmod nvme_tcp 00:11:15.706 rmmod nvme_fabrics 00:11:15.706 rmmod nvme_keyring 00:11:15.706 01:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:15.706 01:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:15.706 01:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
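As with the fio target test earlier, nvmftestfini now unwinds the setup: unload the kernel NVMe-oF initiator modules, kill the nvmf_tgt reactor process, strip the SPDK_NVMF-tagged iptables rule, and tear down the test namespace. A condensed sketch with this run's values (pid 1521894, interfaces cvl_0_0/cvl_0_1); the namespace removal is what _remove_spdk_ns amounts to here and is an assumption rather than something shown verbatim:

  modprobe -v -r nvme-tcp nvme-fabrics                         # matches the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above
  kill 1521894                                                 # nvmfpid started for this bdevio run
  iptables-save | grep -v SPDK_NVMF | iptables-restore         # drop the TCP/4420 ACCEPT rule added at init
  ip netns delete cvl_0_0_ns_spdk                              # assumed effect of _remove_spdk_ns
  ip -4 addr flush cvl_0_1                                     # final flush, as logged just below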
00:11:15.706 01:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 1521894 ']' 00:11:15.706 01:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 1521894 00:11:15.706 01:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 1521894 ']' 00:11:15.706 01:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 1521894 00:11:15.706 01:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:11:15.706 01:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:15.706 01:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1521894 00:11:15.706 01:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:11:15.706 01:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:11:15.706 01:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1521894' 00:11:15.706 killing process with pid 1521894 00:11:15.706 01:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 1521894 00:11:15.706 01:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 1521894 00:11:15.965 01:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:15.965 01:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:15.965 01:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:15.965 01:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:15.965 01:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:11:15.965 01:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:15.965 01:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:11:15.965 01:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:15.965 01:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:15.965 01:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:15.965 01:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:15.965 01:22:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:18.493 01:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:18.493 00:11:18.493 real 0m6.552s 00:11:18.493 user 0m10.355s 00:11:18.493 sys 0m2.204s 00:11:18.493 01:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:18.493 01:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:18.493 ************************************ 00:11:18.493 END TEST nvmf_bdevio 00:11:18.493 ************************************ 00:11:18.493 01:22:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:18.493 00:11:18.493 real 3m55.411s 00:11:18.493 user 10m15.274s 00:11:18.493 sys 1m7.075s 
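The bdevio teardown traced above is the generic nvmftestfini path. Consolidated, and assuming the interface and namespace names discovered earlier in this run (cvl_0_1, cvl_0_0_ns_spdk) and the target pid the harness reported, it amounts to roughly the following sketch rather than the harness's exact code:

    # unload the initiator-side kernel modules pulled in for the TCP test
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # stop the nvmf target application started for this test (pid 1521894 above)
    kill 1521894 && wait 1521894

    # drop the SPDK-tagged firewall rules and the test addressing
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip -4 addr flush cvl_0_1
    ip netns delete cvl_0_0_ns_spdk   # assumption: this is what _remove_spdk_ns does for the target namespace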
00:11:18.493 01:22:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:18.493 01:22:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:18.493 ************************************ 00:11:18.493 END TEST nvmf_target_core 00:11:18.493 ************************************ 00:11:18.493 01:22:03 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:18.493 01:22:03 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:18.493 01:22:03 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:18.493 01:22:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:18.493 ************************************ 00:11:18.493 START TEST nvmf_target_extra 00:11:18.493 ************************************ 00:11:18.493 01:22:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:18.493 * Looking for test storage... 00:11:18.493 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:18.493 01:22:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:18.493 01:22:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:11:18.493 01:22:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:18.493 01:22:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:18.493 01:22:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:18.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.494 --rc genhtml_branch_coverage=1 00:11:18.494 --rc genhtml_function_coverage=1 00:11:18.494 --rc genhtml_legend=1 00:11:18.494 --rc geninfo_all_blocks=1 00:11:18.494 --rc geninfo_unexecuted_blocks=1 00:11:18.494 00:11:18.494 ' 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:18.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.494 --rc genhtml_branch_coverage=1 00:11:18.494 --rc genhtml_function_coverage=1 00:11:18.494 --rc genhtml_legend=1 00:11:18.494 --rc geninfo_all_blocks=1 00:11:18.494 --rc geninfo_unexecuted_blocks=1 00:11:18.494 00:11:18.494 ' 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:18.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.494 --rc genhtml_branch_coverage=1 00:11:18.494 --rc genhtml_function_coverage=1 00:11:18.494 --rc genhtml_legend=1 00:11:18.494 --rc geninfo_all_blocks=1 00:11:18.494 --rc geninfo_unexecuted_blocks=1 00:11:18.494 00:11:18.494 ' 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:18.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.494 --rc genhtml_branch_coverage=1 00:11:18.494 --rc genhtml_function_coverage=1 00:11:18.494 --rc genhtml_legend=1 00:11:18.494 --rc geninfo_all_blocks=1 00:11:18.494 --rc geninfo_unexecuted_blocks=1 00:11:18.494 00:11:18.494 ' 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:18.494 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:18.494 ************************************ 00:11:18.494 START TEST nvmf_example 00:11:18.494 ************************************ 00:11:18.494 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:18.495 * Looking for test storage... 
00:11:18.495 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:18.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.495 --rc genhtml_branch_coverage=1 00:11:18.495 --rc genhtml_function_coverage=1 00:11:18.495 --rc genhtml_legend=1 00:11:18.495 --rc geninfo_all_blocks=1 00:11:18.495 --rc geninfo_unexecuted_blocks=1 00:11:18.495 00:11:18.495 ' 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:18.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.495 --rc genhtml_branch_coverage=1 00:11:18.495 --rc genhtml_function_coverage=1 00:11:18.495 --rc genhtml_legend=1 00:11:18.495 --rc geninfo_all_blocks=1 00:11:18.495 --rc geninfo_unexecuted_blocks=1 00:11:18.495 00:11:18.495 ' 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:18.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.495 --rc genhtml_branch_coverage=1 00:11:18.495 --rc genhtml_function_coverage=1 00:11:18.495 --rc genhtml_legend=1 00:11:18.495 --rc geninfo_all_blocks=1 00:11:18.495 --rc geninfo_unexecuted_blocks=1 00:11:18.495 00:11:18.495 ' 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:18.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.495 --rc genhtml_branch_coverage=1 00:11:18.495 --rc genhtml_function_coverage=1 00:11:18.495 --rc genhtml_legend=1 00:11:18.495 --rc geninfo_all_blocks=1 00:11:18.495 --rc geninfo_unexecuted_blocks=1 00:11:18.495 00:11:18.495 ' 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:18.495 01:22:03 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:18.495 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:18.495 01:22:03 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:18.495 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:18.496 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:18.496 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:18.496 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:18.496 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:18.496 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:18.496 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:18.496 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:18.496 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:18.496 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:18.496 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:18.496 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:18.496 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:18.496 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:18.496 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:18.496 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:18.496 01:22:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:21.026 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:21.026 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:21.026 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:21.026 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:21.026 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:21.026 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:21.026 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:21.026 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:21.026 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:21.026 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:21.026 01:22:06 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:21.026 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:21.026 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:21.026 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:11:21.026 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:21.026 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:21.026 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:21.026 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:21.026 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:21.026 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:21.026 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:21.026 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:21.026 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:21.026 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:21.026 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:21.026 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:21.026 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:21.027 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:21.027 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:21.027 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:21.027 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.027 01:22:06 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # is_hw=yes 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:21.027 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
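Interleaved with the xtrace output, the network bring-up above and the example-target bring-up and perf run that follow reduce to a short sequence. The sketch below consolidates it under two assumptions: scripts/rpc.py is called directly in place of the harness's rpc_cmd wrapper, and the port names (cvl_0_0, cvl_0_1) and addresses are the ones discovered in this run:

    # move one port of the e810 NIC into a private namespace for the target,
    # keep the peer port in the root namespace for the initiator
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

    # start the nvmf example target inside the namespace and expose a malloc namespace
    ip netns exec cvl_0_0_ns_spdk ./build/examples/nvmf -i 0 -g 10000 -m 0xF &
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512          # returns "Malloc0"
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # drive it from the initiator side with the bundled perf tool
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'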
00:11:21.027 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:11:21.027 00:11:21.027 --- 10.0.0.2 ping statistics --- 00:11:21.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.027 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:21.027 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:21.027 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:11:21.027 00:11:21.027 --- 10.0.0.1 ping statistics --- 00:11:21.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.027 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # return 0 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1524184 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1524184 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 1524184 ']' 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:21.027 01:22:06 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:21.027 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.028 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:21.028 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.028 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:21.028 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.028 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:21.028 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.028 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:21.028 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:21.028 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.028 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:21.028 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.028 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:21.028 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:21.028 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.028 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:21.028 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.028 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:21.028 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:21.028 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:21.285 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.285 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:21.285 01:22:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:31.250 Initializing NVMe Controllers 00:11:31.250 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:31.250 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:31.250 Initialization complete. Launching workers. 00:11:31.250 ======================================================== 00:11:31.250 Latency(us) 00:11:31.250 Device Information : IOPS MiB/s Average min max 00:11:31.250 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14591.72 57.00 4386.63 894.48 19084.03 00:11:31.250 ======================================================== 00:11:31.250 Total : 14591.72 57.00 4386.63 894.48 19084.03 00:11:31.250 00:11:31.250 01:22:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:31.250 01:22:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:31.250 01:22:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:31.250 01:22:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:31.250 01:22:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:31.250 01:22:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:31.250 01:22:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:31.250 01:22:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:31.250 rmmod nvme_tcp 00:11:31.250 rmmod nvme_fabrics 00:11:31.250 rmmod nvme_keyring 00:11:31.508 01:22:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:31.508 01:22:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:11:31.508 01:22:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:31.508 01:22:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@515 -- # '[' -n 1524184 ']' 00:11:31.508 01:22:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # killprocess 1524184 00:11:31.508 01:22:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 1524184 ']' 00:11:31.508 01:22:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 1524184 00:11:31.508 01:22:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:11:31.508 01:22:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:31.508 01:22:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1524184 00:11:31.508 01:22:16 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:11:31.508 01:22:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:11:31.508 01:22:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1524184' 00:11:31.508 killing process with pid 1524184 00:11:31.508 01:22:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 1524184 00:11:31.508 01:22:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 1524184 00:11:31.767 nvmf threads initialize successfully 00:11:31.767 bdev subsystem init successfully 00:11:31.767 created a nvmf target service 00:11:31.767 create targets's poll groups done 00:11:31.767 all subsystems of target started 00:11:31.767 nvmf target is running 00:11:31.767 all subsystems of target stopped 00:11:31.767 destroy targets's poll groups done 00:11:31.767 destroyed the nvmf target service 00:11:31.767 bdev subsystem finish successfully 00:11:31.767 nvmf threads destroy successfully 00:11:31.767 01:22:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:31.767 01:22:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:31.767 01:22:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:31.767 01:22:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:31.767 01:22:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-save 00:11:31.767 01:22:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-restore 00:11:31.767 01:22:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:31.767 01:22:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:31.767 01:22:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:31.767 01:22:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.767 01:22:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:31.767 01:22:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.668 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:33.668 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:33.668 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:33.668 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:33.668 00:11:33.668 real 0m15.394s 00:11:33.668 user 0m42.444s 00:11:33.668 sys 0m3.242s 00:11:33.668 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:33.668 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:33.668 ************************************ 00:11:33.668 END TEST nvmf_example 00:11:33.668 ************************************ 00:11:33.668 01:22:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:33.668 01:22:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:33.668 01:22:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:33.668 01:22:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:33.668 ************************************ 00:11:33.668 START TEST nvmf_filesystem 00:11:33.668 ************************************ 00:11:33.668 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:33.928 * Looking for test storage... 00:11:33.928 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:33.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.928 --rc genhtml_branch_coverage=1 00:11:33.928 --rc genhtml_function_coverage=1 00:11:33.928 --rc genhtml_legend=1 00:11:33.928 --rc geninfo_all_blocks=1 00:11:33.928 --rc geninfo_unexecuted_blocks=1 00:11:33.928 00:11:33.928 ' 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:33.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.928 --rc genhtml_branch_coverage=1 00:11:33.928 --rc genhtml_function_coverage=1 00:11:33.928 --rc genhtml_legend=1 00:11:33.928 --rc geninfo_all_blocks=1 00:11:33.928 --rc geninfo_unexecuted_blocks=1 00:11:33.928 00:11:33.928 ' 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:33.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.928 --rc genhtml_branch_coverage=1 00:11:33.928 --rc genhtml_function_coverage=1 00:11:33.928 --rc genhtml_legend=1 00:11:33.928 --rc geninfo_all_blocks=1 00:11:33.928 --rc geninfo_unexecuted_blocks=1 00:11:33.928 00:11:33.928 ' 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:33.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.928 --rc genhtml_branch_coverage=1 00:11:33.928 --rc genhtml_function_coverage=1 00:11:33.928 --rc genhtml_legend=1 00:11:33.928 --rc geninfo_all_blocks=1 00:11:33.928 --rc geninfo_unexecuted_blocks=1 00:11:33.928 00:11:33.928 ' 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:33.928 01:22:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:11:33.928 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:11:33.929 01:22:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:11:33.929 01:22:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:33.929 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:33.929 #define SPDK_CONFIG_H 00:11:33.929 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:33.929 #define SPDK_CONFIG_APPS 1 00:11:33.929 #define SPDK_CONFIG_ARCH native 00:11:33.929 #undef SPDK_CONFIG_ASAN 00:11:33.929 #undef SPDK_CONFIG_AVAHI 00:11:33.929 #undef SPDK_CONFIG_CET 00:11:33.929 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:33.929 #define SPDK_CONFIG_COVERAGE 1 00:11:33.929 #define SPDK_CONFIG_CROSS_PREFIX 00:11:33.929 #undef SPDK_CONFIG_CRYPTO 00:11:33.929 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:33.929 #undef SPDK_CONFIG_CUSTOMOCF 00:11:33.929 #undef SPDK_CONFIG_DAOS 00:11:33.929 #define SPDK_CONFIG_DAOS_DIR 00:11:33.929 #define SPDK_CONFIG_DEBUG 1 00:11:33.929 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:33.929 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:33.929 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:33.929 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:33.929 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:33.929 #undef SPDK_CONFIG_DPDK_UADK 00:11:33.929 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:33.929 #define SPDK_CONFIG_EXAMPLES 1 00:11:33.929 #undef SPDK_CONFIG_FC 00:11:33.929 #define SPDK_CONFIG_FC_PATH 00:11:33.929 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:33.929 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:33.929 #define SPDK_CONFIG_FSDEV 1 00:11:33.929 #undef SPDK_CONFIG_FUSE 00:11:33.929 #undef SPDK_CONFIG_FUZZER 00:11:33.929 #define SPDK_CONFIG_FUZZER_LIB 00:11:33.929 #undef SPDK_CONFIG_GOLANG 00:11:33.929 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:33.929 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:33.929 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:33.929 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:33.929 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:33.929 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:33.929 #undef SPDK_CONFIG_HAVE_LZ4 00:11:33.929 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:33.929 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:33.929 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:33.929 #define SPDK_CONFIG_IDXD 1 00:11:33.929 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:33.930 #undef SPDK_CONFIG_IPSEC_MB 00:11:33.930 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:33.930 #define SPDK_CONFIG_ISAL 1 00:11:33.930 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:33.930 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:33.930 #define SPDK_CONFIG_LIBDIR 00:11:33.930 #undef SPDK_CONFIG_LTO 00:11:33.930 #define SPDK_CONFIG_MAX_LCORES 128 00:11:33.930 #define SPDK_CONFIG_NVME_CUSE 1 00:11:33.930 #undef SPDK_CONFIG_OCF 00:11:33.930 #define SPDK_CONFIG_OCF_PATH 00:11:33.930 #define SPDK_CONFIG_OPENSSL_PATH 00:11:33.930 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:33.930 #define SPDK_CONFIG_PGO_DIR 00:11:33.930 #undef SPDK_CONFIG_PGO_USE 00:11:33.930 #define SPDK_CONFIG_PREFIX /usr/local 00:11:33.930 #undef SPDK_CONFIG_RAID5F 00:11:33.930 #undef SPDK_CONFIG_RBD 00:11:33.930 #define SPDK_CONFIG_RDMA 1 00:11:33.930 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:33.930 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:33.930 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:33.930 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:33.930 #define SPDK_CONFIG_SHARED 1 00:11:33.930 #undef SPDK_CONFIG_SMA 00:11:33.930 
#define SPDK_CONFIG_TESTS 1 00:11:33.930 #undef SPDK_CONFIG_TSAN 00:11:33.930 #define SPDK_CONFIG_UBLK 1 00:11:33.930 #define SPDK_CONFIG_UBSAN 1 00:11:33.930 #undef SPDK_CONFIG_UNIT_TESTS 00:11:33.930 #undef SPDK_CONFIG_URING 00:11:33.930 #define SPDK_CONFIG_URING_PATH 00:11:33.930 #undef SPDK_CONFIG_URING_ZNS 00:11:33.930 #undef SPDK_CONFIG_USDT 00:11:33.930 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:33.930 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:33.930 #define SPDK_CONFIG_VFIO_USER 1 00:11:33.930 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:33.930 #define SPDK_CONFIG_VHOST 1 00:11:33.930 #define SPDK_CONFIG_VIRTIO 1 00:11:33.930 #undef SPDK_CONFIG_VTUNE 00:11:33.930 #define SPDK_CONFIG_VTUNE_DIR 00:11:33.930 #define SPDK_CONFIG_WERROR 1 00:11:33.930 #define SPDK_CONFIG_WPDK_DIR 00:11:33.930 #undef SPDK_CONFIG_XNVME 00:11:33.930 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:33.930 01:22:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:33.930 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:33.931 01:22:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 
00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : v22.11.4 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:33.931 
01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:33.931 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:33.932 01:22:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # 
AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j48 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 1525754 ]] 00:11:33.932 01:22:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 1525754 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.QBetD3 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.QBetD3/tests/target /tmp/spdk.QBetD3 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:11:33.932 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=53848371200 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=61988511744 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=8140140544 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=30982889472 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30994255872 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=11366400 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12375269376 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12397703168 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=22433792 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=30994026496 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30994255872 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=229376 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 
-- # read -r source fs size use avail _ mount 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=6198837248 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=6198849536 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:11:33.933 * Looking for test storage... 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=53848371200 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=10354733056 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:33.933 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@400 -- # return 0 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:11:33.933 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:34.191 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:34.191 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:34.191 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:34.191 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:34.191 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:34.191 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:34.191 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:34.191 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:34.191 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:34.191 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:34.191 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:34.191 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:34.191 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # 
case "$op" in 00:11:34.191 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:34.191 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:34.191 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:34.191 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:34.191 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:34.191 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:34.191 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:34.191 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:34.191 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:34.191 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:34.191 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:34.191 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:34.191 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:34.191 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:34.191 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:34.191 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:34.191 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:34.191 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:34.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.191 --rc genhtml_branch_coverage=1 00:11:34.191 --rc genhtml_function_coverage=1 00:11:34.191 --rc genhtml_legend=1 00:11:34.191 --rc geninfo_all_blocks=1 00:11:34.191 --rc geninfo_unexecuted_blocks=1 00:11:34.191 00:11:34.191 ' 00:11:34.191 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:34.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.191 --rc genhtml_branch_coverage=1 00:11:34.191 --rc genhtml_function_coverage=1 00:11:34.191 --rc genhtml_legend=1 00:11:34.191 --rc geninfo_all_blocks=1 00:11:34.191 --rc geninfo_unexecuted_blocks=1 00:11:34.191 00:11:34.191 ' 00:11:34.191 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:34.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.191 --rc genhtml_branch_coverage=1 00:11:34.191 --rc genhtml_function_coverage=1 00:11:34.191 --rc genhtml_legend=1 00:11:34.191 --rc geninfo_all_blocks=1 00:11:34.191 --rc geninfo_unexecuted_blocks=1 00:11:34.191 00:11:34.191 ' 00:11:34.191 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:34.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.191 --rc genhtml_branch_coverage=1 00:11:34.191 --rc genhtml_function_coverage=1 00:11:34.191 --rc genhtml_legend=1 00:11:34.191 
--rc geninfo_all_blocks=1 00:11:34.191 --rc geninfo_unexecuted_blocks=1 00:11:34.191 00:11:34.191 ' 00:11:34.191 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:34.191 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:34.191 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:34.191 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:34.191 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:34.191 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:34.191 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:34.192 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:34.192 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:34.192 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:34.192 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:34.192 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:34.192 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:34.192 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:34.192 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:34.192 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:34.192 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:34.192 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:34.192 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:34.192 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:34.192 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:34.192 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:34.192 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:34.192 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.192 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.192 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.192 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:34.192 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.192 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:34.192 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:34.192 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:34.192 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:34.192 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:11:34.192 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:34.192 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:34.192 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:34.192 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:34.192 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:34.192 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:34.192 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:34.192 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:34.192 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:34.192 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:34.192 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:34.192 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:34.192 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:34.192 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:34.192 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:34.192 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:34.192 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.192 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:34.192 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:34.192 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:34.192 01:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:36.093 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:36.093 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:36.093 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:36.093 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:36.093 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:36.093 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:36.093 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:36.093 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:36.093 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:36.093 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:36.094 
01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:36.094 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:36.094 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:36.094 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:36.094 Found net devices under 
0000:0a:00.1: cvl_0_1 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # is_hw=yes 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:36.094 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:36.353 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:36.353 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:36.353 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:36.353 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:36.353 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:36.353 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:36.353 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:36.353 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:36.353 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:36.353 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:36.353 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:11:36.353 00:11:36.353 --- 10.0.0.2 ping statistics --- 00:11:36.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:36.353 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:11:36.353 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:36.353 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:36.353 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:11:36.353 00:11:36.353 --- 10.0.0.1 ping statistics --- 00:11:36.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:36.353 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:11:36.353 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:36.353 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # return 0 00:11:36.353 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:36.353 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:36.353 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:36.353 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:36.353 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:36.353 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:36.353 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:36.353 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:36.353 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:36.353 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:36.353 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:36.353 ************************************ 00:11:36.353 START TEST nvmf_filesystem_no_in_capsule 00:11:36.353 ************************************ 00:11:36.353 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:11:36.353 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:36.353 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:36.353 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:36.353 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:36.353 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
00:11:36.353 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=1527511 00:11:36.353 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:36.353 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 1527511 00:11:36.353 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1527511 ']' 00:11:36.353 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.353 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:36.353 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.353 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:36.353 01:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:36.353 [2024-10-13 01:22:21.884989] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:11:36.353 [2024-10-13 01:22:21.885080] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:36.612 [2024-10-13 01:22:21.950034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:36.612 [2024-10-13 01:22:21.998940] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:36.612 [2024-10-13 01:22:21.999006] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:36.612 [2024-10-13 01:22:21.999022] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:36.612 [2024-10-13 01:22:21.999035] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:36.612 [2024-10-13 01:22:21.999047] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
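[Editor's annotation - not part of the captured console output. The trace above launches nvmf_tgt inside the cvl_0_0_ns_spdk network namespace, and the rpc_cmd calls that follow configure it over the default RPC socket (/var/tmp/spdk.sock). Condensed for reference, the target-side setup performed in this run amounts to:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0        # TCP transport, 8192-byte I/O unit, in-capsule data size 0 (this is the no_in_capsule variant)
    rpc_cmd bdev_malloc_create 512 512 -b Malloc1               # 512 MiB RAM-backed bdev with 512-byte blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

rpc_cmd is the autotest helper used in this trace; the assumption that the same calls could be issued manually through scripts/rpc.py is the editor's, not something shown in this log.]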
00:11:36.612 [2024-10-13 01:22:22.000821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:36.612 [2024-10-13 01:22:22.000897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:36.612 [2024-10-13 01:22:22.000987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:36.612 [2024-10-13 01:22:22.000990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.612 01:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:36.612 01:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:36.612 01:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:36.612 01:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:36.612 01:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:36.612 01:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:36.612 01:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:36.612 01:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:36.612 01:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.612 01:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:36.612 [2024-10-13 01:22:22.146777] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:36.612 01:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.612 01:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:36.612 01:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.612 01:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:36.870 Malloc1 00:11:36.870 01:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.870 01:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:36.870 01:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.870 01:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:36.870 01:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.870 01:22:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:36.870 01:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.870 01:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:36.870 01:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.870 01:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:36.870 01:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.870 01:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:36.870 [2024-10-13 01:22:22.338669] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:36.870 01:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.870 01:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:36.870 01:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:36.870 01:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:36.870 01:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:36.870 01:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:36.870 01:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:36.870 01:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.870 01:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:36.870 01:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.870 01:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:36.870 { 00:11:36.870 "name": "Malloc1", 00:11:36.870 "aliases": [ 00:11:36.870 "71082865-18d3-43cf-b61d-1ce02ca417d2" 00:11:36.870 ], 00:11:36.870 "product_name": "Malloc disk", 00:11:36.870 "block_size": 512, 00:11:36.870 "num_blocks": 1048576, 00:11:36.870 "uuid": "71082865-18d3-43cf-b61d-1ce02ca417d2", 00:11:36.870 "assigned_rate_limits": { 00:11:36.870 "rw_ios_per_sec": 0, 00:11:36.870 "rw_mbytes_per_sec": 0, 00:11:36.870 "r_mbytes_per_sec": 0, 00:11:36.870 "w_mbytes_per_sec": 0 00:11:36.870 }, 00:11:36.870 "claimed": true, 00:11:36.870 "claim_type": "exclusive_write", 00:11:36.870 "zoned": false, 00:11:36.871 "supported_io_types": { 00:11:36.871 "read": 
true, 00:11:36.871 "write": true, 00:11:36.871 "unmap": true, 00:11:36.871 "flush": true, 00:11:36.871 "reset": true, 00:11:36.871 "nvme_admin": false, 00:11:36.871 "nvme_io": false, 00:11:36.871 "nvme_io_md": false, 00:11:36.871 "write_zeroes": true, 00:11:36.871 "zcopy": true, 00:11:36.871 "get_zone_info": false, 00:11:36.871 "zone_management": false, 00:11:36.871 "zone_append": false, 00:11:36.871 "compare": false, 00:11:36.871 "compare_and_write": false, 00:11:36.871 "abort": true, 00:11:36.871 "seek_hole": false, 00:11:36.871 "seek_data": false, 00:11:36.871 "copy": true, 00:11:36.871 "nvme_iov_md": false 00:11:36.871 }, 00:11:36.871 "memory_domains": [ 00:11:36.871 { 00:11:36.871 "dma_device_id": "system", 00:11:36.871 "dma_device_type": 1 00:11:36.871 }, 00:11:36.871 { 00:11:36.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.871 "dma_device_type": 2 00:11:36.871 } 00:11:36.871 ], 00:11:36.871 "driver_specific": {} 00:11:36.871 } 00:11:36.871 ]' 00:11:36.871 01:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:36.871 01:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:36.871 01:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:36.871 01:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:36.871 01:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:36.871 01:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:36.871 01:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:36.871 01:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:37.803 01:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:37.803 01:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:37.803 01:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:37.803 01:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:37.803 01:22:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:39.701 01:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:39.701 01:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:39.701 01:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:39.701 01:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:39.701 01:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:39.701 01:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:39.701 01:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:39.701 01:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:39.701 01:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:39.701 01:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:39.701 01:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:39.701 01:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:39.701 01:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:39.701 01:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:39.701 01:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:39.701 01:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:39.701 01:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:39.959 01:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:40.892 01:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:41.825 01:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:41.825 01:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:41.825 01:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:41.825 01:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:41.825 01:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:41.825 ************************************ 00:11:41.825 START TEST filesystem_ext4 00:11:41.825 ************************************ 00:11:41.825 01:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
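[Editor's annotation - not part of the captured console output. For readability, the initiator-side steps traced above and the ext4 cycle traced below boil down to the following sequence; the device name nvme0n1 is what lsblk resolved for serial SPDKISFASTANDAWESOME in this particular run:

    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
                 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 \
                 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    mkdir -p /mnt/device
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%   # one partition spanning the 512 MiB namespace
    partprobe
    mkfs.ext4 -F /dev/nvme0n1p1                                   # filesystem under test; the btrfs pass follows later in this log
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa; sync; rm /mnt/device/aaa; sync         # minimal create/delete check on the mounted filesystem
    umount /mnt/device

These are the same commands shown in the surrounding trace, gathered in one place; they are not additional commands executed by the job.]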
00:11:41.825 01:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:41.825 01:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:41.825 01:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:41.825 01:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:41.825 01:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:41.825 01:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:41.825 01:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:41.825 01:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:41.825 01:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:41.825 01:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:41.825 mke2fs 1.47.0 (5-Feb-2023) 00:11:41.825 Discarding device blocks: 0/522240 done 00:11:42.083 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:42.083 Filesystem UUID: 9f79b1e7-0ffc-4d34-a6cd-62f79dbffd4b 00:11:42.083 Superblock backups stored on blocks: 00:11:42.083 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:42.083 00:11:42.083 Allocating group tables: 0/64 done 00:11:42.083 Writing inode tables: 0/64 done 00:11:44.867 Creating journal (8192 blocks): done 00:11:47.142 Writing superblocks and filesystem accounting information: 0/64 6/64 done 00:11:47.142 00:11:47.142 01:22:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:47.142 01:22:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:53.695 01:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:53.695 01:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:53.695 01:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:53.695 01:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:53.695 01:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:53.695 01:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:53.695 
01:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1527511 00:11:53.695 01:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:53.695 01:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:53.695 01:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:53.695 01:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:53.695 00:11:53.695 real 0m11.331s 00:11:53.695 user 0m0.026s 00:11:53.695 sys 0m0.055s 00:11:53.695 01:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:53.695 01:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:53.695 ************************************ 00:11:53.695 END TEST filesystem_ext4 00:11:53.695 ************************************ 00:11:53.695 01:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:53.695 01:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:53.695 01:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:53.695 01:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.695 ************************************ 00:11:53.695 START TEST filesystem_btrfs 00:11:53.695 ************************************ 00:11:53.695 01:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:53.695 01:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:53.695 01:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:53.695 01:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:53.695 01:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:53.695 01:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:53.695 01:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:53.695 01:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:53.695 01:22:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:53.695 01:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:53.695 01:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:53.695 btrfs-progs v6.8.1 00:11:53.695 See https://btrfs.readthedocs.io for more information. 00:11:53.695 00:11:53.695 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:53.695 NOTE: several default settings have changed in version 5.15, please make sure 00:11:53.695 this does not affect your deployments: 00:11:53.695 - DUP for metadata (-m dup) 00:11:53.695 - enabled no-holes (-O no-holes) 00:11:53.695 - enabled free-space-tree (-R free-space-tree) 00:11:53.695 00:11:53.695 Label: (null) 00:11:53.695 UUID: 74ec4907-1004-4d58-937b-09bc997cbe09 00:11:53.695 Node size: 16384 00:11:53.695 Sector size: 4096 (CPU page size: 4096) 00:11:53.695 Filesystem size: 510.00MiB 00:11:53.695 Block group profiles: 00:11:53.695 Data: single 8.00MiB 00:11:53.695 Metadata: DUP 32.00MiB 00:11:53.695 System: DUP 8.00MiB 00:11:53.695 SSD detected: yes 00:11:53.695 Zoned device: no 00:11:53.695 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:53.695 Checksum: crc32c 00:11:53.695 Number of devices: 1 00:11:53.695 Devices: 00:11:53.695 ID SIZE PATH 00:11:53.695 1 510.00MiB /dev/nvme0n1p1 00:11:53.695 00:11:53.695 01:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:53.695 01:22:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:54.261 01:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:54.261 01:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:54.261 01:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:54.261 01:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:54.519 01:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:54.519 01:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:54.519 01:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1527511 00:11:54.519 01:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:54.519 01:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:54.519 01:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:54.519 
01:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:54.519 00:11:54.519 real 0m1.244s 00:11:54.519 user 0m0.012s 00:11:54.519 sys 0m0.101s 00:11:54.519 01:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:54.519 01:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:54.519 ************************************ 00:11:54.519 END TEST filesystem_btrfs 00:11:54.519 ************************************ 00:11:54.519 01:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:54.519 01:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:54.519 01:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:54.519 01:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:54.519 ************************************ 00:11:54.519 START TEST filesystem_xfs 00:11:54.519 ************************************ 00:11:54.519 01:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:54.519 01:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:54.519 01:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:54.519 01:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:54.519 01:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:54.519 01:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:54.519 01:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:54.519 01:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:11:54.519 01:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:54.519 01:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:54.519 01:22:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:54.777 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:54.777 = sectsz=512 attr=2, projid32bit=1 00:11:54.777 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:54.777 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:54.777 data 
= bsize=4096 blocks=130560, imaxpct=25 00:11:54.777 = sunit=0 swidth=0 blks 00:11:54.777 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:54.777 log =internal log bsize=4096 blocks=16384, version=2 00:11:54.777 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:54.777 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:55.710 Discarding blocks...Done. 00:11:55.710 01:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:55.710 01:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:57.607 01:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:57.607 01:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:57.607 01:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:57.607 01:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:57.607 01:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:57.607 01:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:57.607 01:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1527511 00:11:57.607 01:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:57.607 01:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:57.607 01:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:57.607 01:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:57.607 00:11:57.607 real 0m2.904s 00:11:57.607 user 0m0.014s 00:11:57.607 sys 0m0.066s 00:11:57.607 01:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:57.607 01:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:57.607 ************************************ 00:11:57.607 END TEST filesystem_xfs 00:11:57.607 ************************************ 00:11:57.607 01:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:57.607 01:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:57.607 01:22:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:57.607 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.607 01:22:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:57.607 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:57.607 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:57.607 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:57.607 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:57.607 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:57.607 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:57.607 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:57.607 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.607 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.607 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.607 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:57.607 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1527511 00:11:57.607 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1527511 ']' 00:11:57.607 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1527511 00:11:57.607 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:57.607 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:57.607 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1527511 00:11:57.607 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:57.607 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:57.608 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1527511' 00:11:57.608 killing process with pid 1527511 00:11:57.608 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 1527511 00:11:57.608 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@974 -- # wait 1527511 00:11:58.174 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:58.174 00:11:58.174 real 0m21.730s 00:11:58.174 user 1m24.601s 00:11:58.174 sys 0m2.312s 00:11:58.174 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:58.174 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.174 ************************************ 00:11:58.174 END TEST nvmf_filesystem_no_in_capsule 00:11:58.174 ************************************ 00:11:58.174 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:58.174 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:58.174 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:58.174 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:58.174 ************************************ 00:11:58.174 START TEST nvmf_filesystem_in_capsule 00:11:58.174 ************************************ 00:11:58.174 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:11:58.174 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:58.174 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:58.174 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:58.174 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:58.174 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.174 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=1530286 00:11:58.174 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:58.174 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 1530286 00:11:58.174 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1530286 ']' 00:11:58.174 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.174 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:58.174 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
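At this point the in-capsule variant starts a fresh target (in_capsule=4096, i.e. up to 4096 bytes of in-capsule data per command) and, as the trace that follows shows, rebuilds the same subsystem over RPC. A hedged sketch of that target-side bring-up, using the RPC calls that appear in the trace; the ./scripts/rpc.py path is an assumption, the harness uses its rpc_cmd wrapper instead:

# Launch the target inside the test's network namespace.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
# Wait until the RPC socket (/var/tmp/spdk.sock) answers.
until ./scripts/rpc.py rpc_get_methods > /dev/null 2>&1; do sleep 0.5; done
# TCP transport with 4096-byte in-capsule data, then a Malloc-backed subsystem.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096
./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420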
00:11:58.174 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:58.174 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.174 [2024-10-13 01:22:43.672543] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:11:58.174 [2024-10-13 01:22:43.672626] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:58.174 [2024-10-13 01:22:43.745798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:58.432 [2024-10-13 01:22:43.796091] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:58.432 [2024-10-13 01:22:43.796151] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:58.432 [2024-10-13 01:22:43.796168] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:58.432 [2024-10-13 01:22:43.796180] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:58.432 [2024-10-13 01:22:43.796193] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:58.432 [2024-10-13 01:22:43.797789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:58.432 [2024-10-13 01:22:43.797848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:58.432 [2024-10-13 01:22:43.797968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:58.432 [2024-10-13 01:22:43.797971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.432 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:58.432 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:58.432 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:58.432 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:58.432 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.432 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:58.432 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:58.432 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:58.432 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.432 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.432 [2024-10-13 01:22:43.960490] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:58.432 01:22:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.432 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:58.432 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.432 01:22:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.691 Malloc1 00:11:58.691 01:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.691 01:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:58.691 01:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.691 01:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.691 01:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.691 01:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:58.691 01:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.691 01:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.691 01:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.691 01:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:58.691 01:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.691 01:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.691 [2024-10-13 01:22:44.145184] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:58.691 01:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.691 01:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:58.691 01:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:58.691 01:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:58.691 01:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:58.691 01:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:58.691 01:22:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:58.691 01:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.691 01:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.691 01:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.691 01:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:58.691 { 00:11:58.691 "name": "Malloc1", 00:11:58.691 "aliases": [ 00:11:58.691 "9fca3f94-f614-4149-a773-e46de320c139" 00:11:58.691 ], 00:11:58.691 "product_name": "Malloc disk", 00:11:58.691 "block_size": 512, 00:11:58.691 "num_blocks": 1048576, 00:11:58.691 "uuid": "9fca3f94-f614-4149-a773-e46de320c139", 00:11:58.691 "assigned_rate_limits": { 00:11:58.691 "rw_ios_per_sec": 0, 00:11:58.691 "rw_mbytes_per_sec": 0, 00:11:58.691 "r_mbytes_per_sec": 0, 00:11:58.691 "w_mbytes_per_sec": 0 00:11:58.691 }, 00:11:58.691 "claimed": true, 00:11:58.691 "claim_type": "exclusive_write", 00:11:58.691 "zoned": false, 00:11:58.691 "supported_io_types": { 00:11:58.691 "read": true, 00:11:58.691 "write": true, 00:11:58.691 "unmap": true, 00:11:58.691 "flush": true, 00:11:58.691 "reset": true, 00:11:58.691 "nvme_admin": false, 00:11:58.691 "nvme_io": false, 00:11:58.691 "nvme_io_md": false, 00:11:58.691 "write_zeroes": true, 00:11:58.691 "zcopy": true, 00:11:58.691 "get_zone_info": false, 00:11:58.691 "zone_management": false, 00:11:58.691 "zone_append": false, 00:11:58.691 "compare": false, 00:11:58.691 "compare_and_write": false, 00:11:58.691 "abort": true, 00:11:58.691 "seek_hole": false, 00:11:58.691 "seek_data": false, 00:11:58.691 "copy": true, 00:11:58.691 "nvme_iov_md": false 00:11:58.691 }, 00:11:58.691 "memory_domains": [ 00:11:58.691 { 00:11:58.691 "dma_device_id": "system", 00:11:58.691 "dma_device_type": 1 00:11:58.691 }, 00:11:58.691 { 00:11:58.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.691 "dma_device_type": 2 00:11:58.691 } 00:11:58.691 ], 00:11:58.691 "driver_specific": {} 00:11:58.691 } 00:11:58.691 ]' 00:11:58.691 01:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:58.691 01:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:58.691 01:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:58.691 01:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:58.691 01:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:58.691 01:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:58.691 01:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:58.691 01:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:59.624 01:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:59.624 01:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:59.624 01:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:59.624 01:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:59.624 01:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:12:01.523 01:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:01.523 01:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:01.523 01:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:01.523 01:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:01.523 01:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:01.523 01:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:12:01.523 01:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:01.523 01:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:01.523 01:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:01.523 01:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:01.523 01:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:01.523 01:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:01.523 01:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:01.523 01:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:01.523 01:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:01.523 01:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:01.523 01:22:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:01.781 01:22:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:02.039 01:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:02.971 01:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:02.971 01:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:02.971 01:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:02.971 01:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:02.971 01:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.971 ************************************ 00:12:02.971 START TEST filesystem_in_capsule_ext4 00:12:02.971 ************************************ 00:12:02.971 01:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:02.971 01:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:02.971 01:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:02.971 01:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:02.971 01:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:12:02.971 01:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:02.971 01:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:12:02.971 01:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:12:02.971 01:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:12:02.971 01:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:12:02.971 01:22:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:02.971 mke2fs 1.47.0 (5-Feb-2023) 00:12:03.229 Discarding device blocks: 0/522240 done 00:12:03.229 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:03.229 Filesystem UUID: 8c912bcc-62f9-41bd-b049-604dc75abef7 00:12:03.229 Superblock backups stored on blocks: 00:12:03.229 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:03.229 00:12:03.229 Allocating group tables: 0/64 done 00:12:03.229 Writing inode tables: 
0/64 done 00:12:03.229 Creating journal (8192 blocks): done 00:12:05.116 Writing superblocks and filesystem accounting information: 0/64 8/64 done 00:12:05.116 00:12:05.116 01:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:12:05.116 01:22:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:11.670 01:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:11.670 01:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:11.670 01:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:11.670 01:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:11.670 01:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:11.670 01:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:11.670 01:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1530286 00:12:11.670 01:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:11.670 01:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:11.670 01:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:11.670 01:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:11.670 00:12:11.670 real 0m8.060s 00:12:11.670 user 0m0.030s 00:12:11.670 sys 0m0.057s 00:12:11.670 01:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:11.670 01:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:11.670 ************************************ 00:12:11.670 END TEST filesystem_in_capsule_ext4 00:12:11.670 ************************************ 00:12:11.670 01:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:11.670 01:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:11.670 01:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:11.670 01:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:11.670 
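Each run_test above and below drives the same create/exercise cycle from target/filesystem.sh; only the mkfs tool changes between the ext4, btrfs and xfs cases. A condensed sketch of that cycle, assuming the partition and mount point used in this run, with $nvmfpid standing in for the target PID (1530286 here):

fs=ext4            # the btrfs and xfs variants differ only in this value
dev=/dev/nvme0n1p1
mnt=/mnt/device
case "$fs" in
  ext4) mkfs.ext4 -F "$dev" ;;     # ext4 is forced with -F
  *)    "mkfs.$fs" -f "$dev" ;;    # btrfs and xfs are forced with -f
esac
mount "$dev" "$mnt"
touch "$mnt/aaa"; sync             # small write/delete cycle through the fabric
rm "$mnt/aaa"; sync
umount "$mnt"
kill -0 "$nvmfpid"                        # target process must still be alive
lsblk -l -o NAME | grep -q -w nvme0n1     # namespace still visible
lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still visible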
************************************ 00:12:11.670 START TEST filesystem_in_capsule_btrfs 00:12:11.670 ************************************ 00:12:11.670 01:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:11.670 01:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:11.670 01:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:11.670 01:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:11.670 01:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:12:11.670 01:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:11.670 01:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:12:11.670 01:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:12:11.670 01:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:12:11.670 01:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:12:11.670 01:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:11.670 btrfs-progs v6.8.1 00:12:11.670 See https://btrfs.readthedocs.io for more information. 00:12:11.670 00:12:11.670 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:11.670 NOTE: several default settings have changed in version 5.15, please make sure 00:12:11.670 this does not affect your deployments: 00:12:11.670 - DUP for metadata (-m dup) 00:12:11.670 - enabled no-holes (-O no-holes) 00:12:11.670 - enabled free-space-tree (-R free-space-tree) 00:12:11.670 00:12:11.670 Label: (null) 00:12:11.670 UUID: 148cb042-fc89-4d35-aafd-c36fe0bfab7e 00:12:11.670 Node size: 16384 00:12:11.670 Sector size: 4096 (CPU page size: 4096) 00:12:11.670 Filesystem size: 510.00MiB 00:12:11.670 Block group profiles: 00:12:11.670 Data: single 8.00MiB 00:12:11.670 Metadata: DUP 32.00MiB 00:12:11.670 System: DUP 8.00MiB 00:12:11.670 SSD detected: yes 00:12:11.670 Zoned device: no 00:12:11.670 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:11.670 Checksum: crc32c 00:12:11.670 Number of devices: 1 00:12:11.670 Devices: 00:12:11.670 ID SIZE PATH 00:12:11.670 1 510.00MiB /dev/nvme0n1p1 00:12:11.670 00:12:11.670 01:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:12:11.670 01:22:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:11.670 01:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:11.670 01:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:11.670 01:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:11.670 01:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:11.670 01:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:11.670 01:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:11.670 01:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1530286 00:12:11.670 01:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:11.670 01:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:11.670 01:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:11.670 01:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:11.670 00:12:11.670 real 0m0.646s 00:12:11.670 user 0m0.026s 00:12:11.670 sys 0m0.090s 00:12:11.670 01:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:11.670 01:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:12:11.670 ************************************ 00:12:11.670 END TEST filesystem_in_capsule_btrfs 00:12:11.670 ************************************ 00:12:11.670 01:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:11.670 01:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:11.670 01:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:11.670 01:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:11.928 ************************************ 00:12:11.928 START TEST filesystem_in_capsule_xfs 00:12:11.928 ************************************ 00:12:11.928 01:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:12:11.928 01:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:11.928 01:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:11.928 01:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:11.928 01:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:12:11.928 01:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:11.928 01:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:12:11.928 01:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:12:11.928 01:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:12:11.928 01:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:12:11.928 01:22:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:11.928 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:11.928 = sectsz=512 attr=2, projid32bit=1 00:12:11.928 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:11.928 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:11.928 data = bsize=4096 blocks=130560, imaxpct=25 00:12:11.928 = sunit=0 swidth=0 blks 00:12:11.928 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:11.928 log =internal log bsize=4096 blocks=16384, version=2 00:12:11.928 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:11.928 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:12.859 Discarding blocks...Done. 
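The mkfs.xfs geometry printed above is consistent with the partition created earlier: 130560 data blocks of 4096 bytes are 534,773,760 bytes, i.e. exactly 510.00 MiB, the same size mkfs.btrfs reported for its filesystem. A trivial check (illustrative only, not part of the test):

echo $(( 130560 * 4096 ))        # 534773760
echo $(( 510 * 1024 * 1024 ))    # 534773760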
00:12:12.859 01:22:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:12:12.859 01:22:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:14.756 01:22:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:14.756 01:22:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:14.756 01:22:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:14.756 01:22:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:14.756 01:22:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:14.756 01:22:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:14.756 01:22:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1530286 00:12:14.756 01:22:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:14.756 01:22:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:14.756 01:22:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:14.756 01:22:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:14.756 00:12:14.756 real 0m2.721s 00:12:14.756 user 0m0.013s 00:12:14.756 sys 0m0.064s 00:12:14.756 01:22:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:14.756 01:22:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:14.756 ************************************ 00:12:14.756 END TEST filesystem_in_capsule_xfs 00:12:14.756 ************************************ 00:12:14.756 01:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:14.756 01:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:14.756 01:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:14.756 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.756 01:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:14.756 01:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:12:14.756 01:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:14.756 01:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:14.756 01:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:14.756 01:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:14.756 01:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:14.756 01:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:14.756 01:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.756 01:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:14.756 01:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.756 01:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:14.756 01:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1530286 00:12:14.756 01:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1530286 ']' 00:12:14.756 01:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1530286 00:12:14.756 01:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:14.756 01:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:14.756 01:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1530286 00:12:14.756 01:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:14.756 01:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:14.756 01:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1530286' 00:12:14.756 killing process with pid 1530286 00:12:14.756 01:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 1530286 00:12:14.756 01:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 1530286 00:12:15.395 01:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:15.395 00:12:15.395 real 0m17.037s 00:12:15.395 user 1m6.140s 00:12:15.395 sys 0m2.061s 00:12:15.395 01:23:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:15.395 01:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:15.395 ************************************ 00:12:15.395 END TEST nvmf_filesystem_in_capsule 00:12:15.395 ************************************ 00:12:15.395 01:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:15.395 01:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:15.395 01:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:15.395 01:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:15.396 01:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:15.396 01:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:15.396 01:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:15.396 rmmod nvme_tcp 00:12:15.396 rmmod nvme_fabrics 00:12:15.396 rmmod nvme_keyring 00:12:15.396 01:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:15.396 01:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:15.396 01:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:15.396 01:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:12:15.396 01:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:15.396 01:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:15.396 01:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:15.396 01:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:15.396 01:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-save 00:12:15.396 01:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:15.396 01:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-restore 00:12:15.396 01:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:15.396 01:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:15.396 01:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.396 01:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:15.396 01:23:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:17.301 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:17.301 00:12:17.301 real 0m43.537s 00:12:17.301 user 2m31.814s 00:12:17.301 sys 0m6.091s 00:12:17.301 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:17.301 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:17.301 
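The teardown that closes nvmf_filesystem_in_capsule above follows the usual pattern for these targets. Condensed from the log entries (rpc_cmd and killprocess are the autotest wrappers around scripts/rpc.py and kill; this is a sketch, not the verbatim scripts):

  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1            # drop the test partition
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1             # detach the initiator
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1  # remove the subsystem from the target
  killprocess "$nvmfpid"                                    # stop the nvmf_tgt process (pid 1530286 here)
  modprobe -v -r nvme-tcp                                   # unload the initiator kernel modules
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore      # strip the SPDK_NVMF firewall rules
  ip -4 addr flush cvl_0_1                                  # and clear the test interface address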
************************************ 00:12:17.301 END TEST nvmf_filesystem 00:12:17.301 ************************************ 00:12:17.301 01:23:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:17.301 01:23:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:17.302 01:23:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:17.302 01:23:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:17.302 ************************************ 00:12:17.302 START TEST nvmf_target_discovery 00:12:17.302 ************************************ 00:12:17.302 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:17.302 * Looking for test storage... 00:12:17.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:17.560 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:17.560 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:12:17.560 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:17.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.561 --rc genhtml_branch_coverage=1 00:12:17.561 --rc genhtml_function_coverage=1 00:12:17.561 --rc genhtml_legend=1 00:12:17.561 --rc geninfo_all_blocks=1 00:12:17.561 --rc geninfo_unexecuted_blocks=1 00:12:17.561 00:12:17.561 ' 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:17.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.561 --rc genhtml_branch_coverage=1 00:12:17.561 --rc genhtml_function_coverage=1 00:12:17.561 --rc genhtml_legend=1 00:12:17.561 --rc geninfo_all_blocks=1 00:12:17.561 --rc geninfo_unexecuted_blocks=1 00:12:17.561 00:12:17.561 ' 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:17.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.561 --rc genhtml_branch_coverage=1 00:12:17.561 --rc genhtml_function_coverage=1 00:12:17.561 --rc genhtml_legend=1 00:12:17.561 --rc geninfo_all_blocks=1 00:12:17.561 --rc geninfo_unexecuted_blocks=1 00:12:17.561 00:12:17.561 ' 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:17.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.561 --rc genhtml_branch_coverage=1 00:12:17.561 --rc genhtml_function_coverage=1 00:12:17.561 --rc genhtml_legend=1 00:12:17.561 --rc geninfo_all_blocks=1 00:12:17.561 --rc geninfo_unexecuted_blocks=1 00:12:17.561 00:12:17.561 ' 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:17.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:17.561 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:17.562 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:17.562 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:17.562 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:17.562 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:17.562 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:17.562 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:17.562 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:17.562 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:17.562 01:23:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:20.094 01:23:05 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:20.094 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:20.094 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:20.094 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
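Both ports of the Intel E810 NIC (0x8086:0x159b, bound to the ice driver) are enumerated here as cvl_0_0 and cvl_0_1. The nvmf_tcp_init entries that follow wire them into a two-namespace test bed; in outline (a sketch of the ip commands visible below, using the default 10.0.0.x test addresses):

  ip netns add cvl_0_0_ns_spdk                                  # the target side gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator interface stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                            # sanity-check both directions before nvmf_tgt starts
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1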
00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:20.094 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:20.094 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:20.095 01:23:05 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:20.095 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:20.095 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.376 ms 00:12:20.095 00:12:20.095 --- 10.0.0.2 ping statistics --- 00:12:20.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.095 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:20.095 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:20.095 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:12:20.095 00:12:20.095 --- 10.0.0.1 ping statistics --- 00:12:20.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.095 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # return 0 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # nvmfpid=1534445 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # waitforlisten 1534445 00:12:20.095 01:23:05 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 1534445 ']' 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.095 [2024-10-13 01:23:05.299953] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:12:20.095 [2024-10-13 01:23:05.300053] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:20.095 [2024-10-13 01:23:05.378899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:20.095 [2024-10-13 01:23:05.432121] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:20.095 [2024-10-13 01:23:05.432177] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:20.095 [2024-10-13 01:23:05.432194] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:20.095 [2024-10-13 01:23:05.432208] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:20.095 [2024-10-13 01:23:05.432219] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
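Once nvmf_tgt is up inside the target namespace, discovery.sh populates it over RPC and then probes it with nvme discover. The entries that follow boil down to this sequence (a condensed sketch using rpc_cmd as the log does; only Null1/cnode1 is shown, and the same three subsystem calls repeat for cnode2 through cnode4):

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192               # create the TCP transport
  rpc_cmd bdev_null_create Null1 102400 512                     # back each subsystem with a null bdev
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420   # expose the discovery subsystem
  rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430             # plus a referral on port 4430
  nvme discover --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -a 10.0.0.2 -s 4420

which is why the discovery log further down reports six records: the current discovery subsystem, the four NVMe subsystems, and the port-4430 referral.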
00:12:20.095 [2024-10-13 01:23:05.435495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:20.095 [2024-10-13 01:23:05.435535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:20.095 [2024-10-13 01:23:05.435596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:20.095 [2024-10-13 01:23:05.435599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.095 [2024-10-13 01:23:05.599491] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.095 Null1 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.095 01:23:05 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.095 [2024-10-13 01:23:05.639886] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.095 Null2 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.095 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:20.096 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.096 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.096 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.096 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:20.096 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.096 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.354 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.354 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:20.354 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:20.354 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.354 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:12:20.354 Null3 00:12:20.354 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.354 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:20.354 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.354 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.354 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.354 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:20.354 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.354 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.354 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.354 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:20.354 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.354 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.354 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.354 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:20.354 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:20.354 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.354 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.354 Null4 00:12:20.354 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.354 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:20.354 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.354 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.354 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.354 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:20.354 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.354 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.354 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.354 01:23:05 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:20.354 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.354 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.354 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.354 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:20.354 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.354 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.354 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.354 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:20.354 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.354 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.354 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.354 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:12:20.354 00:12:20.354 Discovery Log Number of Records 6, Generation counter 6 00:12:20.354 =====Discovery Log Entry 0====== 00:12:20.354 trtype: tcp 00:12:20.354 adrfam: ipv4 00:12:20.354 subtype: current discovery subsystem 00:12:20.354 treq: not required 00:12:20.354 portid: 0 00:12:20.354 trsvcid: 4420 00:12:20.354 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:20.354 traddr: 10.0.0.2 00:12:20.354 eflags: explicit discovery connections, duplicate discovery information 00:12:20.354 sectype: none 00:12:20.354 =====Discovery Log Entry 1====== 00:12:20.354 trtype: tcp 00:12:20.354 adrfam: ipv4 00:12:20.354 subtype: nvme subsystem 00:12:20.354 treq: not required 00:12:20.354 portid: 0 00:12:20.354 trsvcid: 4420 00:12:20.354 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:20.354 traddr: 10.0.0.2 00:12:20.354 eflags: none 00:12:20.354 sectype: none 00:12:20.354 =====Discovery Log Entry 2====== 00:12:20.354 trtype: tcp 00:12:20.354 adrfam: ipv4 00:12:20.354 subtype: nvme subsystem 00:12:20.354 treq: not required 00:12:20.354 portid: 0 00:12:20.354 trsvcid: 4420 00:12:20.354 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:20.354 traddr: 10.0.0.2 00:12:20.354 eflags: none 00:12:20.354 sectype: none 00:12:20.354 =====Discovery Log Entry 3====== 00:12:20.354 trtype: tcp 00:12:20.354 adrfam: ipv4 00:12:20.354 subtype: nvme subsystem 00:12:20.354 treq: not required 00:12:20.354 portid: 0 00:12:20.354 trsvcid: 4420 00:12:20.354 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:20.354 traddr: 10.0.0.2 00:12:20.354 eflags: none 00:12:20.354 sectype: none 00:12:20.354 =====Discovery Log Entry 4====== 00:12:20.354 trtype: tcp 00:12:20.354 adrfam: ipv4 00:12:20.354 subtype: nvme subsystem 
00:12:20.354 treq: not required 00:12:20.354 portid: 0 00:12:20.354 trsvcid: 4420 00:12:20.354 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:20.354 traddr: 10.0.0.2 00:12:20.354 eflags: none 00:12:20.354 sectype: none 00:12:20.354 =====Discovery Log Entry 5====== 00:12:20.354 trtype: tcp 00:12:20.354 adrfam: ipv4 00:12:20.354 subtype: discovery subsystem referral 00:12:20.354 treq: not required 00:12:20.354 portid: 0 00:12:20.354 trsvcid: 4430 00:12:20.354 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:20.354 traddr: 10.0.0.2 00:12:20.354 eflags: none 00:12:20.354 sectype: none 00:12:20.354 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:20.354 Perform nvmf subsystem discovery via RPC 00:12:20.354 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:20.354 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.354 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.612 [ 00:12:20.612 { 00:12:20.612 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:20.612 "subtype": "Discovery", 00:12:20.612 "listen_addresses": [ 00:12:20.612 { 00:12:20.612 "trtype": "TCP", 00:12:20.612 "adrfam": "IPv4", 00:12:20.612 "traddr": "10.0.0.2", 00:12:20.612 "trsvcid": "4420" 00:12:20.612 } 00:12:20.612 ], 00:12:20.612 "allow_any_host": true, 00:12:20.612 "hosts": [] 00:12:20.612 }, 00:12:20.612 { 00:12:20.612 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:20.612 "subtype": "NVMe", 00:12:20.612 "listen_addresses": [ 00:12:20.612 { 00:12:20.612 "trtype": "TCP", 00:12:20.612 "adrfam": "IPv4", 00:12:20.612 "traddr": "10.0.0.2", 00:12:20.612 "trsvcid": "4420" 00:12:20.612 } 00:12:20.612 ], 00:12:20.612 "allow_any_host": true, 00:12:20.612 "hosts": [], 00:12:20.612 "serial_number": "SPDK00000000000001", 00:12:20.612 "model_number": "SPDK bdev Controller", 00:12:20.612 "max_namespaces": 32, 00:12:20.612 "min_cntlid": 1, 00:12:20.613 "max_cntlid": 65519, 00:12:20.613 "namespaces": [ 00:12:20.613 { 00:12:20.613 "nsid": 1, 00:12:20.613 "bdev_name": "Null1", 00:12:20.613 "name": "Null1", 00:12:20.613 "nguid": "A7B402A21C4242B8AE55B7560A5D824F", 00:12:20.613 "uuid": "a7b402a2-1c42-42b8-ae55-b7560a5d824f" 00:12:20.613 } 00:12:20.613 ] 00:12:20.613 }, 00:12:20.613 { 00:12:20.613 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:20.613 "subtype": "NVMe", 00:12:20.613 "listen_addresses": [ 00:12:20.613 { 00:12:20.613 "trtype": "TCP", 00:12:20.613 "adrfam": "IPv4", 00:12:20.613 "traddr": "10.0.0.2", 00:12:20.613 "trsvcid": "4420" 00:12:20.613 } 00:12:20.613 ], 00:12:20.613 "allow_any_host": true, 00:12:20.613 "hosts": [], 00:12:20.613 "serial_number": "SPDK00000000000002", 00:12:20.613 "model_number": "SPDK bdev Controller", 00:12:20.613 "max_namespaces": 32, 00:12:20.613 "min_cntlid": 1, 00:12:20.613 "max_cntlid": 65519, 00:12:20.613 "namespaces": [ 00:12:20.613 { 00:12:20.613 "nsid": 1, 00:12:20.613 "bdev_name": "Null2", 00:12:20.613 "name": "Null2", 00:12:20.613 "nguid": "68407F7B79FE4B32A2C4C8199086B6EB", 00:12:20.613 "uuid": "68407f7b-79fe-4b32-a2c4-c8199086b6eb" 00:12:20.613 } 00:12:20.613 ] 00:12:20.613 }, 00:12:20.613 { 00:12:20.613 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:20.613 "subtype": "NVMe", 00:12:20.613 "listen_addresses": [ 00:12:20.613 { 00:12:20.613 "trtype": "TCP", 00:12:20.613 "adrfam": "IPv4", 00:12:20.613 "traddr": "10.0.0.2", 
00:12:20.613 "trsvcid": "4420" 00:12:20.613 } 00:12:20.613 ], 00:12:20.613 "allow_any_host": true, 00:12:20.613 "hosts": [], 00:12:20.613 "serial_number": "SPDK00000000000003", 00:12:20.613 "model_number": "SPDK bdev Controller", 00:12:20.613 "max_namespaces": 32, 00:12:20.613 "min_cntlid": 1, 00:12:20.613 "max_cntlid": 65519, 00:12:20.613 "namespaces": [ 00:12:20.613 { 00:12:20.613 "nsid": 1, 00:12:20.613 "bdev_name": "Null3", 00:12:20.613 "name": "Null3", 00:12:20.613 "nguid": "E0FD92884E104C01A48738744D6C05D6", 00:12:20.613 "uuid": "e0fd9288-4e10-4c01-a487-38744d6c05d6" 00:12:20.613 } 00:12:20.613 ] 00:12:20.613 }, 00:12:20.613 { 00:12:20.613 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:20.613 "subtype": "NVMe", 00:12:20.613 "listen_addresses": [ 00:12:20.613 { 00:12:20.613 "trtype": "TCP", 00:12:20.613 "adrfam": "IPv4", 00:12:20.613 "traddr": "10.0.0.2", 00:12:20.613 "trsvcid": "4420" 00:12:20.613 } 00:12:20.613 ], 00:12:20.613 "allow_any_host": true, 00:12:20.613 "hosts": [], 00:12:20.613 "serial_number": "SPDK00000000000004", 00:12:20.613 "model_number": "SPDK bdev Controller", 00:12:20.613 "max_namespaces": 32, 00:12:20.613 "min_cntlid": 1, 00:12:20.613 "max_cntlid": 65519, 00:12:20.613 "namespaces": [ 00:12:20.613 { 00:12:20.613 "nsid": 1, 00:12:20.613 "bdev_name": "Null4", 00:12:20.613 "name": "Null4", 00:12:20.613 "nguid": "D6CA5DD1966642D98205A6C80E39B1C2", 00:12:20.613 "uuid": "d6ca5dd1-9666-42d9-8205-a6c80e39b1c2" 00:12:20.613 } 00:12:20.613 ] 00:12:20.613 } 00:12:20.613 ] 00:12:20.613 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.613 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:20.613 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:20.613 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:20.613 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.613 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.613 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.613 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:20.613 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.613 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.613 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.613 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:20.613 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:20.613 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.613 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.613 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.613 01:23:05 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:20.613 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.613 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.613 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.613 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:20.613 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:20.613 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.613 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.613 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.613 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:20.613 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.613 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.613 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.613 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:20.613 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:20.613 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.613 01:23:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.613 01:23:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.613 01:23:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:20.613 01:23:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.613 01:23:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.613 01:23:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.613 01:23:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:20.613 01:23:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.613 01:23:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.613 01:23:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.613 01:23:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:20.613 01:23:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:20.613 01:23:06 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.613 01:23:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:20.613 01:23:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.613 01:23:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:20.613 01:23:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:20.613 01:23:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:20.613 01:23:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:20.613 01:23:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:20.613 01:23:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:20.613 01:23:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:20.613 01:23:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:20.613 01:23:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:20.613 01:23:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:20.613 rmmod nvme_tcp 00:12:20.613 rmmod nvme_fabrics 00:12:20.613 rmmod nvme_keyring 00:12:20.613 01:23:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:20.613 01:23:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:20.613 01:23:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:20.613 01:23:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@515 -- # '[' -n 1534445 ']' 00:12:20.613 01:23:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # killprocess 1534445 00:12:20.613 01:23:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 1534445 ']' 00:12:20.613 01:23:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 1534445 00:12:20.613 01:23:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:12:20.613 01:23:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:20.613 01:23:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1534445 00:12:20.613 01:23:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:20.613 01:23:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:20.613 01:23:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1534445' 00:12:20.613 killing process with pid 1534445 00:12:20.613 01:23:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 1534445 00:12:20.613 01:23:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 1534445 00:12:20.872 01:23:06 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:20.872 01:23:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:20.872 01:23:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:20.872 01:23:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:20.872 01:23:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-save 00:12:20.872 01:23:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:20.872 01:23:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:12:20.872 01:23:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:20.872 01:23:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:20.872 01:23:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.872 01:23:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:20.872 01:23:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.409 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:23.409 00:12:23.409 real 0m5.565s 00:12:23.409 user 0m4.591s 00:12:23.409 sys 0m1.974s 00:12:23.409 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:23.409 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:23.409 ************************************ 00:12:23.409 END TEST nvmf_target_discovery 00:12:23.409 ************************************ 00:12:23.409 01:23:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:23.409 01:23:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:23.409 01:23:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:23.409 01:23:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:23.409 ************************************ 00:12:23.409 START TEST nvmf_referrals 00:12:23.409 ************************************ 00:12:23.409 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:23.409 * Looking for test storage... 
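The trace above (END TEST nvmf_target_discovery) exercised the full discovery round trip: expose four null-bdev subsystems plus the discovery subsystem on 10.0.0.2:4420, register a port-4430 referral, read the six discovery-log records back with nvme discover and nvmf_get_subsystems, then tear everything down. A minimal sketch of that sequence, written against SPDK's scripts/rpc.py from the source tree rather than the harness's rpc_cmd wrapper, assuming the default /var/tmp/spdk.sock socket; cnode1 stands in for cnode1..cnode4 and the bdev sizes are illustrative:

    rpc=./scripts/rpc.py                       # assumes default /var/tmp/spdk.sock

    # transport, one null bdev, one subsystem, data + discovery listeners, referral
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_null_create Null1 1024 512       # 1024 MB / 512 B blocks, illustrative
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430

    # initiator view: discovery subsystem, the cnodeN entries, and the 4430 referral
    nvme discover -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_get_subsystems

    # teardown mirrors the loop in the trace
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    $rpc bdev_null_delete Null1
    $rpc nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430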
00:12:23.409 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:23.409 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:23.409 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:12:23.409 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:23.409 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:23.409 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:23.409 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:23.409 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:23.409 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:23.409 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:23.409 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:23.409 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:23.409 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:23.409 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:23.409 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:23.409 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:23.409 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:23.409 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:23.409 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:23.409 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:23.409 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:23.409 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:23.409 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:23.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.410 --rc genhtml_branch_coverage=1 00:12:23.410 --rc genhtml_function_coverage=1 00:12:23.410 --rc genhtml_legend=1 00:12:23.410 --rc geninfo_all_blocks=1 00:12:23.410 --rc geninfo_unexecuted_blocks=1 00:12:23.410 00:12:23.410 ' 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:23.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.410 --rc genhtml_branch_coverage=1 00:12:23.410 --rc genhtml_function_coverage=1 00:12:23.410 --rc genhtml_legend=1 00:12:23.410 --rc geninfo_all_blocks=1 00:12:23.410 --rc geninfo_unexecuted_blocks=1 00:12:23.410 00:12:23.410 ' 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:23.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.410 --rc genhtml_branch_coverage=1 00:12:23.410 --rc genhtml_function_coverage=1 00:12:23.410 --rc genhtml_legend=1 00:12:23.410 --rc geninfo_all_blocks=1 00:12:23.410 --rc geninfo_unexecuted_blocks=1 00:12:23.410 00:12:23.410 ' 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:23.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:23.410 --rc genhtml_branch_coverage=1 00:12:23.410 --rc genhtml_function_coverage=1 00:12:23.410 --rc genhtml_legend=1 00:12:23.410 --rc geninfo_all_blocks=1 00:12:23.410 --rc geninfo_unexecuted_blocks=1 00:12:23.410 00:12:23.410 ' 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:23.410 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:23.410 01:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:25.313 01:23:10 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:25.313 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:25.313 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:25.313 
01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:25.313 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:25.313 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # is_hw=yes 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:25.313 01:23:10 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:25.313 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:25.314 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:25.314 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:25.314 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:25.314 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:25.314 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:25.314 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:25.314 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:25.314 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:25.314 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:25.314 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:25.314 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:25.314 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:12:25.314 00:12:25.314 --- 10.0.0.2 ping statistics --- 00:12:25.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.314 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:12:25.314 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:25.314 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:25.314 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:12:25.314 00:12:25.314 --- 10.0.0.1 ping statistics --- 00:12:25.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.314 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:12:25.314 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:25.314 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # return 0 00:12:25.314 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:25.314 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:25.314 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:25.314 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:25.314 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:25.314 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:25.314 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:25.314 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:25.314 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:25.314 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:25.314 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.314 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # nvmfpid=1536437 00:12:25.314 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # waitforlisten 1536437 00:12:25.314 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:25.314 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 1536437 ']' 00:12:25.314 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.314 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:25.314 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
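Before the referrals test can reach anything, nvmftestinit splits the two E810 ports between a target network namespace and the initiator side, which is what the ip/ping trace above shows. Condensed into a standalone sketch (interface names, addresses, the iptables rule and the nvmf_tgt invocation are taken from the trace; the real harness additionally flushes stale addresses and tags its iptables rule with an SPDK_NVMF comment, and everything here needs root):

    NS=cvl_0_0_ns_spdk
    ip netns add $NS
    ip link set cvl_0_0 netns $NS                         # target-side port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address, root namespace
    ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0 # target address
    ip link set cvl_0_1 up
    ip netns exec $NS ip link set cvl_0_0 up
    ip netns exec $NS ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec $NS ping -c 1 10.0.0.1                  # target -> initiator

    # the target then runs inside the namespace, matching the waitforlisten trace
    ip netns exec $NS ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &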
00:12:25.314 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:25.314 01:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.314 [2024-10-13 01:23:10.844730] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:12:25.314 [2024-10-13 01:23:10.844843] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:25.572 [2024-10-13 01:23:10.918736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:25.572 [2024-10-13 01:23:10.967168] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:25.572 [2024-10-13 01:23:10.967230] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:25.572 [2024-10-13 01:23:10.967258] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:25.572 [2024-10-13 01:23:10.967269] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:25.573 [2024-10-13 01:23:10.967279] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:25.573 [2024-10-13 01:23:10.969001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:25.573 [2024-10-13 01:23:10.969069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:25.573 [2024-10-13 01:23:10.969134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:25.573 [2024-10-13 01:23:10.969137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.573 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:25.573 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:12:25.573 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:25.573 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:25.573 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.573 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:25.573 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:25.573 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.573 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.573 [2024-10-13 01:23:11.122226] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:25.573 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.573 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:25.573 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.573 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
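With the transport created and the discovery listener coming up on 10.0.0.2:8009, the trace that follows registers referrals to 127.0.0.2-127.0.0.4 on port 4430, reads them back both from the target (nvmf_discovery_get_referrals) and from the initiator (nvme discover against 8009), and removes them again. A hedged sketch of that round trip using scripts/rpc.py directly, with the trace's --hostnqn/--hostid flags omitted for brevity:

    rpc=./scripts/rpc.py
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        $rpc nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done

    # target-side check: three referral entries
    $rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

    # initiator-side check: the referrals appear in the discovery log served on 8009
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        $rpc nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
    done

The later part of the trace repeats the same add/get/remove pattern with -n discovery and -n nqn.2016-06.io.spdk:cnode1 to check that a referral's subsystem NQN is reported back correctly in both the RPC output and the discovery log.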
00:12:25.573 [2024-10-13 01:23:11.134501] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:25.573 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.573 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:25.573 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.573 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.573 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.573 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:25.573 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.573 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.831 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.831 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:25.831 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.831 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.831 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.831 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:25.831 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:25.831 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.831 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.831 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.831 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:25.831 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:25.831 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:25.831 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:25.831 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:25.831 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.831 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.831 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:25.831 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.831 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:25.831 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:25.831 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:25.831 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:25.831 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:25.831 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:25.831 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:25.831 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:26.089 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:26.089 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:26.089 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:26.089 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.089 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.089 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.089 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:26.089 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.089 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.089 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.089 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:26.089 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.089 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.089 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.089 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:26.089 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:26.089 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.089 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.089 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.089 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:26.089 01:23:11 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:26.089 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:26.089 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:26.089 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:26.089 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:26.089 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:26.347 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:26.347 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:26.347 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:26.347 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.347 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.347 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.347 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:26.347 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.347 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.347 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.347 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:26.347 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:26.347 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:26.347 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:26.347 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.347 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.347 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:26.347 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.347 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:26.347 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:26.347 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:26.347 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:12:26.347 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:26.347 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:26.347 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:26.347 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:26.605 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:26.605 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:26.605 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:26.605 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:26.605 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:26.605 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:26.605 01:23:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:26.863 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:26.863 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:26.863 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:26.863 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:26.863 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:26.863 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:26.863 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:26.863 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:26.863 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.863 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.863 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.863 01:23:12 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:26.863 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:26.863 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:26.863 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:26.863 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.863 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.863 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:26.863 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.863 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:26.863 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:26.863 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:26.863 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:26.863 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:26.863 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:26.863 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:26.863 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:27.121 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:27.121 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:27.121 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:27.121 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:27.121 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:27.121 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:27.121 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:27.379 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:27.379 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:27.379 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:27.379 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:12:27.379 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:27.379 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:27.379 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:27.379 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:27.379 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.379 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.379 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.379 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:27.379 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:27.379 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.379 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.379 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.379 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:27.379 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:27.379 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:27.379 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:27.379 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:27.379 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:27.379 01:23:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:27.637 01:23:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:27.637 01:23:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:27.637 01:23:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:27.637 01:23:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:27.637 01:23:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:27.637 01:23:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:27.637 01:23:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
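The trace above walks SPDK's discovery-referral RPCs end to end: referrals are added and removed with nvmf_discovery_add_referral / nvmf_discovery_remove_referral, and every change is cross-checked both through nvmf_discovery_get_referrals on the target and through the discovery log returned by nvme discover on the host. A minimal standalone sketch of that flow, assuming a target is already serving discovery on 10.0.0.2:8009 and that scripts/rpc.py from an SPDK checkout is invoked directly (rpc_cmd in the trace is a thin wrapper around it):

  # add a plain discovery referral and one that points at a specific subsystem
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
  # verify from the target side ...
  scripts/rpc.py nvmf_discovery_get_referrals | jq length
  # ... and from the host side via the discovery log page
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
  # tear the referrals back down
  scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430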
00:12:27.637 01:23:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:27.637 01:23:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:27.637 01:23:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:27.637 rmmod nvme_tcp 00:12:27.637 rmmod nvme_fabrics 00:12:27.637 rmmod nvme_keyring 00:12:27.637 01:23:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:27.637 01:23:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:27.637 01:23:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:27.637 01:23:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@515 -- # '[' -n 1536437 ']' 00:12:27.637 01:23:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # killprocess 1536437 00:12:27.637 01:23:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 1536437 ']' 00:12:27.637 01:23:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 1536437 00:12:27.637 01:23:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:12:27.637 01:23:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:27.895 01:23:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1536437 00:12:27.895 01:23:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:27.895 01:23:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:27.895 01:23:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1536437' 00:12:27.895 killing process with pid 1536437 00:12:27.895 01:23:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 1536437 00:12:27.895 01:23:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 1536437 00:12:27.895 01:23:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:27.895 01:23:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:27.895 01:23:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:27.895 01:23:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:27.895 01:23:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-save 00:12:27.895 01:23:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:27.895 01:23:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-restore 00:12:28.153 01:23:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:28.153 01:23:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:28.153 01:23:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:28.153 01:23:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:28.153 01:23:13 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.053 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:30.053 00:12:30.053 real 0m7.070s 00:12:30.053 user 0m11.469s 00:12:30.053 sys 0m2.248s 00:12:30.053 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:30.053 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:30.053 ************************************ 00:12:30.053 END TEST nvmf_referrals 00:12:30.053 ************************************ 00:12:30.053 01:23:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:30.053 01:23:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:30.053 01:23:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:30.053 01:23:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:30.053 ************************************ 00:12:30.053 START TEST nvmf_connect_disconnect 00:12:30.053 ************************************ 00:12:30.053 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:30.053 * Looking for test storage... 00:12:30.053 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:30.053 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:30.053 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:12:30.053 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:30.311 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:30.311 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:30.312 01:23:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:30.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.312 --rc genhtml_branch_coverage=1 00:12:30.312 --rc genhtml_function_coverage=1 00:12:30.312 --rc genhtml_legend=1 00:12:30.312 --rc geninfo_all_blocks=1 00:12:30.312 --rc geninfo_unexecuted_blocks=1 00:12:30.312 00:12:30.312 ' 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:30.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.312 --rc genhtml_branch_coverage=1 00:12:30.312 --rc genhtml_function_coverage=1 00:12:30.312 --rc genhtml_legend=1 00:12:30.312 --rc geninfo_all_blocks=1 00:12:30.312 --rc geninfo_unexecuted_blocks=1 00:12:30.312 00:12:30.312 ' 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:30.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.312 --rc genhtml_branch_coverage=1 00:12:30.312 --rc genhtml_function_coverage=1 00:12:30.312 --rc genhtml_legend=1 00:12:30.312 --rc geninfo_all_blocks=1 00:12:30.312 --rc geninfo_unexecuted_blocks=1 00:12:30.312 00:12:30.312 ' 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:30.312 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.312 --rc genhtml_branch_coverage=1 00:12:30.312 --rc genhtml_function_coverage=1 00:12:30.312 --rc genhtml_legend=1 00:12:30.312 --rc geninfo_all_blocks=1 00:12:30.312 --rc geninfo_unexecuted_blocks=1 00:12:30.312 00:12:30.312 ' 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:30.312 01:23:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:30.312 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:30.312 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:30.313 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.313 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:30.313 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.313 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:30.313 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:30.313 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:30.313 01:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:32.212 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:32.212 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:32.212 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:32.212 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:32.212 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:32.212 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:32.212 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:32.212 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:32.212 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:32.212 
01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:32.212 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:32.212 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:32.213 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:32.213 
01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:32.213 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:32.213 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
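The probe above keys off PCI vendor/device IDs (0x8086:0x159b for the Intel E810 ports on this box) and then resolves the matching netdev names through sysfs. A rough standalone equivalent, purely illustrative and assuming lspci is available:

  for pci in $(lspci -Dnd 8086:159b | awk '{print $1}'); do
      ls "/sys/bus/pci/devices/$pci/net/"        # e.g. cvl_0_0, cvl_0_1 on this rig
  done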
00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:32.213 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:32.213 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:32.489 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:32.489 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:12:32.489 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:32.489 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:32.489 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:32.489 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:32.489 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:32.489 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:32.489 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:32.489 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:12:32.489 00:12:32.489 --- 10.0.0.2 ping statistics --- 00:12:32.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.489 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:12:32.489 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:32.489 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:32.490 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:12:32.490 00:12:32.490 --- 10.0.0.1 ping statistics --- 00:12:32.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.490 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:12:32.490 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:32.490 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # return 0 00:12:32.490 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:32.490 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:32.490 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:32.490 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:32.490 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:32.490 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:32.490 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:32.490 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:32.490 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:32.490 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:32.490 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:32.490 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # nvmfpid=1538852 00:12:32.490 01:23:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:32.490 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # waitforlisten 1538852 00:12:32.490 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 1538852 ']' 00:12:32.491 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.491 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:32.491 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.491 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:32.491 01:23:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:32.491 [2024-10-13 01:23:17.990727] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:12:32.491 [2024-10-13 01:23:17.990810] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:32.755 [2024-10-13 01:23:18.068601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:32.755 [2024-10-13 01:23:18.120849] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:32.755 [2024-10-13 01:23:18.120909] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:32.755 [2024-10-13 01:23:18.120933] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:32.755 [2024-10-13 01:23:18.120948] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:32.755 [2024-10-13 01:23:18.120960] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
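Before the connect/disconnect test proper, nvmftestinit (traced above) carves the two E810 ports into a small two-node topology: one port is moved into a network namespace and hosts the target, its peer stays in the root namespace as the initiator, and a firewall pinhole is opened for the NVMe/TCP listener before the target application is launched inside the namespace. Condensed from the commands visible in the trace (the interface names cvl_0_0 / cvl_0_1 and the nvmf_tgt path are specific to this rig):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator-side port
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &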
00:12:32.755 [2024-10-13 01:23:18.122631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:32.755 [2024-10-13 01:23:18.122666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:32.755 [2024-10-13 01:23:18.122720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:32.755 [2024-10-13 01:23:18.122724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.755 01:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:32.755 01:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:12:32.755 01:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:32.755 01:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:32.755 01:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:32.755 01:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:32.755 01:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:32.755 01:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.755 01:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:32.755 [2024-10-13 01:23:18.275696] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:32.755 01:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.755 01:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:32.755 01:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.755 01:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:32.755 01:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.755 01:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:32.755 01:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:32.755 01:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.755 01:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:32.755 01:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.755 01:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:32.755 01:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.755 01:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:32.755 01:23:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.755 01:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:32.755 01:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.755 01:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:33.013 [2024-10-13 01:23:18.337374] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:33.013 01:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.013 01:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:33.013 01:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:33.013 01:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:33.013 01:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:35.539 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.067 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.966 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:42.492 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.388 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.916 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.443 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.970 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.877 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.406 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.933 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.833 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.361 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.890 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.418 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.316 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.917 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.815 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.340 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.866 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.763 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.299 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.824 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.721 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.249 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:33.144 [2024-10-13 01:24:18.641258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238ecb0 is same with the state(6) to be set 00:13:33.144 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.671 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.196 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:13:40.094 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.620 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.517 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.043 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.605 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:51.506 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:54.037 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.934 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.460 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.986 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.881 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.408 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.309 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.837 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.362 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.257 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.784 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.311 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.836 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.732 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:26.269 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.794 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.691 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:33.216 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.851 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.748 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:40.274 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:42.798 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.696 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:47.223 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.120 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.645 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:54.171 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.068 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:58.593 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:01.118 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.015 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.541 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:08.066 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:09.963 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.489 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:15.014 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:16.912 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:19.526 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:22.053 [2024-10-13 01:26:07.009322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238f330 is same with the state(6) to be set 00:15:22.053 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:23.950 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:26.544 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:29.070 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:30.968 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:33.494 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.389 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.915 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.440 [2024-10-13 01:26:25.496267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238f330 is same with the state(6) to be set 00:15:40.440 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:42.340 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:44.866 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:47.393 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:49.918 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:51.816 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:54.339 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:56.865 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:58.764 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:01.292 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:03.227 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:05.793 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:08.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:10.220 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:12.743 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:15.265 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:17.161 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:19.686 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:21.643 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:24.175 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:24.175 01:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:24.175 01:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:24.175 01:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:16:24.175 01:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:24.175 01:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:24.175 01:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:24.175 01:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:24.175 01:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:24.175 rmmod nvme_tcp 00:16:24.175 rmmod nvme_fabrics 00:16:24.175 rmmod nvme_keyring 00:16:24.175 01:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:24.175 01:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:24.175 01:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:24.175 01:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@515 -- # '[' -n 1538852 ']' 
00:16:24.175 01:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # killprocess 1538852 00:16:24.175 01:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1538852 ']' 00:16:24.175 01:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 1538852 00:16:24.175 01:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:16:24.175 01:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:24.175 01:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1538852 00:16:24.175 01:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:24.175 01:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:24.175 01:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1538852' 00:16:24.175 killing process with pid 1538852 00:16:24.175 01:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 1538852 00:16:24.175 01:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 1538852 00:16:24.434 01:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:16:24.434 01:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:16:24.434 01:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:16:24.434 01:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:24.434 01:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:16:24.434 01:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:16:24.434 01:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:16:24.434 01:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:24.434 01:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:24.434 01:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:24.434 01:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:24.434 01:27:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.334 01:27:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:26.334 00:16:26.334 real 3m56.274s 00:16:26.334 user 15m0.023s 00:16:26.334 sys 0m35.485s 00:16:26.334 01:27:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:26.334 01:27:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:26.334 ************************************ 00:16:26.334 END TEST nvmf_connect_disconnect 00:16:26.334 
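The nvmftestfini teardown traced above undoes the earlier setup; stripped of the xtrace noise it amounts to roughly the following (a simplified sketch of the traced commands, with the pid and interface names from this run; _remove_spdk_ns is approximated here by an explicit netns delete):

    modprobe -r nvme-tcp nvme-fabrics                      # unload host-side modules (the rmmod lines above)
    kill 1538852 && wait 1538852                           # stop the nvmf_tgt reactor process
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK-tagged firewall rules
    ip netns delete cvl_0_0_ns_spdk                        # tear down the target-side namespace
    ip -4 addr flush cvl_0_1                               # clear the initiator-side address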
************************************ 00:16:26.334 01:27:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:26.334 01:27:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:26.334 01:27:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:26.334 01:27:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:26.334 ************************************ 00:16:26.334 START TEST nvmf_multitarget 00:16:26.334 ************************************ 00:16:26.334 01:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:26.593 * Looking for test storage... 00:16:26.593 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:26.593 01:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:26.593 01:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:16:26.593 01:27:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:26.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.593 --rc genhtml_branch_coverage=1 00:16:26.593 --rc genhtml_function_coverage=1 00:16:26.593 --rc genhtml_legend=1 00:16:26.593 --rc geninfo_all_blocks=1 00:16:26.593 --rc geninfo_unexecuted_blocks=1 00:16:26.593 00:16:26.593 ' 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:26.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.593 --rc genhtml_branch_coverage=1 00:16:26.593 --rc genhtml_function_coverage=1 00:16:26.593 --rc genhtml_legend=1 00:16:26.593 --rc geninfo_all_blocks=1 00:16:26.593 --rc geninfo_unexecuted_blocks=1 00:16:26.593 00:16:26.593 ' 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:26.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.593 --rc genhtml_branch_coverage=1 00:16:26.593 --rc genhtml_function_coverage=1 00:16:26.593 --rc genhtml_legend=1 00:16:26.593 --rc geninfo_all_blocks=1 00:16:26.593 --rc geninfo_unexecuted_blocks=1 00:16:26.593 00:16:26.593 ' 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:26.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.593 --rc genhtml_branch_coverage=1 00:16:26.593 --rc genhtml_function_coverage=1 00:16:26.593 --rc genhtml_legend=1 00:16:26.593 --rc geninfo_all_blocks=1 00:16:26.593 --rc geninfo_unexecuted_blocks=1 00:16:26.593 00:16:26.593 ' 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:26.593 01:27:12 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:26.593 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:26.594 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.594 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.594 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.594 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:26.594 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.594 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:26.594 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:26.594 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:26.594 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:26.594 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:26.594 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:26.594 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:26.594 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:26.594 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:26.594 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:26.594 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:26.594 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:26.594 01:27:12 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:26.594 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:26.594 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:26.594 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:26.594 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:26.594 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:26.594 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.594 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:26.594 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.594 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:16:26.594 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:16:26.594 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:16:26.594 01:27:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
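The gather_supported_nvmf_pci_devs helper being traced here first fills per-family device-ID tables (e810/x722/mlx, continued just below) and then walks each matching PCI function's sysfs node to find its kernel netdev, which is where the "Found net devices under 0000:0a:00.x: cvl_0_x" lines further down come from. In outline (an illustrative sketch of the traced logic, not the library source):

    for pci in "${pci_devs[@]}"; do                        # e.g. 0000:0a:00.0 and 0000:0a:00.1 (0x8086:0x159b)
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdevs bound to that PCI function
        [[ -e ${pci_net_devs[0]} ]] || continue
        pci_net_devs=("${pci_net_devs[@]##*/}")
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done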
00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:28.495 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:28.495 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:28.495 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:28.495 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:28.496 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:28.496 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:28.496 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:28.496 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:28.496 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:28.496 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:28.496 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:28.496 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:16:28.496 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # is_hw=yes 00:16:28.496 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:16:28.496 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:16:28.496 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:16:28.496 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:28.496 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:28.496 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:28.496 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:28.496 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:28.496 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:28.496 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:28.496 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:28.496 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:28.496 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:28.496 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:28.496 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:28.496 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:28.496 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:28.754 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:28.754 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:28.754 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:28.754 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:28.754 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:28.754 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:28.754 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:28.754 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:28.754 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:28.754 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:28.754 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:16:28.754 00:16:28.754 --- 10.0.0.2 ping statistics --- 00:16:28.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.754 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:16:28.754 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:28.754 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:28.754 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:16:28.754 00:16:28.754 --- 10.0.0.1 ping statistics --- 00:16:28.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.754 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:16:28.754 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:28.754 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # return 0 00:16:28.754 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:28.754 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:28.754 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:28.754 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:28.754 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:28.754 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:28.754 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:28.754 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:28.754 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:28.754 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:28.754 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:28.754 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # nvmfpid=1570483 00:16:28.754 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:28.754 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # waitforlisten 1570483 00:16:28.754 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 1570483 ']' 00:16:28.754 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:28.754 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:28.754 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:28.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:28.754 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:28.754 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:28.754 [2024-10-13 01:27:14.256260] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
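Condensed from the nvmf_tcp_init/nvmfappstart trace above: one port of the dual-port NIC (cvl_0_0) is moved into a private network namespace to act as the target, its sibling (cvl_0_1) stays in the root namespace as the initiator, a firewall exception is opened for TCP/4420, reachability is verified with ping in both directions, and nvmf_tgt is launched inside the namespace. A condensed sketch of that sequence (paths and the iptables comment abbreviated):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                    # target-side port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator address (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                                           # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1             # target namespace -> root namespace
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &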
00:16:28.755 [2024-10-13 01:27:14.256348] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:28.755 [2024-10-13 01:27:14.320532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:29.013 [2024-10-13 01:27:14.368329] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:29.013 [2024-10-13 01:27:14.368389] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:29.013 [2024-10-13 01:27:14.368402] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:29.013 [2024-10-13 01:27:14.368413] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:29.013 [2024-10-13 01:27:14.368423] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:29.013 [2024-10-13 01:27:14.369969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:29.013 [2024-10-13 01:27:14.369997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:29.013 [2024-10-13 01:27:14.370056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:29.013 [2024-10-13 01:27:14.370059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.013 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:29.013 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:16:29.013 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:29.013 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:29.013 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:29.013 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:29.013 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:29.013 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:29.013 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:29.270 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:29.271 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:16:29.271 "nvmf_tgt_1" 00:16:29.271 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:29.271 "nvmf_tgt_2" 00:16:29.527 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
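The multitarget test body, traced above and continuing below, reduces to a handful of RPC calls against the running nvmf_tgt: the target count starts at 1 (the default target), two named targets are added, the count is checked against 3, and both are deleted again. A sketch using the same multitarget_rpc.py helper the log invokes (path shortened; the -n/-s flags are passed exactly as in the trace):

    rpc=./test/nvmf/target/multitarget_rpc.py              # full path appears in the log above
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]       # only the default target exists
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]       # default target + the two new ones
    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]       # back to just the default target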
00:16:29.527 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:29.527 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:29.527 01:27:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:29.527 true 00:16:29.784 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:29.784 true 00:16:29.784 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:29.784 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:29.784 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:29.784 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:29.784 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:29.784 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # nvmfcleanup 00:16:29.784 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:16:29.784 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:29.784 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:16:29.784 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:29.784 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:29.784 rmmod nvme_tcp 00:16:29.784 rmmod nvme_fabrics 00:16:30.042 rmmod nvme_keyring 00:16:30.042 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:30.042 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:16:30.042 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:16:30.042 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@515 -- # '[' -n 1570483 ']' 00:16:30.042 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # killprocess 1570483 00:16:30.042 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 1570483 ']' 00:16:30.042 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 1570483 00:16:30.042 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:16:30.043 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:30.043 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1570483 00:16:30.043 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:30.043 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:30.043 01:27:15 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1570483' 00:16:30.043 killing process with pid 1570483 00:16:30.043 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 1570483 00:16:30.043 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 1570483 00:16:30.043 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:16:30.043 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:16:30.043 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:16:30.043 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:16:30.302 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-save 00:16:30.302 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:16:30.302 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-restore 00:16:30.302 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:30.302 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:30.302 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:30.302 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:30.302 01:27:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:32.203 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:32.203 00:16:32.203 real 0m5.775s 00:16:32.203 user 0m6.681s 00:16:32.203 sys 0m1.883s 00:16:32.203 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:32.203 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:32.203 ************************************ 00:16:32.203 END TEST nvmf_multitarget 00:16:32.203 ************************************ 00:16:32.203 01:27:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:32.203 01:27:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:32.203 01:27:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:32.203 01:27:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:32.203 ************************************ 00:16:32.203 START TEST nvmf_rpc 00:16:32.203 ************************************ 00:16:32.203 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:32.203 * Looking for test storage... 
00:16:32.203 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:32.203 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:32.203 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:16:32.203 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:32.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.462 --rc genhtml_branch_coverage=1 00:16:32.462 --rc genhtml_function_coverage=1 00:16:32.462 --rc genhtml_legend=1 00:16:32.462 --rc geninfo_all_blocks=1 00:16:32.462 --rc geninfo_unexecuted_blocks=1 00:16:32.462 00:16:32.462 ' 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:32.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.462 --rc genhtml_branch_coverage=1 00:16:32.462 --rc genhtml_function_coverage=1 00:16:32.462 --rc genhtml_legend=1 00:16:32.462 --rc geninfo_all_blocks=1 00:16:32.462 --rc geninfo_unexecuted_blocks=1 00:16:32.462 00:16:32.462 ' 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:32.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.462 --rc genhtml_branch_coverage=1 00:16:32.462 --rc genhtml_function_coverage=1 00:16:32.462 --rc genhtml_legend=1 00:16:32.462 --rc geninfo_all_blocks=1 00:16:32.462 --rc geninfo_unexecuted_blocks=1 00:16:32.462 00:16:32.462 ' 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:32.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.462 --rc genhtml_branch_coverage=1 00:16:32.462 --rc genhtml_function_coverage=1 00:16:32.462 --rc genhtml_legend=1 00:16:32.462 --rc geninfo_all_blocks=1 00:16:32.462 --rc geninfo_unexecuted_blocks=1 00:16:32.462 00:16:32.462 ' 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
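As with the previous test, sourcing nvmf/common.sh (traced above and below) establishes the shared test defaults: ports 4420/4421/4422, serial SPDKISFASTANDAWESOME, and a host identity generated on the fly. The host NQN/ID pair seen in the trace comes from nvme-cli, roughly as follows (a sketch, not the exact library code):

    hostnqn=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    hostid=${hostnqn##*:}           # the bare UUID, as stored in NVME_HOSTID above
    # later handed to the initiator as: nvme connect --hostnqn="$hostnqn" --hostid="$hostid" ...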
00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.462 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:32.463 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.463 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:16:32.463 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:32.463 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:32.463 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:32.463 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:32.463 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:32.463 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:32.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:32.463 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:32.463 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:32.463 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:32.463 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:32.463 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:32.463 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:32.463 01:27:17 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:32.463 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:32.463 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:32.463 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:32.463 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:32.463 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:32.463 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:32.463 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:16:32.463 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:16:32.463 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:16:32.463 01:27:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:34.995 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:34.995 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:34.995 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:34.995 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:34.995 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:34.996 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:34.996 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:16:34.996 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # is_hw=yes 00:16:34.996 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:16:34.996 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:16:34.996 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:16:34.996 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:34.996 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:34.996 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:34.996 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:34.996 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:34.996 01:27:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:34.996 01:27:20 
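The device-discovery phase traced above reduces to scanning sysfs for supported Intel E810 functions (0x8086:0x159b in this run) and recording the kernel net devices bound to them (cvl_0_0, cvl_0_1). A minimal sketch of that idea, assuming the sysfs layout shown in the trace; variable names here are illustrative, not the script's own:

    # Enumerate E810 PCI functions and the net devices bound to them.
    pci_devs=()
    net_devs=()
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(cat "$pci/vendor"); device=$(cat "$pci/device")
        [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
        pci_devs+=("${pci##*/}")
        for net in "$pci"/net/*; do
            [[ -e $net ]] && net_devs+=("${net##*/}")   # e.g. cvl_0_0, cvl_0_1
        done
    done
    echo "Found ${#pci_devs[@]} E810 functions: ${pci_devs[*]} (net: ${net_devs[*]})"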
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:34.996 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:34.996 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:16:34.996 00:16:34.996 --- 10.0.0.2 ping statistics --- 00:16:34.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.996 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:34.996 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:34.996 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:16:34.996 00:16:34.996 --- 10.0.0.1 ping statistics --- 00:16:34.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.996 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # return 0 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # nvmfpid=1572583 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # waitforlisten 1572583 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 1572583 ']' 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.996 [2024-10-13 01:27:20.197834] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
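The nvmf_tcp_init sequence above amounts to moving one E810 port into a private network namespace, addressing both ends on 10.0.0.0/24, opening TCP port 4420, and verifying reachability in both directions. Condensed from the commands captured in the trace (interface and namespace names as reported there):

    # Target side lives in its own namespace; initiator stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port on the initiator-facing interface, then ping both ways.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1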
00:16:34.996 [2024-10-13 01:27:20.197935] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:34.996 [2024-10-13 01:27:20.265971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:34.996 [2024-10-13 01:27:20.314088] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:34.996 [2024-10-13 01:27:20.314142] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:34.996 [2024-10-13 01:27:20.314165] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:34.996 [2024-10-13 01:27:20.314175] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:34.996 [2024-10-13 01:27:20.314185] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:34.996 [2024-10-13 01:27:20.315759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:34.996 [2024-10-13 01:27:20.315799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:34.996 [2024-10-13 01:27:20.315893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.996 [2024-10-13 01:27:20.315890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:16:34.996 "tick_rate": 2700000000, 00:16:34.996 "poll_groups": [ 00:16:34.996 { 00:16:34.996 "name": "nvmf_tgt_poll_group_000", 00:16:34.996 "admin_qpairs": 0, 00:16:34.996 "io_qpairs": 0, 00:16:34.996 "current_admin_qpairs": 0, 00:16:34.996 "current_io_qpairs": 0, 00:16:34.996 "pending_bdev_io": 0, 00:16:34.996 "completed_nvme_io": 0, 00:16:34.996 "transports": [] 00:16:34.996 }, 00:16:34.996 { 00:16:34.996 "name": "nvmf_tgt_poll_group_001", 00:16:34.996 "admin_qpairs": 0, 00:16:34.996 "io_qpairs": 0, 00:16:34.996 "current_admin_qpairs": 0, 00:16:34.996 "current_io_qpairs": 0, 00:16:34.996 "pending_bdev_io": 0, 00:16:34.996 "completed_nvme_io": 0, 00:16:34.996 "transports": [] 00:16:34.996 }, 00:16:34.996 { 00:16:34.996 "name": "nvmf_tgt_poll_group_002", 00:16:34.996 "admin_qpairs": 0, 00:16:34.996 "io_qpairs": 0, 00:16:34.996 
"current_admin_qpairs": 0, 00:16:34.996 "current_io_qpairs": 0, 00:16:34.996 "pending_bdev_io": 0, 00:16:34.996 "completed_nvme_io": 0, 00:16:34.996 "transports": [] 00:16:34.996 }, 00:16:34.996 { 00:16:34.996 "name": "nvmf_tgt_poll_group_003", 00:16:34.996 "admin_qpairs": 0, 00:16:34.996 "io_qpairs": 0, 00:16:34.996 "current_admin_qpairs": 0, 00:16:34.996 "current_io_qpairs": 0, 00:16:34.996 "pending_bdev_io": 0, 00:16:34.996 "completed_nvme_io": 0, 00:16:34.996 "transports": [] 00:16:34.996 } 00:16:34.996 ] 00:16:34.996 }' 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:34.996 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:34.997 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.997 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:34.997 [2024-10-13 01:27:20.563129] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:34.997 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.255 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:16:35.255 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.255 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:35.255 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.255 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:16:35.255 "tick_rate": 2700000000, 00:16:35.255 "poll_groups": [ 00:16:35.255 { 00:16:35.255 "name": "nvmf_tgt_poll_group_000", 00:16:35.255 "admin_qpairs": 0, 00:16:35.255 "io_qpairs": 0, 00:16:35.255 "current_admin_qpairs": 0, 00:16:35.255 "current_io_qpairs": 0, 00:16:35.255 "pending_bdev_io": 0, 00:16:35.255 "completed_nvme_io": 0, 00:16:35.255 "transports": [ 00:16:35.255 { 00:16:35.255 "trtype": "TCP" 00:16:35.255 } 00:16:35.255 ] 00:16:35.255 }, 00:16:35.255 { 00:16:35.255 "name": "nvmf_tgt_poll_group_001", 00:16:35.255 "admin_qpairs": 0, 00:16:35.255 "io_qpairs": 0, 00:16:35.255 "current_admin_qpairs": 0, 00:16:35.255 "current_io_qpairs": 0, 00:16:35.255 "pending_bdev_io": 0, 00:16:35.255 "completed_nvme_io": 0, 00:16:35.255 "transports": [ 00:16:35.255 { 00:16:35.255 "trtype": "TCP" 00:16:35.255 } 00:16:35.255 ] 00:16:35.255 }, 00:16:35.255 { 00:16:35.255 "name": "nvmf_tgt_poll_group_002", 00:16:35.255 "admin_qpairs": 0, 00:16:35.255 "io_qpairs": 0, 00:16:35.255 "current_admin_qpairs": 0, 00:16:35.255 "current_io_qpairs": 0, 00:16:35.255 "pending_bdev_io": 0, 00:16:35.255 "completed_nvme_io": 0, 00:16:35.255 "transports": [ 00:16:35.255 { 00:16:35.255 "trtype": "TCP" 
00:16:35.255 } 00:16:35.255 ] 00:16:35.255 }, 00:16:35.255 { 00:16:35.255 "name": "nvmf_tgt_poll_group_003", 00:16:35.255 "admin_qpairs": 0, 00:16:35.255 "io_qpairs": 0, 00:16:35.255 "current_admin_qpairs": 0, 00:16:35.255 "current_io_qpairs": 0, 00:16:35.255 "pending_bdev_io": 0, 00:16:35.255 "completed_nvme_io": 0, 00:16:35.255 "transports": [ 00:16:35.255 { 00:16:35.255 "trtype": "TCP" 00:16:35.255 } 00:16:35.255 ] 00:16:35.255 } 00:16:35.255 ] 00:16:35.255 }' 00:16:35.255 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:16:35.255 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:35.255 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:35.255 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:35.255 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:16:35.255 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:16:35.255 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:35.255 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:35.255 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:35.255 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:16:35.255 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:16:35.255 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:16:35.255 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:16:35.255 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:35.255 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.255 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:35.255 Malloc1 00:16:35.255 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.255 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:35.255 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.255 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:35.255 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.255 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:35.256 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.256 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:35.256 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.256 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:16:35.256 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
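Up to this point the test has created the TCP transport, confirmed every poll group now lists a TCP transport with zero qpairs, and built the first subsystem on a 64 MiB malloc bdev with any-host access switched off. rpc_cmd in the trace is the test suite's RPC wrapper; a sketch of the same steps issued through scripts/rpc.py directly, assuming the default /var/tmp/spdk.sock socket (transport flags copied verbatim from the trace):

    # Create the TCP transport and sanity-check the per-poll-group stats.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_get_stats | jq '.poll_groups[].transports[0].trtype'   # expect "TCP" for all 4 groups
    ./scripts/rpc.py nvmf_get_stats | jq '[.poll_groups[].io_qpairs] | add'      # expect 0
    # Back a subsystem with a 64 MiB / 512 B-block malloc bdev; leave host access restricted.
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1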
common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.256 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:35.256 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.256 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:35.256 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.256 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:35.256 [2024-10-13 01:27:20.729427] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:35.256 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.256 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:35.256 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:16:35.256 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:35.256 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:16:35.256 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:35.256 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:16:35.256 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:35.256 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:16:35.256 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:35.256 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:16:35.256 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:16:35.256 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:16:35.256 [2024-10-13 01:27:20.752017] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:16:35.256 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:35.256 could not add new controller: failed to write to nvme-fabrics device 00:16:35.256 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:16:35.256 01:27:20 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:35.256 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:35.256 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:35.256 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:35.256 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.256 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:35.256 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.256 01:27:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:35.822 01:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:16:35.822 01:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:35.822 01:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:35.822 01:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:35.822 01:27:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:38.349 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:38.349 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:38.349 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:38.349 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:38.349 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:38.349 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:38.349 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:38.349 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:38.349 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:38.349 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:38.349 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:38.349 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:38.349 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:38.349 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:38.349 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:38.349 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:38.349 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.349 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:38.349 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.349 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:38.349 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:16:38.349 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:38.349 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:16:38.349 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:38.349 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:16:38.349 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:38.349 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:16:38.349 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:38.349 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:16:38.349 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:16:38.349 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:38.350 [2024-10-13 01:27:23.551385] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:16:38.350 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:38.350 could not add new controller: failed to write to nvme-fabrics device 00:16:38.350 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:16:38.350 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:38.350 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:38.350 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:38.350 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:16:38.350 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.350 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:38.350 
01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.350 01:27:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:38.915 01:27:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:16:38.915 01:27:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:38.915 01:27:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:38.915 01:27:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:38.915 01:27:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:40.814 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:40.814 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:40.814 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:40.814 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:40.814 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:40.814 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:40.814 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:40.814 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:40.814 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:40.814 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:40.814 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:40.814 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:40.814 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:40.814 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:40.814 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:40.814 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:40.814 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.814 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.814 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.814 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:16:40.814 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:40.814 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:40.814 
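The passage above (target/rpc.sh@58 through @78 in the trace) is the host access-control check: with allow_any_host disabled, a connect from the host NQN is expected to fail with an I/O error, and it only succeeds once the host is explicitly added or any-host access is re-enabled; the subsystem is then deleted. Condensed, using the host and subsystem NQNs from the trace:

    # Expected to fail: no allowed hosts on the subsystem yet.
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
         --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 && exit 1
    # Whitelist the host NQN (or re-enable any-host access) and the same connect succeeds.
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
         nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
         --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1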
01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.814 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.814 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.814 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:40.814 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.814 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.814 [2024-10-13 01:27:26.346494] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:40.814 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.814 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:40.814 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.814 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.814 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.814 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:40.814 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.814 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:40.814 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.814 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:41.748 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:41.748 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:41.748 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:41.748 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:41.748 01:27:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:43.666 01:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:43.666 01:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:43.666 01:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:43.666 01:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:43.666 01:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:43.666 01:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:43.666 01:27:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:43.666 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:43.666 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:43.666 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:43.666 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:43.666 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:43.666 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:43.666 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:43.666 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:43.666 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:43.666 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.666 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.666 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.666 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:43.666 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.666 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.666 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.666 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:43.666 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:43.666 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.666 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.666 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.666 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:43.666 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.666 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.666 [2024-10-13 01:27:29.123081] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:43.666 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.666 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:43.666 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.666 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.666 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.666 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:43.666 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.666 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.666 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.666 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:44.232 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:44.232 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:44.232 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:44.232 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:44.232 01:27:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:46.758 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:46.758 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:46.758 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:46.758 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:46.758 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:46.758 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:46.758 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:46.758 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:46.758 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:46.758 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:46.758 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:46.758 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:46.758 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:46.758 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:46.758 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:46.758 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:46.758 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.758 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.758 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.758 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:46.758 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.758 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.758 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.758 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:46.758 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:46.758 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.758 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.758 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.758 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:46.758 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.758 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.758 [2024-10-13 01:27:31.897392] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:46.758 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.758 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:46.758 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.758 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.758 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.758 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:46.758 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.758 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.758 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.758 01:27:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:47.016 01:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:47.016 01:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:47.016 01:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:47.016 01:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:47.016 01:27:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:49.543 
01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:49.543 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:49.543 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:49.543 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:49.543 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:49.543 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:49.543 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:49.543 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:49.543 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:49.543 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:49.543 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:49.543 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:49.543 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:49.543 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:49.543 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:49.543 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:49.543 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.543 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.543 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.543 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:49.543 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.543 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.543 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.543 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:49.543 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:49.543 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.543 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.543 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.543 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:49.543 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:49.543 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.543 [2024-10-13 01:27:34.757881] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:49.543 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.544 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:49.544 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.544 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.544 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.544 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:49.544 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.544 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.544 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.544 01:27:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:50.108 01:27:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:50.108 01:27:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:50.108 01:27:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:50.108 01:27:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:50.108 01:27:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:52.006 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:52.006 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:52.006 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:52.006 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:52.006 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:52.006 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:52.006 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:52.006 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:52.006 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:52.006 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:52.006 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:52.006 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 
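The waitforserial / waitforserial_disconnect polling that keeps repeating in this trace (autotest_common.sh lines 1198-1231 in the prefixes) simply watches lsblk until the namespace with the expected serial appears or disappears on the initiator. A minimal sketch of that logic, reconstructed from the traced commands; the retry delays inside the loops are assumptions, everything else (the 2s settle, the 15-iteration cap, the lsblk/grep pipelines) is visible in the trace:

waitforserial() {
    local serial=$1 i=0
    local nvme_device_counter=${2:-1} nvme_devices=0
    sleep 2                                   # fixed settle time, as traced
    while (( i++ <= 15 )); do
        # count block devices whose SERIAL column matches (lsblk + grep -c, as traced)
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == nvme_device_counter )) && return 0
        sleep 1                               # retry delay not visible in the trace; assumed
    done
    return 1
}

waitforserial_disconnect() {
    local serial=$1 i=0
    # poll until no block device reports the serial any more (grep -q -w, as traced)
    while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
        (( i++ > 15 )) && return 1
        sleep 1                               # retry delay assumed
    done
    return 0
}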
00:16:52.006 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:52.006 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:52.006 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:52.006 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:52.006 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.006 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:52.006 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.006 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:52.006 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.006 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:52.006 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.006 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:52.006 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:52.006 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.006 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:52.006 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.006 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:52.006 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.006 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:52.006 [2024-10-13 01:27:37.556952] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:52.006 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.006 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:52.006 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.006 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:52.006 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.006 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:52.006 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.006 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:52.006 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.006 01:27:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:52.939 01:27:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:52.939 01:27:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:16:52.939 01:27:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:52.939 01:27:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:52.939 01:27:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:16:54.839 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:54.839 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:54.839 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:54.839 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:54.839 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:54.839 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:16:54.839 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:54.839 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:54.839 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:54.839 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:16:54.839 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:54.839 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:54.839 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:54.839 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:54.839 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:16:54.839 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:54.839 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.839 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.839 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.839 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:54.839 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.839 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.839 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.839 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:16:55.098 
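Each pass of the loop just traced (target/rpc.sh lines 81-94 in the prefixes) builds a subsystem, attaches a host over TCP, and tears everything down again. A minimal sketch of one iteration using only the commands visible in the trace; rpc_cmd and the wait helpers are the test suite's wrappers, and the hostnqn/hostid UUID is the one generated earlier in this run:

for i in $(seq 1 $loops); do
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1

    # attach from the initiator side and wait for the namespace to show up
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
                 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 \
                 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    waitforserial SPDKISFASTANDAWESOME

    # detach and remove the subsystem so the next pass starts clean
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    waitforserial_disconnect SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done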
01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.098 [2024-10-13 01:27:40.432905] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.098 [2024-10-13 01:27:40.480958] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.098 
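The iterations being traced here come from the second loop in rpc.sh (lines 99-107 in the prefixes). It repeats a similar subsystem lifecycle five times but never connects a host: only namespace add/remove and subsystem delete are exercised through the RPC interface. A minimal sketch of one pass, again using only commands shown in the trace:

for i in $(seq 1 $loops); do
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1   # no explicit nsid; removed as nsid 1 below
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done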
01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.098 [2024-10-13 01:27:40.529098] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.098 [2024-10-13 01:27:40.577262] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.098 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.099 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.099 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:55.099 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.099 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.099 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.099 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:55.099 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.099 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.099 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.099 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:55.099 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:55.099 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.099 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.099 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.099 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:55.099 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.099 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.099 [2024-10-13 01:27:40.625428] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:55.099 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.099 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:55.099 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.099 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.099 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.099 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:55.099 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.099 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.099 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.099 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:55.099 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.099 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.099 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.099 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:55.099 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.099 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.099 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.099 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:16:55.099 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.099 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.357 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.357 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:16:55.357 "tick_rate": 2700000000, 00:16:55.357 "poll_groups": [ 00:16:55.357 { 00:16:55.357 "name": "nvmf_tgt_poll_group_000", 00:16:55.357 "admin_qpairs": 2, 00:16:55.357 "io_qpairs": 84, 00:16:55.357 "current_admin_qpairs": 0, 00:16:55.357 "current_io_qpairs": 0, 00:16:55.357 "pending_bdev_io": 0, 00:16:55.357 "completed_nvme_io": 186, 00:16:55.357 "transports": [ 00:16:55.357 { 00:16:55.357 "trtype": "TCP" 00:16:55.357 } 00:16:55.357 ] 00:16:55.357 }, 00:16:55.357 { 00:16:55.357 "name": "nvmf_tgt_poll_group_001", 00:16:55.358 "admin_qpairs": 2, 00:16:55.358 "io_qpairs": 84, 00:16:55.358 "current_admin_qpairs": 0, 00:16:55.358 "current_io_qpairs": 0, 00:16:55.358 "pending_bdev_io": 0, 00:16:55.358 "completed_nvme_io": 139, 00:16:55.358 "transports": [ 00:16:55.358 { 00:16:55.358 "trtype": "TCP" 00:16:55.358 } 00:16:55.358 ] 00:16:55.358 }, 00:16:55.358 { 00:16:55.358 "name": "nvmf_tgt_poll_group_002", 00:16:55.358 "admin_qpairs": 1, 00:16:55.358 "io_qpairs": 84, 00:16:55.358 "current_admin_qpairs": 0, 00:16:55.358 "current_io_qpairs": 0, 00:16:55.358 "pending_bdev_io": 0, 00:16:55.358 "completed_nvme_io": 134, 00:16:55.358 "transports": [ 00:16:55.358 { 00:16:55.358 "trtype": "TCP" 00:16:55.358 } 00:16:55.358 ] 00:16:55.358 }, 00:16:55.358 { 00:16:55.358 "name": "nvmf_tgt_poll_group_003", 00:16:55.358 "admin_qpairs": 2, 00:16:55.358 "io_qpairs": 84, 00:16:55.358 "current_admin_qpairs": 0, 00:16:55.358 "current_io_qpairs": 0, 00:16:55.358 "pending_bdev_io": 0, 00:16:55.358 "completed_nvme_io": 227, 00:16:55.358 "transports": [ 00:16:55.358 { 00:16:55.358 "trtype": "TCP" 00:16:55.358 } 00:16:55.358 ] 00:16:55.358 } 00:16:55.358 ] 00:16:55.358 }' 00:16:55.358 01:27:40 
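After the loops, nvmf_get_stats is captured into the stats variable shown above and then reduced with the jsum helper traced next, which sums a jq filter across the poll groups to confirm that admin and I/O queue pairs were actually created during the test. A minimal sketch of that reduction; feeding the captured $stats JSON into jq via a herestring is an assumption, the jq filter and awk summation are exactly as traced:

# sum a numeric field across all poll groups in the captured stats JSON
jsum() {
    local filter=$1
    jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
}

(( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # the trace sums to 7 (2+2+1+2)
(( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # the trace sums to 336 (4 groups x 84)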
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:16:55.358 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:55.358 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:55.358 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:55.358 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:16:55.358 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:16:55.358 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:55.358 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:55.358 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:55.358 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:16:55.358 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:16:55.358 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:16:55.358 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:16:55.358 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # nvmfcleanup 00:16:55.358 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:16:55.358 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:55.358 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:16:55.358 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:55.358 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:55.358 rmmod nvme_tcp 00:16:55.358 rmmod nvme_fabrics 00:16:55.358 rmmod nvme_keyring 00:16:55.358 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:55.358 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:16:55.358 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:16:55.358 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@515 -- # '[' -n 1572583 ']' 00:16:55.358 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # killprocess 1572583 00:16:55.358 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 1572583 ']' 00:16:55.358 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 1572583 00:16:55.358 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:16:55.358 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:55.358 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1572583 00:16:55.358 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:55.358 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:55.358 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
1572583' 00:16:55.358 killing process with pid 1572583 00:16:55.358 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 1572583 00:16:55.358 01:27:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 1572583 00:16:55.616 01:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:16:55.616 01:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:16:55.616 01:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:16:55.616 01:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:16:55.616 01:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-save 00:16:55.616 01:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:16:55.616 01:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-restore 00:16:55.616 01:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:55.616 01:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:55.616 01:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:55.616 01:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:55.616 01:27:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:58.176 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:58.176 00:16:58.176 real 0m25.389s 00:16:58.176 user 1m22.405s 00:16:58.176 sys 0m4.193s 00:16:58.176 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:58.176 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:58.176 ************************************ 00:16:58.176 END TEST nvmf_rpc 00:16:58.176 ************************************ 00:16:58.176 01:27:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:58.176 01:27:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:58.176 01:27:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:58.176 01:27:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:58.176 ************************************ 00:16:58.176 START TEST nvmf_invalid 00:16:58.176 ************************************ 00:16:58.176 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:58.176 * Looking for test storage... 
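The nvmftestfini sequence that closes the nvmf_rpc test is traced just above: the kernel NVMe/TCP initiator modules are unloaded, the SPDK target process (pid 1572583) is killed and reaped, the SPDK-tagged iptables rules are dropped, and the target network namespace is removed. Condensed from the traced commands; the namespace deletion itself runs inside _remove_spdk_ns with its output redirected, so that step is assumed rather than shown:

# unload the initiator-side kernel modules (the rmmod messages appear in the log)
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# stop the SPDK target process started for this test
kill 1572583
wait 1572583

# drop SPDK iptables rules, remove the target netns, flush the initiator address
iptables-save | grep -v SPDK_NVMF | iptables-restore
_remove_spdk_ns            # traced with output redirected; assumed to delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1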
00:16:58.176 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:58.176 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:58.176 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:16:58.176 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:58.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.177 --rc genhtml_branch_coverage=1 00:16:58.177 --rc genhtml_function_coverage=1 00:16:58.177 --rc genhtml_legend=1 00:16:58.177 --rc geninfo_all_blocks=1 00:16:58.177 --rc geninfo_unexecuted_blocks=1 00:16:58.177 00:16:58.177 ' 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:58.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.177 --rc genhtml_branch_coverage=1 00:16:58.177 --rc genhtml_function_coverage=1 00:16:58.177 --rc genhtml_legend=1 00:16:58.177 --rc geninfo_all_blocks=1 00:16:58.177 --rc geninfo_unexecuted_blocks=1 00:16:58.177 00:16:58.177 ' 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:58.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.177 --rc genhtml_branch_coverage=1 00:16:58.177 --rc genhtml_function_coverage=1 00:16:58.177 --rc genhtml_legend=1 00:16:58.177 --rc geninfo_all_blocks=1 00:16:58.177 --rc geninfo_unexecuted_blocks=1 00:16:58.177 00:16:58.177 ' 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:58.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.177 --rc genhtml_branch_coverage=1 00:16:58.177 --rc genhtml_function_coverage=1 00:16:58.177 --rc genhtml_legend=1 00:16:58.177 --rc geninfo_all_blocks=1 00:16:58.177 --rc geninfo_unexecuted_blocks=1 00:16:58.177 00:16:58.177 ' 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:16:58.177 01:27:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:58.177 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:58.177 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:58.178 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:58.178 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:58.178 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:58.178 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:16:58.178 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:16:58.178 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:16:58.178 01:27:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:00.079 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:00.079 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:00.079 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:00.079 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # is_hw=yes 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:00.079 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:00.079 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:00.079 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:17:00.079 00:17:00.080 --- 10.0.0.2 ping statistics --- 00:17:00.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:00.080 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:17:00.080 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:00.080 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:00.080 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:17:00.080 00:17:00.080 --- 10.0.0.1 ping statistics --- 00:17:00.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:00.080 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:17:00.080 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:00.080 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # return 0 00:17:00.080 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:00.080 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:00.080 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:00.080 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:00.080 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:00.080 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:00.080 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:00.080 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:00.080 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:00.080 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:00.080 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:00.338 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # nvmfpid=1577092 00:17:00.338 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:00.338 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # waitforlisten 1577092 00:17:00.338 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 1577092 ']' 00:17:00.338 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:00.338 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:00.338 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:00.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:00.338 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:00.338 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:00.338 [2024-10-13 01:27:45.701956] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
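At this point nvmf_tcp_init has finished wiring up the physical test topology: the target-side E810 port (cvl_0_0 on this host) has been moved into its own network namespace, the initiator-side port (cvl_0_1) stays in the root namespace, and connectivity is verified with one ping in each direction before the target application is launched inside that namespace. A condensed sketch of the commands the trace above just ran (interface names and IPs are specific to this run):

  ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                  # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # and back

nvmf_tgt itself is then started under ip netns exec cvl_0_0_ns_spdk, which is why the EAL and reactor messages that follow are produced from inside the namespace.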
00:17:00.338 [2024-10-13 01:27:45.702043] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:00.338 [2024-10-13 01:27:45.764650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:00.338 [2024-10-13 01:27:45.810454] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:00.338 [2024-10-13 01:27:45.810525] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:00.338 [2024-10-13 01:27:45.810549] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:00.338 [2024-10-13 01:27:45.810560] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:00.338 [2024-10-13 01:27:45.810570] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:00.338 [2024-10-13 01:27:45.812003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:00.338 [2024-10-13 01:27:45.812062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:00.338 [2024-10-13 01:27:45.812128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:00.338 [2024-10-13 01:27:45.812131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.596 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:00.596 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:17:00.596 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:00.596 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:00.596 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:00.596 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:00.596 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:00.596 01:27:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode6108 00:17:00.854 [2024-10-13 01:27:46.215052] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:00.854 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:17:00.854 { 00:17:00.854 "nqn": "nqn.2016-06.io.spdk:cnode6108", 00:17:00.854 "tgt_name": "foobar", 00:17:00.854 "method": "nvmf_create_subsystem", 00:17:00.854 "req_id": 1 00:17:00.854 } 00:17:00.854 Got JSON-RPC error response 00:17:00.854 response: 00:17:00.854 { 00:17:00.854 "code": -32603, 00:17:00.854 "message": "Unable to find target foobar" 00:17:00.854 }' 00:17:00.854 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:17:00.854 { 00:17:00.854 "nqn": "nqn.2016-06.io.spdk:cnode6108", 00:17:00.854 "tgt_name": "foobar", 00:17:00.854 "method": "nvmf_create_subsystem", 00:17:00.854 "req_id": 1 00:17:00.854 } 00:17:00.854 Got JSON-RPC error response 00:17:00.854 
response: 00:17:00.854 { 00:17:00.854 "code": -32603, 00:17:00.854 "message": "Unable to find target foobar" 00:17:00.854 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:00.854 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:00.854 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode24919 00:17:01.112 [2024-10-13 01:27:46.540159] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24919: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:01.112 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:17:01.112 { 00:17:01.112 "nqn": "nqn.2016-06.io.spdk:cnode24919", 00:17:01.112 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:01.112 "method": "nvmf_create_subsystem", 00:17:01.112 "req_id": 1 00:17:01.112 } 00:17:01.112 Got JSON-RPC error response 00:17:01.112 response: 00:17:01.112 { 00:17:01.112 "code": -32602, 00:17:01.112 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:01.112 }' 00:17:01.112 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:17:01.112 { 00:17:01.112 "nqn": "nqn.2016-06.io.spdk:cnode24919", 00:17:01.112 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:01.112 "method": "nvmf_create_subsystem", 00:17:01.112 "req_id": 1 00:17:01.112 } 00:17:01.112 Got JSON-RPC error response 00:17:01.112 response: 00:17:01.112 { 00:17:01.112 "code": -32602, 00:17:01.112 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:01.112 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:01.112 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:01.112 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode17187 00:17:01.370 [2024-10-13 01:27:46.817039] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17187: invalid model number 'SPDK_Controller' 00:17:01.370 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:17:01.370 { 00:17:01.370 "nqn": "nqn.2016-06.io.spdk:cnode17187", 00:17:01.370 "model_number": "SPDK_Controller\u001f", 00:17:01.370 "method": "nvmf_create_subsystem", 00:17:01.370 "req_id": 1 00:17:01.370 } 00:17:01.370 Got JSON-RPC error response 00:17:01.370 response: 00:17:01.370 { 00:17:01.370 "code": -32602, 00:17:01.370 "message": "Invalid MN SPDK_Controller\u001f" 00:17:01.370 }' 00:17:01.370 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:17:01.370 { 00:17:01.370 "nqn": "nqn.2016-06.io.spdk:cnode17187", 00:17:01.370 "model_number": "SPDK_Controller\u001f", 00:17:01.370 "method": "nvmf_create_subsystem", 00:17:01.370 "req_id": 1 00:17:01.370 } 00:17:01.370 Got JSON-RPC error response 00:17:01.370 response: 00:17:01.370 { 00:17:01.370 "code": -32602, 00:17:01.370 "message": "Invalid MN SPDK_Controller\u001f" 00:17:01.370 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:01.370 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:01.370 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:17:01.370 01:27:46 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:01.370 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:01.370 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:01.370 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:01.370 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.370 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:17:01.370 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:17:01.370 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:17:01.370 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.370 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.370 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:17:01.370 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:17:01.370 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:17:01.370 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.370 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.370 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:17:01.370 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:17:01.370 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:17:01.370 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.370 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.370 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:17:01.370 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:17:01.370 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:17:01.370 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.370 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.370 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:17:01.370 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:17:01.370 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:17:01.370 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.370 01:27:46 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.370 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:17:01.370 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:17:01.370 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:17:01.370 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
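The long run of printf %x / echo -e / string+= entries here is gen_random_s from invalid.sh assembling a 21-character string one character at a time out of a table of printable ASCII codes; with RANDOM=0 set at the top of the script the sequence is reproducible. A compressed sketch of that logic, reconstructed from the trace rather than copied from the helper (the real table runs up to code 127, and the real helper also checks whether the first character is '-', as seen at invalid.sh@28 below):

  gen_random_s() {
      local length=$1 ll string= code hexchar
      for (( ll = 0; ll < length; ll++ )); do
          code=$(( RANDOM % 95 + 32 ))           # a printable ASCII code, 32..126 in this sketch
          hexchar=$(printf '%x' "$code")         # e.g. 74 -> 4a
          string+=$(echo -e "\x$hexchar")        # append that character
      done
      echo "$string"
  }

The 21-character result is deliberately one byte longer than the 20-byte NVMe serial-number field, so the subsystem-creation call that consumes it below has to fail with "Invalid SN".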
00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 
00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ J == \- ]] 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Jy/081ur?_q520:$Ns$|r' 00:17:01.371 01:27:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'Jy/081ur?_q520:$Ns$|r' nqn.2016-06.io.spdk:cnode7787 00:17:01.629 [2024-10-13 01:27:47.178290] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7787: invalid serial number 
'Jy/081ur?_q520:$Ns$|r' 00:17:01.629 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:17:01.629 { 00:17:01.629 "nqn": "nqn.2016-06.io.spdk:cnode7787", 00:17:01.629 "serial_number": "Jy/081ur?_q520:$Ns$|r", 00:17:01.629 "method": "nvmf_create_subsystem", 00:17:01.629 "req_id": 1 00:17:01.629 } 00:17:01.629 Got JSON-RPC error response 00:17:01.629 response: 00:17:01.629 { 00:17:01.629 "code": -32602, 00:17:01.629 "message": "Invalid SN Jy/081ur?_q520:$Ns$|r" 00:17:01.629 }' 00:17:01.629 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:17:01.629 { 00:17:01.629 "nqn": "nqn.2016-06.io.spdk:cnode7787", 00:17:01.629 "serial_number": "Jy/081ur?_q520:$Ns$|r", 00:17:01.629 "method": "nvmf_create_subsystem", 00:17:01.629 "req_id": 1 00:17:01.629 } 00:17:01.629 Got JSON-RPC error response 00:17:01.629 response: 00:17:01.629 { 00:17:01.629 "code": -32602, 00:17:01.629 "message": "Invalid SN Jy/081ur?_q520:$Ns$|r" 00:17:01.629 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:01.629 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:17:01.629 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:17:01.629 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:01.629 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:01.629 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:01.629 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:01.629 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.629 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:17:01.629 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:17:01.888 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:17:01.888 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.888 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.888 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:17:01.888 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:17:01.888 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:17:01.888 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.888 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.888 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:17:01.888 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 
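Each negative test in this file follows the same pattern: call rpc.py nvmf_create_subsystem with one deliberately bad argument, capture the JSON-RPC error text, and glob-match the message (the escaped patterns such as *\I\n\v\a\l\i\d\ \S\N* in the trace are just xtrace's rendering of *"Invalid SN"*). A simplified version of that pattern, with the error-capture details (2>&1, || true) assumed rather than taken from the script:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # set at invalid.sh@12 above
  out=$("$rpc" nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode6108 2>&1) || true
  [[ "$out" == *"Unable to find target"* ]]        # -32603: target 'foobar' does not exist
  out=$("$rpc" nvmf_create_subsystem -s "$(gen_random_s 21)" nqn.2016-06.io.spdk:cnode7787 2>&1) || true
  [[ "$out" == *"Invalid SN"* ]]                   # -32602: serial number exceeds 20 bytes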
00:17:01.888 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:17:01.888 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.888 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.888 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:17:01.888 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:17:01.888 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:17:01.888 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.888 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.888 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:17:01.888 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:17:01.888 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:17:01.888 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.888 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.888 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 
00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
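The surrounding printf/echo entries are gen_random_s again, this time building a 41-character model number, one byte over the 40-byte field, for the "Invalid MN" check just below. Further down in this trace (invalid.sh@62-70) the script then switches from bad create-subsystem arguments to listener handling: it creates the TCP transport and a valid subsystem, then tries to remove a listener that was never added and checks that the target answers with a plain "Invalid parameters" error. Roughly, with the same assumed capture idiom as above:

  "$rpc" nvmf_create_transport --trtype tcp
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a
  out=$("$rpc" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 2>&1) || true
  [[ "$out" != *"Unable to stop listener."* ]]     # expect an ordinary "Invalid parameters" error, not a stop failure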
00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.889 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 
00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 
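The last group of checks, visible at invalid.sh@73-84 further below, exercises the controller-ID bounds: nvmf_create_subsystem must reject any min/max cntlid outside 1..65519 and any range where min is greater than max. The real script uses a distinct cnodeNNNN for each case; here they are collapsed into a loop for illustration only:

  for args in '-i 0' '-i 65520' '-I 0' '-I 65520' '-i 6 -I 5'; do
      # $args is intentionally unquoted so '-i 6 -I 5' splits into two options
      out=$("$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18586 $args 2>&1) || true
      [[ "$out" == *"Invalid cntlid range"* ]]     # e.g. "Invalid cntlid range [0-65519]"
  done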
00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ' == \- ]] 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ''\''3O@n,$fXQvlNJFBO128eJPte6Y:%DCCP5jAk_:UR' 00:17:01.890 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d ''\''3O@n,$fXQvlNJFBO128eJPte6Y:%DCCP5jAk_:UR' nqn.2016-06.io.spdk:cnode20118 00:17:02.148 [2024-10-13 01:27:47.659912] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20118: invalid model number ''3O@n,$fXQvlNJFBO128eJPte6Y:%DCCP5jAk_:UR' 00:17:02.148 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:17:02.148 { 00:17:02.148 "nqn": "nqn.2016-06.io.spdk:cnode20118", 00:17:02.148 "model_number": "'\''3O@n,$fXQvlNJFBO128eJPte6Y:%DCCP5jAk_:UR", 00:17:02.148 "method": "nvmf_create_subsystem", 00:17:02.148 "req_id": 1 00:17:02.148 } 00:17:02.148 Got JSON-RPC error response 00:17:02.148 response: 00:17:02.148 { 00:17:02.148 "code": -32602, 
00:17:02.148 "message": "Invalid MN '\''3O@n,$fXQvlNJFBO128eJPte6Y:%DCCP5jAk_:UR" 00:17:02.148 }' 00:17:02.148 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:17:02.148 { 00:17:02.148 "nqn": "nqn.2016-06.io.spdk:cnode20118", 00:17:02.148 "model_number": "'3O@n,$fXQvlNJFBO128eJPte6Y:%DCCP5jAk_:UR", 00:17:02.148 "method": "nvmf_create_subsystem", 00:17:02.148 "req_id": 1 00:17:02.148 } 00:17:02.148 Got JSON-RPC error response 00:17:02.148 response: 00:17:02.148 { 00:17:02.148 "code": -32602, 00:17:02.148 "message": "Invalid MN '3O@n,$fXQvlNJFBO128eJPte6Y:%DCCP5jAk_:UR" 00:17:02.148 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:02.148 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:17:02.405 [2024-10-13 01:27:47.936861] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:02.406 01:27:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:17:02.970 01:27:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:17:02.970 01:27:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:17:02.970 01:27:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:17:02.970 01:27:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:17:02.970 01:27:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:17:02.970 [2024-10-13 01:27:48.534861] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:17:03.227 01:27:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:17:03.227 { 00:17:03.227 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:03.227 "listen_address": { 00:17:03.227 "trtype": "tcp", 00:17:03.227 "traddr": "", 00:17:03.227 "trsvcid": "4421" 00:17:03.227 }, 00:17:03.227 "method": "nvmf_subsystem_remove_listener", 00:17:03.227 "req_id": 1 00:17:03.227 } 00:17:03.227 Got JSON-RPC error response 00:17:03.227 response: 00:17:03.227 { 00:17:03.227 "code": -32602, 00:17:03.227 "message": "Invalid parameters" 00:17:03.227 }' 00:17:03.227 01:27:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:17:03.227 { 00:17:03.227 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:03.227 "listen_address": { 00:17:03.227 "trtype": "tcp", 00:17:03.227 "traddr": "", 00:17:03.227 "trsvcid": "4421" 00:17:03.227 }, 00:17:03.227 "method": "nvmf_subsystem_remove_listener", 00:17:03.227 "req_id": 1 00:17:03.227 } 00:17:03.227 Got JSON-RPC error response 00:17:03.227 response: 00:17:03.227 { 00:17:03.227 "code": -32602, 00:17:03.227 "message": "Invalid parameters" 00:17:03.227 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:17:03.227 01:27:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18586 -i 0 00:17:03.227 [2024-10-13 01:27:48.803697] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18586: invalid cntlid range [0-65519] 00:17:03.485 01:27:48 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:17:03.485 { 00:17:03.485 "nqn": "nqn.2016-06.io.spdk:cnode18586", 00:17:03.485 "min_cntlid": 0, 00:17:03.485 "method": "nvmf_create_subsystem", 00:17:03.485 "req_id": 1 00:17:03.485 } 00:17:03.485 Got JSON-RPC error response 00:17:03.485 response: 00:17:03.485 { 00:17:03.485 "code": -32602, 00:17:03.485 "message": "Invalid cntlid range [0-65519]" 00:17:03.485 }' 00:17:03.485 01:27:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:17:03.485 { 00:17:03.485 "nqn": "nqn.2016-06.io.spdk:cnode18586", 00:17:03.485 "min_cntlid": 0, 00:17:03.485 "method": "nvmf_create_subsystem", 00:17:03.485 "req_id": 1 00:17:03.485 } 00:17:03.485 Got JSON-RPC error response 00:17:03.485 response: 00:17:03.485 { 00:17:03.485 "code": -32602, 00:17:03.485 "message": "Invalid cntlid range [0-65519]" 00:17:03.485 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:03.485 01:27:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2741 -i 65520 00:17:03.742 [2024-10-13 01:27:49.072644] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2741: invalid cntlid range [65520-65519] 00:17:03.742 01:27:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:17:03.742 { 00:17:03.742 "nqn": "nqn.2016-06.io.spdk:cnode2741", 00:17:03.742 "min_cntlid": 65520, 00:17:03.742 "method": "nvmf_create_subsystem", 00:17:03.742 "req_id": 1 00:17:03.742 } 00:17:03.742 Got JSON-RPC error response 00:17:03.742 response: 00:17:03.742 { 00:17:03.742 "code": -32602, 00:17:03.742 "message": "Invalid cntlid range [65520-65519]" 00:17:03.742 }' 00:17:03.742 01:27:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:17:03.742 { 00:17:03.742 "nqn": "nqn.2016-06.io.spdk:cnode2741", 00:17:03.742 "min_cntlid": 65520, 00:17:03.742 "method": "nvmf_create_subsystem", 00:17:03.742 "req_id": 1 00:17:03.742 } 00:17:03.742 Got JSON-RPC error response 00:17:03.742 response: 00:17:03.742 { 00:17:03.742 "code": -32602, 00:17:03.742 "message": "Invalid cntlid range [65520-65519]" 00:17:03.742 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:03.742 01:27:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17411 -I 0 00:17:03.999 [2024-10-13 01:27:49.353524] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17411: invalid cntlid range [1-0] 00:17:03.999 01:27:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:17:03.999 { 00:17:03.999 "nqn": "nqn.2016-06.io.spdk:cnode17411", 00:17:03.999 "max_cntlid": 0, 00:17:03.999 "method": "nvmf_create_subsystem", 00:17:03.999 "req_id": 1 00:17:03.999 } 00:17:03.999 Got JSON-RPC error response 00:17:03.999 response: 00:17:03.999 { 00:17:03.999 "code": -32602, 00:17:03.999 "message": "Invalid cntlid range [1-0]" 00:17:03.999 }' 00:17:03.999 01:27:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:17:03.999 { 00:17:03.999 "nqn": "nqn.2016-06.io.spdk:cnode17411", 00:17:03.999 "max_cntlid": 0, 00:17:03.999 "method": "nvmf_create_subsystem", 00:17:03.999 "req_id": 1 00:17:03.999 } 00:17:03.999 Got JSON-RPC error response 00:17:03.999 
response: 00:17:03.999 { 00:17:03.999 "code": -32602, 00:17:03.999 "message": "Invalid cntlid range [1-0]" 00:17:03.999 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:03.999 01:27:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8357 -I 65520 00:17:04.257 [2024-10-13 01:27:49.614414] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8357: invalid cntlid range [1-65520] 00:17:04.257 01:27:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:17:04.257 { 00:17:04.257 "nqn": "nqn.2016-06.io.spdk:cnode8357", 00:17:04.257 "max_cntlid": 65520, 00:17:04.257 "method": "nvmf_create_subsystem", 00:17:04.257 "req_id": 1 00:17:04.257 } 00:17:04.257 Got JSON-RPC error response 00:17:04.257 response: 00:17:04.257 { 00:17:04.257 "code": -32602, 00:17:04.257 "message": "Invalid cntlid range [1-65520]" 00:17:04.257 }' 00:17:04.257 01:27:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:17:04.257 { 00:17:04.257 "nqn": "nqn.2016-06.io.spdk:cnode8357", 00:17:04.257 "max_cntlid": 65520, 00:17:04.257 "method": "nvmf_create_subsystem", 00:17:04.257 "req_id": 1 00:17:04.257 } 00:17:04.257 Got JSON-RPC error response 00:17:04.257 response: 00:17:04.257 { 00:17:04.257 "code": -32602, 00:17:04.257 "message": "Invalid cntlid range [1-65520]" 00:17:04.257 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:04.257 01:27:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32397 -i 6 -I 5 00:17:04.515 [2024-10-13 01:27:49.875280] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32397: invalid cntlid range [6-5] 00:17:04.515 01:27:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:17:04.515 { 00:17:04.515 "nqn": "nqn.2016-06.io.spdk:cnode32397", 00:17:04.515 "min_cntlid": 6, 00:17:04.515 "max_cntlid": 5, 00:17:04.515 "method": "nvmf_create_subsystem", 00:17:04.515 "req_id": 1 00:17:04.515 } 00:17:04.515 Got JSON-RPC error response 00:17:04.515 response: 00:17:04.515 { 00:17:04.515 "code": -32602, 00:17:04.515 "message": "Invalid cntlid range [6-5]" 00:17:04.515 }' 00:17:04.515 01:27:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:17:04.515 { 00:17:04.515 "nqn": "nqn.2016-06.io.spdk:cnode32397", 00:17:04.515 "min_cntlid": 6, 00:17:04.515 "max_cntlid": 5, 00:17:04.515 "method": "nvmf_create_subsystem", 00:17:04.515 "req_id": 1 00:17:04.515 } 00:17:04.515 Got JSON-RPC error response 00:17:04.515 response: 00:17:04.515 { 00:17:04.515 "code": -32602, 00:17:04.515 "message": "Invalid cntlid range [6-5]" 00:17:04.515 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:04.515 01:27:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:17:04.515 01:27:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:17:04.515 { 00:17:04.515 "name": "foobar", 00:17:04.515 "method": "nvmf_delete_target", 00:17:04.515 "req_id": 1 00:17:04.515 } 00:17:04.515 Got JSON-RPC error response 00:17:04.515 response: 00:17:04.515 { 00:17:04.515 "code": -32602, 00:17:04.515 
"message": "The specified target doesn'\''t exist, cannot delete it." 00:17:04.515 }' 00:17:04.515 01:27:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:17:04.515 { 00:17:04.515 "name": "foobar", 00:17:04.515 "method": "nvmf_delete_target", 00:17:04.515 "req_id": 1 00:17:04.515 } 00:17:04.515 Got JSON-RPC error response 00:17:04.515 response: 00:17:04.515 { 00:17:04.515 "code": -32602, 00:17:04.515 "message": "The specified target doesn't exist, cannot delete it." 00:17:04.515 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:17:04.515 01:27:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:17:04.515 01:27:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:17:04.515 01:27:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:04.515 01:27:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:17:04.515 01:27:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:04.515 01:27:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:17:04.515 01:27:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:04.515 01:27:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:04.515 rmmod nvme_tcp 00:17:04.515 rmmod nvme_fabrics 00:17:04.515 rmmod nvme_keyring 00:17:04.774 01:27:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:04.774 01:27:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:17:04.774 01:27:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:17:04.774 01:27:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@515 -- # '[' -n 1577092 ']' 00:17:04.774 01:27:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # killprocess 1577092 00:17:04.774 01:27:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 1577092 ']' 00:17:04.774 01:27:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 1577092 00:17:04.774 01:27:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:17:04.774 01:27:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:04.774 01:27:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1577092 00:17:04.774 01:27:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:04.774 01:27:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:04.774 01:27:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1577092' 00:17:04.774 killing process with pid 1577092 00:17:04.774 01:27:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 1577092 00:17:04.774 01:27:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 1577092 00:17:04.774 01:27:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:04.774 01:27:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@521 -- # [[ 
tcp == \t\c\p ]] 00:17:04.774 01:27:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:17:04.774 01:27:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:17:04.774 01:27:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # iptables-save 00:17:04.774 01:27:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:17:04.774 01:27:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # iptables-restore 00:17:04.774 01:27:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:04.774 01:27:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:04.774 01:27:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:04.774 01:27:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:04.774 01:27:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:07.307 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:07.307 00:17:07.307 real 0m9.214s 00:17:07.307 user 0m22.239s 00:17:07.307 sys 0m2.600s 00:17:07.307 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:07.307 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:07.307 ************************************ 00:17:07.307 END TEST nvmf_invalid 00:17:07.307 ************************************ 00:17:07.307 01:27:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:07.308 ************************************ 00:17:07.308 START TEST nvmf_connect_stress 00:17:07.308 ************************************ 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:07.308 * Looking for test storage... 
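The nvmf_invalid run above exercises each negative case through the SPDK JSON-RPC script and matches on the returned error text. As a minimal sketch, one of those checks could be reproduced by hand, assuming an nvmf_tgt is already running, rpc.py is invoked from the spdk checkout, and the default /var/tmp/spdk.sock RPC socket is in use (the flags below are copied from the commands visible in this log):

  # create the TCP transport first, as invalid.sh does before the cntlid checks
  scripts/rpc.py nvmf_create_transport --trtype tcp
  # request an impossible controller-ID range; the target rejects it with code -32602
  # and "Invalid cntlid range [65520-65519]", matching the response captured above
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2741 -i 65520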
00:17:07.308 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:07.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.308 --rc genhtml_branch_coverage=1 00:17:07.308 --rc genhtml_function_coverage=1 00:17:07.308 --rc genhtml_legend=1 00:17:07.308 --rc geninfo_all_blocks=1 00:17:07.308 --rc geninfo_unexecuted_blocks=1 00:17:07.308 00:17:07.308 ' 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:07.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.308 --rc genhtml_branch_coverage=1 00:17:07.308 --rc genhtml_function_coverage=1 00:17:07.308 --rc genhtml_legend=1 00:17:07.308 --rc geninfo_all_blocks=1 00:17:07.308 --rc geninfo_unexecuted_blocks=1 00:17:07.308 00:17:07.308 ' 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:07.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.308 --rc genhtml_branch_coverage=1 00:17:07.308 --rc genhtml_function_coverage=1 00:17:07.308 --rc genhtml_legend=1 00:17:07.308 --rc geninfo_all_blocks=1 00:17:07.308 --rc geninfo_unexecuted_blocks=1 00:17:07.308 00:17:07.308 ' 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:07.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.308 --rc genhtml_branch_coverage=1 00:17:07.308 --rc genhtml_function_coverage=1 00:17:07.308 --rc genhtml_legend=1 00:17:07.308 --rc geninfo_all_blocks=1 00:17:07.308 --rc geninfo_unexecuted_blocks=1 00:17:07.308 00:17:07.308 ' 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:07.308 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:07.308 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:07.309 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:07.309 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:07.309 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:07.309 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:17:07.309 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:07.309 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:07.309 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:07.309 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:07.309 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:07.309 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:07.309 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:07.309 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:07.309 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:17:07.309 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:17:07.309 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:17:07.309 01:27:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:09.211 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:17:09.212 01:27:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:09.212 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:09.212 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:09.212 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:09.212 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:09.212 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:09.212 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:17:09.212 00:17:09.212 --- 10.0.0.2 ping statistics --- 00:17:09.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.212 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:09.212 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:09.212 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:17:09.212 00:17:09.212 --- 10.0.0.1 ping statistics --- 00:17:09.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.212 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:09.212 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # return 0 00:17:09.213 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:09.213 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:09.213 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:09.213 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:09.213 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:09.213 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:09.213 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:09.213 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:09.213 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:09.213 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:09.213 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:09.213 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # nvmfpid=1579726 00:17:09.213 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:09.213 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # waitforlisten 1579726 00:17:09.213 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 1579726 ']' 00:17:09.213 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.213 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:09.213 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:17:09.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:09.213 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:09.213 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:09.213 [2024-10-13 01:27:54.748252] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:17:09.213 [2024-10-13 01:27:54.748325] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:09.471 [2024-10-13 01:27:54.811232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:09.471 [2024-10-13 01:27:54.857762] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:09.471 [2024-10-13 01:27:54.857823] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:09.471 [2024-10-13 01:27:54.857836] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:09.471 [2024-10-13 01:27:54.857847] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:09.471 [2024-10-13 01:27:54.857857] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:09.471 [2024-10-13 01:27:54.859377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:09.471 [2024-10-13 01:27:54.859439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:09.471 [2024-10-13 01:27:54.859443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:09.471 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:09.471 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:17:09.471 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:09.471 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:09.471 01:27:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:09.471 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:09.471 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:09.471 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.471 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:09.471 [2024-10-13 01:27:55.012023] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:09.471 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.471 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:09.471 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 
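The connect_stress setup here is driven through rpc_cmd, which in these test scripts effectively wraps scripts/rpc.py. A rough hand-run equivalent, assuming the nvmf_tgt launched above is listening on the default RPC socket and that the 10.0.0.2:4420 address configured earlier in this log is reachable (the flags mirror the rpc_cmd calls recorded here and in the lines just below):

  # transport with the options nvmftestinit selected for this run (-t tcp -o) plus -u 8192
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  # stress-test subsystem: allow any host (-a), serial SPDK00000000000001, -m 10
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  # TCP listener on the target-namespace address, and the NULL1 null bdev (1000 MB, 512 B blocks)
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_null_create NULL1 1000 512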
00:17:09.471 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:09.471 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.471 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:09.471 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.471 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:09.471 [2024-10-13 01:27:55.029545] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:09.471 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.472 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:09.472 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.472 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:09.472 NULL1 00:17:09.472 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.472 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1579869 00:17:09.472 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:09.472 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:09.472 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:09.472 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:09.472 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:09.472 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:09.730 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:09.730 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:09.730 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:09.730 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:09.730 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:09.730 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:09.730 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:09.730 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:09.730 01:27:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:09.730 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:09.730 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:09.730 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:09.730 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:09.730 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:09.730 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:09.730 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:09.730 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:09.730 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:09.730 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:09.730 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:09.730 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:09.730 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:09.730 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:09.730 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:09.730 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:09.730 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:09.730 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:09.730 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:09.730 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:09.730 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:09.730 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:09.730 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:09.730 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:09.730 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:09.730 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:09.730 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:09.730 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:09.730 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:09.730 01:27:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1579869 00:17:09.730 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:09.730 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.730 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:09.988 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.988 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1579869 00:17:09.988 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:09.988 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.988 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:10.246 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.246 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1579869 00:17:10.246 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:10.246 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.246 01:27:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:10.503 01:27:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.503 01:27:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1579869 00:17:10.503 01:27:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:10.503 01:27:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.503 01:27:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:11.068 01:27:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.068 01:27:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1579869 00:17:11.068 01:27:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:11.068 01:27:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.068 01:27:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:11.326 01:27:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.326 01:27:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1579869 00:17:11.326 01:27:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:11.326 01:27:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.326 01:27:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:11.583 01:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.583 01:27:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1579869 00:17:11.583 01:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:11.583 01:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.584 01:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:11.841 01:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.841 01:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1579869 00:17:11.841 01:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:11.841 01:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.841 01:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:12.099 01:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.099 01:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1579869 00:17:12.099 01:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:12.099 01:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.099 01:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:12.664 01:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.664 01:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1579869 00:17:12.664 01:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:12.664 01:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.664 01:27:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:12.921 01:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.921 01:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1579869 00:17:12.921 01:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:12.921 01:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.921 01:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:13.178 01:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.178 01:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1579869 00:17:13.178 01:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:13.178 01:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.178 01:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:13.436 01:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.436 01:27:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1579869 00:17:13.436 01:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:13.436 01:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.437 01:27:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:13.694 01:27:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.694 01:27:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1579869 00:17:13.694 01:27:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:13.694 01:27:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.694 01:27:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.259 01:27:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.259 01:27:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1579869 00:17:14.259 01:27:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:14.259 01:27:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.259 01:27:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.517 01:27:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.517 01:27:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1579869 00:17:14.517 01:27:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:14.517 01:27:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.517 01:27:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.775 01:28:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.775 01:28:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1579869 00:17:14.775 01:28:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:14.775 01:28:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.775 01:28:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.033 01:28:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.033 01:28:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1579869 00:17:15.033 01:28:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:15.033 01:28:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.033 01:28:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.598 01:28:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.598 01:28:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1579869 00:17:15.598 01:28:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:15.598 01:28:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.598 01:28:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.856 01:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.856 01:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1579869 00:17:15.856 01:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:15.856 01:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.856 01:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:16.113 01:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.113 01:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1579869 00:17:16.113 01:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:16.113 01:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.113 01:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:16.370 01:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.370 01:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1579869 00:17:16.370 01:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:16.370 01:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.370 01:28:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:16.627 01:28:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.627 01:28:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1579869 00:17:16.627 01:28:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:16.627 01:28:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.627 01:28:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:17.228 01:28:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.228 01:28:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1579869 00:17:17.228 01:28:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:17.228 01:28:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.228 01:28:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:17.513 01:28:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.513 01:28:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1579869 00:17:17.513 01:28:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:17.513 01:28:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.513 01:28:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:17.770 01:28:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.770 01:28:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1579869 00:17:17.770 01:28:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:17.770 01:28:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.770 01:28:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.027 01:28:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.027 01:28:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1579869 00:17:18.027 01:28:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:18.027 01:28:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.027 01:28:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.285 01:28:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.285 01:28:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1579869 00:17:18.285 01:28:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:18.285 01:28:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.285 01:28:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.543 01:28:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.543 01:28:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1579869 00:17:18.543 01:28:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:18.543 01:28:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.543 01:28:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:19.107 01:28:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.107 01:28:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1579869 00:17:19.107 01:28:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:19.107 01:28:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.107 01:28:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:19.365 01:28:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.365 01:28:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1579869 00:17:19.365 01:28:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:19.365 01:28:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.365 01:28:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:19.622 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.622 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1579869 00:17:19.622 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:19.623 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.623 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:19.623 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:19.880 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.880 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1579869 00:17:19.880 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1579869) - No such process 00:17:19.880 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1579869 00:17:19.880 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:19.880 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:19.880 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:19.880 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:19.880 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:17:19.880 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:19.880 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:17:19.880 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:19.880 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:19.880 rmmod nvme_tcp 00:17:19.880 rmmod nvme_fabrics 00:17:19.880 rmmod nvme_keyring 00:17:19.880 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:19.880 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:17:19.880 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:17:19.880 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@515 -- # '[' -n 1579726 ']' 00:17:19.880 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # killprocess 1579726 00:17:19.880 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 1579726 ']' 00:17:19.880 01:28:05 
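[editor note] The long run of @34/@35 records above is the stress loop polling the backgrounded process (pid 1579869 in this run) and feeding it RPCs until it exits; the "No such process" message is the exit condition, after which the script waits on the pid and removes rpc.txt. A minimal sketch of that pattern, assuming a stand-in background job ('sleep 30 &') and using the rpc_cmd helper seen throughout the trace:

  # Keep replaying RPCs while the stress process is alive, then reap it (lines 34-39)
  sleep 30 &                                     # stand-in for the real stress binary
  stress_pid=$!
  while kill -0 "$stress_pid" 2> /dev/null; do   # line 34: is the pid still alive?
    rpc_cmd < rpc.txt || true                    # line 35: replay the queued RPCs
  done
  wait "$stress_pid" 2> /dev/null                # line 38: collect the exit status
  rm -f rpc.txt                                  # line 39: drop the payload file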
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 1579726 00:17:19.880 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:17:19.880 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:19.880 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1579726 00:17:20.139 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:20.139 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:20.139 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1579726' 00:17:20.139 killing process with pid 1579726 00:17:20.139 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 1579726 00:17:20.139 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 1579726 00:17:20.139 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:20.139 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:17:20.139 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:17:20.139 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:17:20.139 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-save 00:17:20.139 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:17:20.139 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-restore 00:17:20.139 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:20.139 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:20.139 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:20.139 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:20.139 01:28:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:22.673 00:17:22.673 real 0m15.272s 00:17:22.673 user 0m38.333s 00:17:22.673 sys 0m5.924s 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:22.673 ************************************ 00:17:22.673 END TEST nvmf_connect_stress 00:17:22.673 ************************************ 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:22.673 
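[editor note] The nvmftestfini/nvmf_tcp_fini teardown traced above syncs, unloads the kernel NVMe-oF modules, restores an iptables snapshot with the SPDK-tagged rules filtered out, and flushes the test interface; condensed (not the verbatim script):

  sync
  modprobe -v -r nvme-tcp        # produces the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK-added rules
  ip -4 addr flush cvl_0_1       # interface name taken from this run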
01:28:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:22.673 ************************************ 00:17:22.673 START TEST nvmf_fused_ordering 00:17:22.673 ************************************ 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:22.673 * Looking for test storage... 00:17:22.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:22.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.673 --rc genhtml_branch_coverage=1 00:17:22.673 --rc genhtml_function_coverage=1 00:17:22.673 --rc genhtml_legend=1 00:17:22.673 --rc geninfo_all_blocks=1 00:17:22.673 --rc geninfo_unexecuted_blocks=1 00:17:22.673 00:17:22.673 ' 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:22.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.673 --rc genhtml_branch_coverage=1 00:17:22.673 --rc genhtml_function_coverage=1 00:17:22.673 --rc genhtml_legend=1 00:17:22.673 --rc geninfo_all_blocks=1 00:17:22.673 --rc geninfo_unexecuted_blocks=1 00:17:22.673 00:17:22.673 ' 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:22.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.673 --rc genhtml_branch_coverage=1 00:17:22.673 --rc genhtml_function_coverage=1 00:17:22.673 --rc genhtml_legend=1 00:17:22.673 --rc geninfo_all_blocks=1 00:17:22.673 --rc geninfo_unexecuted_blocks=1 00:17:22.673 00:17:22.673 ' 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:22.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.673 --rc genhtml_branch_coverage=1 00:17:22.673 --rc genhtml_function_coverage=1 00:17:22.673 --rc genhtml_legend=1 00:17:22.673 --rc geninfo_all_blocks=1 00:17:22.673 --rc geninfo_unexecuted_blocks=1 00:17:22.673 00:17:22.673 ' 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:22.673 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.674 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.674 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.674 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:17:22.674 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.674 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:17:22.674 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:22.674 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:22.674 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:22.674 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:22.674 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:22.674 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:22.674 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:22.674 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:22.674 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:22.674 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:22.674 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:17:22.674 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:22.674 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:22.674 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:22.674 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:22.674 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:22.674 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:22.674 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:22.674 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:22.674 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:17:22.674 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:17:22.674 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:17:22.674 01:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:24.575 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:24.575 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:17:24.575 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:24.575 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:24.575 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:24.575 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:24.575 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:24.575 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:17:24.575 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:24.575 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:17:24.575 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:17:24.575 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:17:24.575 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:17:24.575 01:28:09 
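[editor note] The "[: : integer expression expected" complaint near the top of this block is bash's [ builtin rejecting '[' '' -eq 1 ']': -eq needs an integer on both sides and the variable being tested was empty. The run continues because the failure is tolerated, but a guarded comparison avoids the noise (the variable name below is illustrative only; the trace does not show which one was empty):

  flag=""
  if [ "${flag:-0}" -eq 1 ]; then
    echo "feature enabled"
  fi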
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:17:24.575 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:17:24.575 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:24.576 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:24.576 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:24.576 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:24.576 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # is_hw=yes 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:24.576 01:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:24.576 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:24.576 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:24.576 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:24.576 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:24.576 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:24.576 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:24.576 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:24.576 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:24.576 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:24.576 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms 00:17:24.576 00:17:24.576 --- 10.0.0.2 ping statistics --- 00:17:24.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:24.576 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:17:24.576 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:24.576 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:24.576 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:17:24.576 00:17:24.576 --- 10.0.0.1 ping statistics --- 00:17:24.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:24.576 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:17:24.576 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:24.576 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # return 0 00:17:24.576 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:24.576 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:24.576 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:24.576 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:24.576 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:24.576 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:24.576 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:24.576 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:24.576 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:24.576 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:24.576 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:24.576 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # nvmfpid=1583021 00:17:24.576 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:24.576 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # waitforlisten 1583021 00:17:24.577 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 1583021 ']' 00:17:24.577 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.577 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:24.577 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:17:24.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:24.577 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:24.577 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:24.835 [2024-10-13 01:28:10.159283] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:17:24.835 [2024-10-13 01:28:10.159352] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:24.835 [2024-10-13 01:28:10.222665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.835 [2024-10-13 01:28:10.268055] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:24.835 [2024-10-13 01:28:10.268109] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:24.835 [2024-10-13 01:28:10.268132] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:24.835 [2024-10-13 01:28:10.268143] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:24.835 [2024-10-13 01:28:10.268153] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:24.835 [2024-10-13 01:28:10.268750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:24.835 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:24.835 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:17:24.835 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:24.835 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:24.835 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:24.835 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:24.835 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:24.835 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.835 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:24.835 [2024-10-13 01:28:10.413060] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:25.094 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.094 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:25.094 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.094 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:25.094 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:17:25.094 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:25.094 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.094 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:25.094 [2024-10-13 01:28:10.429301] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:25.094 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.094 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:25.094 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.094 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:25.094 NULL1 00:17:25.094 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.094 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:25.094 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.094 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:25.094 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.094 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:25.094 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.094 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:25.094 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.094 01:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:25.094 [2024-10-13 01:28:10.474434] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
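[editor note] Before the fused_ordering client starts, the target is configured entirely through rpc_cmd (the SPDK test helper seen throughout the trace). Collected in order, with the arguments exactly as logged for this run:

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd bdev_null_create NULL1 1000 512
  rpc_cmd bdev_wait_for_examine
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  # then the fused-ordering client is pointed at that listener:
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'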
00:17:25.094 [2024-10-13 01:28:10.474482] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1583041 ] 00:17:25.355 Attached to nqn.2016-06.io.spdk:cnode1 00:17:25.355 Namespace ID: 1 size: 1GB 00:17:25.355 fused_ordering(0) 00:17:25.355 fused_ordering(1) 00:17:25.355 fused_ordering(2) 00:17:25.355 fused_ordering(3) 00:17:25.355 fused_ordering(4) 00:17:25.355 fused_ordering(5) 00:17:25.355 fused_ordering(6) 00:17:25.355 fused_ordering(7) 00:17:25.355 fused_ordering(8) 00:17:25.355 fused_ordering(9) 00:17:25.355 fused_ordering(10) 00:17:25.355 fused_ordering(11) 00:17:25.355 fused_ordering(12) 00:17:25.355 fused_ordering(13) 00:17:25.355 fused_ordering(14) 00:17:25.355 fused_ordering(15) 00:17:25.355 fused_ordering(16) 00:17:25.355 fused_ordering(17) 00:17:25.355 fused_ordering(18) 00:17:25.355 fused_ordering(19) 00:17:25.355 fused_ordering(20) 00:17:25.356 fused_ordering(21) 00:17:25.356 fused_ordering(22) 00:17:25.356 fused_ordering(23) 00:17:25.356 fused_ordering(24) 00:17:25.356 fused_ordering(25) 00:17:25.356 fused_ordering(26) 00:17:25.356 fused_ordering(27) 00:17:25.356 fused_ordering(28) 00:17:25.356 fused_ordering(29) 00:17:25.356 fused_ordering(30) 00:17:25.356 fused_ordering(31) 00:17:25.356 fused_ordering(32) 00:17:25.356 fused_ordering(33) 00:17:25.356 fused_ordering(34) 00:17:25.356 fused_ordering(35) 00:17:25.356 fused_ordering(36) 00:17:25.356 fused_ordering(37) 00:17:25.356 fused_ordering(38) 00:17:25.356 fused_ordering(39) 00:17:25.356 fused_ordering(40) 00:17:25.356 fused_ordering(41) 00:17:25.356 fused_ordering(42) 00:17:25.356 fused_ordering(43) 00:17:25.356 fused_ordering(44) 00:17:25.356 fused_ordering(45) 00:17:25.356 fused_ordering(46) 00:17:25.356 fused_ordering(47) 00:17:25.356 fused_ordering(48) 00:17:25.356 fused_ordering(49) 00:17:25.356 fused_ordering(50) 00:17:25.356 fused_ordering(51) 00:17:25.356 fused_ordering(52) 00:17:25.356 fused_ordering(53) 00:17:25.356 fused_ordering(54) 00:17:25.356 fused_ordering(55) 00:17:25.356 fused_ordering(56) 00:17:25.356 fused_ordering(57) 00:17:25.356 fused_ordering(58) 00:17:25.356 fused_ordering(59) 00:17:25.356 fused_ordering(60) 00:17:25.356 fused_ordering(61) 00:17:25.356 fused_ordering(62) 00:17:25.356 fused_ordering(63) 00:17:25.356 fused_ordering(64) 00:17:25.356 fused_ordering(65) 00:17:25.356 fused_ordering(66) 00:17:25.356 fused_ordering(67) 00:17:25.356 fused_ordering(68) 00:17:25.356 fused_ordering(69) 00:17:25.356 fused_ordering(70) 00:17:25.356 fused_ordering(71) 00:17:25.356 fused_ordering(72) 00:17:25.356 fused_ordering(73) 00:17:25.356 fused_ordering(74) 00:17:25.356 fused_ordering(75) 00:17:25.356 fused_ordering(76) 00:17:25.356 fused_ordering(77) 00:17:25.356 fused_ordering(78) 00:17:25.356 fused_ordering(79) 00:17:25.356 fused_ordering(80) 00:17:25.356 fused_ordering(81) 00:17:25.356 fused_ordering(82) 00:17:25.356 fused_ordering(83) 00:17:25.356 fused_ordering(84) 00:17:25.356 fused_ordering(85) 00:17:25.356 fused_ordering(86) 00:17:25.356 fused_ordering(87) 00:17:25.356 fused_ordering(88) 00:17:25.356 fused_ordering(89) 00:17:25.356 fused_ordering(90) 00:17:25.356 fused_ordering(91) 00:17:25.356 fused_ordering(92) 00:17:25.356 fused_ordering(93) 00:17:25.356 fused_ordering(94) 00:17:25.356 fused_ordering(95) 00:17:25.356 fused_ordering(96) 00:17:25.356 fused_ordering(97) 00:17:25.356 fused_ordering(98) 
00:17:25.356 fused_ordering(99) 00:17:25.356 fused_ordering(100) 00:17:25.356 fused_ordering(101) 00:17:25.356 fused_ordering(102) 00:17:25.356 fused_ordering(103) 00:17:25.356 fused_ordering(104) 00:17:25.356 fused_ordering(105) 00:17:25.356 fused_ordering(106) 00:17:25.356 fused_ordering(107) 00:17:25.356 fused_ordering(108) 00:17:25.356 fused_ordering(109) 00:17:25.356 fused_ordering(110) 00:17:25.356 fused_ordering(111) 00:17:25.356 fused_ordering(112) 00:17:25.356 fused_ordering(113) 00:17:25.356 fused_ordering(114) 00:17:25.356 fused_ordering(115) 00:17:25.356 fused_ordering(116) 00:17:25.356 fused_ordering(117) 00:17:25.356 fused_ordering(118) 00:17:25.356 fused_ordering(119) 00:17:25.356 fused_ordering(120) 00:17:25.356 fused_ordering(121) 00:17:25.356 fused_ordering(122) 00:17:25.356 fused_ordering(123) 00:17:25.356 fused_ordering(124) 00:17:25.356 fused_ordering(125) 00:17:25.356 fused_ordering(126) 00:17:25.356 fused_ordering(127) 00:17:25.356 fused_ordering(128) 00:17:25.356 fused_ordering(129) 00:17:25.356 fused_ordering(130) 00:17:25.356 fused_ordering(131) 00:17:25.356 fused_ordering(132) 00:17:25.356 fused_ordering(133) 00:17:25.356 fused_ordering(134) 00:17:25.356 fused_ordering(135) 00:17:25.356 fused_ordering(136) 00:17:25.356 fused_ordering(137) 00:17:25.356 fused_ordering(138) 00:17:25.356 fused_ordering(139) 00:17:25.356 fused_ordering(140) 00:17:25.356 fused_ordering(141) 00:17:25.356 fused_ordering(142) 00:17:25.356 fused_ordering(143) 00:17:25.356 fused_ordering(144) 00:17:25.356 fused_ordering(145) 00:17:25.356 fused_ordering(146) 00:17:25.356 fused_ordering(147) 00:17:25.356 fused_ordering(148) 00:17:25.356 fused_ordering(149) 00:17:25.356 fused_ordering(150) 00:17:25.356 fused_ordering(151) 00:17:25.356 fused_ordering(152) 00:17:25.356 fused_ordering(153) 00:17:25.356 fused_ordering(154) 00:17:25.356 fused_ordering(155) 00:17:25.356 fused_ordering(156) 00:17:25.356 fused_ordering(157) 00:17:25.356 fused_ordering(158) 00:17:25.356 fused_ordering(159) 00:17:25.356 fused_ordering(160) 00:17:25.356 fused_ordering(161) 00:17:25.356 fused_ordering(162) 00:17:25.356 fused_ordering(163) 00:17:25.356 fused_ordering(164) 00:17:25.356 fused_ordering(165) 00:17:25.356 fused_ordering(166) 00:17:25.356 fused_ordering(167) 00:17:25.356 fused_ordering(168) 00:17:25.356 fused_ordering(169) 00:17:25.356 fused_ordering(170) 00:17:25.356 fused_ordering(171) 00:17:25.356 fused_ordering(172) 00:17:25.356 fused_ordering(173) 00:17:25.356 fused_ordering(174) 00:17:25.356 fused_ordering(175) 00:17:25.356 fused_ordering(176) 00:17:25.356 fused_ordering(177) 00:17:25.356 fused_ordering(178) 00:17:25.356 fused_ordering(179) 00:17:25.356 fused_ordering(180) 00:17:25.356 fused_ordering(181) 00:17:25.356 fused_ordering(182) 00:17:25.356 fused_ordering(183) 00:17:25.356 fused_ordering(184) 00:17:25.356 fused_ordering(185) 00:17:25.356 fused_ordering(186) 00:17:25.356 fused_ordering(187) 00:17:25.356 fused_ordering(188) 00:17:25.356 fused_ordering(189) 00:17:25.356 fused_ordering(190) 00:17:25.356 fused_ordering(191) 00:17:25.356 fused_ordering(192) 00:17:25.356 fused_ordering(193) 00:17:25.356 fused_ordering(194) 00:17:25.356 fused_ordering(195) 00:17:25.356 fused_ordering(196) 00:17:25.356 fused_ordering(197) 00:17:25.356 fused_ordering(198) 00:17:25.356 fused_ordering(199) 00:17:25.356 fused_ordering(200) 00:17:25.356 fused_ordering(201) 00:17:25.356 fused_ordering(202) 00:17:25.356 fused_ordering(203) 00:17:25.356 fused_ordering(204) 00:17:25.356 fused_ordering(205) 00:17:25.920 
fused_ordering(206) 00:17:25.920 fused_ordering(207) 00:17:25.920 fused_ordering(208) 00:17:25.920 fused_ordering(209) 00:17:25.920 fused_ordering(210) 00:17:25.920 fused_ordering(211) 00:17:25.920 fused_ordering(212) 00:17:25.920 fused_ordering(213) 00:17:25.920 fused_ordering(214) 00:17:25.920 fused_ordering(215) 00:17:25.920 fused_ordering(216) 00:17:25.920 fused_ordering(217) 00:17:25.920 fused_ordering(218) 00:17:25.920 fused_ordering(219) 00:17:25.920 fused_ordering(220) 00:17:25.920 fused_ordering(221) 00:17:25.920 fused_ordering(222) 00:17:25.920 fused_ordering(223) 00:17:25.920 fused_ordering(224) 00:17:25.920 fused_ordering(225) 00:17:25.920 fused_ordering(226) 00:17:25.920 fused_ordering(227) 00:17:25.920 fused_ordering(228) 00:17:25.920 fused_ordering(229) 00:17:25.920 fused_ordering(230) 00:17:25.920 fused_ordering(231) 00:17:25.920 fused_ordering(232) 00:17:25.920 fused_ordering(233) 00:17:25.920 fused_ordering(234) 00:17:25.920 fused_ordering(235) 00:17:25.920 fused_ordering(236) 00:17:25.920 fused_ordering(237) 00:17:25.920 fused_ordering(238) 00:17:25.920 fused_ordering(239) 00:17:25.920 fused_ordering(240) 00:17:25.920 fused_ordering(241) 00:17:25.920 fused_ordering(242) 00:17:25.920 fused_ordering(243) 00:17:25.920 fused_ordering(244) 00:17:25.920 fused_ordering(245) 00:17:25.920 fused_ordering(246) 00:17:25.920 fused_ordering(247) 00:17:25.920 fused_ordering(248) 00:17:25.920 fused_ordering(249) 00:17:25.920 fused_ordering(250) 00:17:25.920 fused_ordering(251) 00:17:25.920 fused_ordering(252) 00:17:25.920 fused_ordering(253) 00:17:25.920 fused_ordering(254) 00:17:25.920 fused_ordering(255) 00:17:25.920 fused_ordering(256) 00:17:25.920 fused_ordering(257) 00:17:25.920 fused_ordering(258) 00:17:25.920 fused_ordering(259) 00:17:25.920 fused_ordering(260) 00:17:25.920 fused_ordering(261) 00:17:25.920 fused_ordering(262) 00:17:25.920 fused_ordering(263) 00:17:25.920 fused_ordering(264) 00:17:25.920 fused_ordering(265) 00:17:25.920 fused_ordering(266) 00:17:25.920 fused_ordering(267) 00:17:25.920 fused_ordering(268) 00:17:25.920 fused_ordering(269) 00:17:25.920 fused_ordering(270) 00:17:25.920 fused_ordering(271) 00:17:25.920 fused_ordering(272) 00:17:25.920 fused_ordering(273) 00:17:25.920 fused_ordering(274) 00:17:25.920 fused_ordering(275) 00:17:25.920 fused_ordering(276) 00:17:25.920 fused_ordering(277) 00:17:25.920 fused_ordering(278) 00:17:25.920 fused_ordering(279) 00:17:25.920 fused_ordering(280) 00:17:25.920 fused_ordering(281) 00:17:25.920 fused_ordering(282) 00:17:25.920 fused_ordering(283) 00:17:25.920 fused_ordering(284) 00:17:25.920 fused_ordering(285) 00:17:25.920 fused_ordering(286) 00:17:25.920 fused_ordering(287) 00:17:25.920 fused_ordering(288) 00:17:25.920 fused_ordering(289) 00:17:25.920 fused_ordering(290) 00:17:25.920 fused_ordering(291) 00:17:25.920 fused_ordering(292) 00:17:25.920 fused_ordering(293) 00:17:25.920 fused_ordering(294) 00:17:25.920 fused_ordering(295) 00:17:25.920 fused_ordering(296) 00:17:25.920 fused_ordering(297) 00:17:25.920 fused_ordering(298) 00:17:25.920 fused_ordering(299) 00:17:25.920 fused_ordering(300) 00:17:25.920 fused_ordering(301) 00:17:25.920 fused_ordering(302) 00:17:25.920 fused_ordering(303) 00:17:25.920 fused_ordering(304) 00:17:25.920 fused_ordering(305) 00:17:25.920 fused_ordering(306) 00:17:25.920 fused_ordering(307) 00:17:25.920 fused_ordering(308) 00:17:25.920 fused_ordering(309) 00:17:25.920 fused_ordering(310) 00:17:25.920 fused_ordering(311) 00:17:25.920 fused_ordering(312) 00:17:25.920 fused_ordering(313) 
00:17:25.920 fused_ordering(314) 00:17:25.920 fused_ordering(315) 00:17:25.920 fused_ordering(316) 00:17:25.920 fused_ordering(317) 00:17:25.920 fused_ordering(318) 00:17:25.920 fused_ordering(319) 00:17:25.920 fused_ordering(320) 00:17:25.920 fused_ordering(321) 00:17:25.920 fused_ordering(322) 00:17:25.920 fused_ordering(323) 00:17:25.920 fused_ordering(324) 00:17:25.920 fused_ordering(325) 00:17:25.920 fused_ordering(326) 00:17:25.920 fused_ordering(327) 00:17:25.920 fused_ordering(328) 00:17:25.920 fused_ordering(329) 00:17:25.920 fused_ordering(330) 00:17:25.920 fused_ordering(331) 00:17:25.920 fused_ordering(332) 00:17:25.920 fused_ordering(333) 00:17:25.920 fused_ordering(334) 00:17:25.920 fused_ordering(335) 00:17:25.920 fused_ordering(336) 00:17:25.920 fused_ordering(337) 00:17:25.920 fused_ordering(338) 00:17:25.920 fused_ordering(339) 00:17:25.920 fused_ordering(340) 00:17:25.920 fused_ordering(341) 00:17:25.920 fused_ordering(342) 00:17:25.920 fused_ordering(343) 00:17:25.920 fused_ordering(344) 00:17:25.920 fused_ordering(345) 00:17:25.920 fused_ordering(346) 00:17:25.920 fused_ordering(347) 00:17:25.920 fused_ordering(348) 00:17:25.920 fused_ordering(349) 00:17:25.920 fused_ordering(350) 00:17:25.920 fused_ordering(351) 00:17:25.920 fused_ordering(352) 00:17:25.920 fused_ordering(353) 00:17:25.920 fused_ordering(354) 00:17:25.920 fused_ordering(355) 00:17:25.921 fused_ordering(356) 00:17:25.921 fused_ordering(357) 00:17:25.921 fused_ordering(358) 00:17:25.921 fused_ordering(359) 00:17:25.921 fused_ordering(360) 00:17:25.921 fused_ordering(361) 00:17:25.921 fused_ordering(362) 00:17:25.921 fused_ordering(363) 00:17:25.921 fused_ordering(364) 00:17:25.921 fused_ordering(365) 00:17:25.921 fused_ordering(366) 00:17:25.921 fused_ordering(367) 00:17:25.921 fused_ordering(368) 00:17:25.921 fused_ordering(369) 00:17:25.921 fused_ordering(370) 00:17:25.921 fused_ordering(371) 00:17:25.921 fused_ordering(372) 00:17:25.921 fused_ordering(373) 00:17:25.921 fused_ordering(374) 00:17:25.921 fused_ordering(375) 00:17:25.921 fused_ordering(376) 00:17:25.921 fused_ordering(377) 00:17:25.921 fused_ordering(378) 00:17:25.921 fused_ordering(379) 00:17:25.921 fused_ordering(380) 00:17:25.921 fused_ordering(381) 00:17:25.921 fused_ordering(382) 00:17:25.921 fused_ordering(383) 00:17:25.921 fused_ordering(384) 00:17:25.921 fused_ordering(385) 00:17:25.921 fused_ordering(386) 00:17:25.921 fused_ordering(387) 00:17:25.921 fused_ordering(388) 00:17:25.921 fused_ordering(389) 00:17:25.921 fused_ordering(390) 00:17:25.921 fused_ordering(391) 00:17:25.921 fused_ordering(392) 00:17:25.921 fused_ordering(393) 00:17:25.921 fused_ordering(394) 00:17:25.921 fused_ordering(395) 00:17:25.921 fused_ordering(396) 00:17:25.921 fused_ordering(397) 00:17:25.921 fused_ordering(398) 00:17:25.921 fused_ordering(399) 00:17:25.921 fused_ordering(400) 00:17:25.921 fused_ordering(401) 00:17:25.921 fused_ordering(402) 00:17:25.921 fused_ordering(403) 00:17:25.921 fused_ordering(404) 00:17:25.921 fused_ordering(405) 00:17:25.921 fused_ordering(406) 00:17:25.921 fused_ordering(407) 00:17:25.921 fused_ordering(408) 00:17:25.921 fused_ordering(409) 00:17:25.921 fused_ordering(410) 00:17:26.179 fused_ordering(411) 00:17:26.179 fused_ordering(412) 00:17:26.179 fused_ordering(413) 00:17:26.179 fused_ordering(414) 00:17:26.179 fused_ordering(415) 00:17:26.179 fused_ordering(416) 00:17:26.179 fused_ordering(417) 00:17:26.179 fused_ordering(418) 00:17:26.179 fused_ordering(419) 00:17:26.179 fused_ordering(420) 00:17:26.179 
fused_ordering(421) 00:17:26.179 fused_ordering(422) 00:17:26.179 fused_ordering(423) 00:17:26.179 fused_ordering(424) 00:17:26.179 fused_ordering(425) 00:17:26.179 fused_ordering(426) 00:17:26.179 fused_ordering(427) 00:17:26.179 fused_ordering(428) 00:17:26.179 fused_ordering(429) 00:17:26.179 fused_ordering(430) 00:17:26.179 fused_ordering(431) 00:17:26.179 fused_ordering(432) 00:17:26.179 fused_ordering(433) 00:17:26.179 fused_ordering(434) 00:17:26.179 fused_ordering(435) 00:17:26.179 fused_ordering(436) 00:17:26.179 fused_ordering(437) 00:17:26.179 fused_ordering(438) 00:17:26.179 fused_ordering(439) 00:17:26.179 fused_ordering(440) 00:17:26.179 fused_ordering(441) 00:17:26.179 fused_ordering(442) 00:17:26.179 fused_ordering(443) 00:17:26.179 fused_ordering(444) 00:17:26.179 fused_ordering(445) 00:17:26.179 fused_ordering(446) 00:17:26.179 fused_ordering(447) 00:17:26.179 fused_ordering(448) 00:17:26.179 fused_ordering(449) 00:17:26.179 fused_ordering(450) 00:17:26.179 fused_ordering(451) 00:17:26.179 fused_ordering(452) 00:17:26.179 fused_ordering(453) 00:17:26.179 fused_ordering(454) 00:17:26.179 fused_ordering(455) 00:17:26.179 fused_ordering(456) 00:17:26.179 fused_ordering(457) 00:17:26.179 fused_ordering(458) 00:17:26.179 fused_ordering(459) 00:17:26.179 fused_ordering(460) 00:17:26.179 fused_ordering(461) 00:17:26.179 fused_ordering(462) 00:17:26.179 fused_ordering(463) 00:17:26.179 fused_ordering(464) 00:17:26.179 fused_ordering(465) 00:17:26.179 fused_ordering(466) 00:17:26.179 fused_ordering(467) 00:17:26.179 fused_ordering(468) 00:17:26.179 fused_ordering(469) 00:17:26.179 fused_ordering(470) 00:17:26.179 fused_ordering(471) 00:17:26.179 fused_ordering(472) 00:17:26.179 fused_ordering(473) 00:17:26.179 fused_ordering(474) 00:17:26.179 fused_ordering(475) 00:17:26.179 fused_ordering(476) 00:17:26.179 fused_ordering(477) 00:17:26.179 fused_ordering(478) 00:17:26.179 fused_ordering(479) 00:17:26.179 fused_ordering(480) 00:17:26.179 fused_ordering(481) 00:17:26.179 fused_ordering(482) 00:17:26.179 fused_ordering(483) 00:17:26.179 fused_ordering(484) 00:17:26.179 fused_ordering(485) 00:17:26.179 fused_ordering(486) 00:17:26.179 fused_ordering(487) 00:17:26.179 fused_ordering(488) 00:17:26.179 fused_ordering(489) 00:17:26.179 fused_ordering(490) 00:17:26.179 fused_ordering(491) 00:17:26.179 fused_ordering(492) 00:17:26.179 fused_ordering(493) 00:17:26.179 fused_ordering(494) 00:17:26.179 fused_ordering(495) 00:17:26.179 fused_ordering(496) 00:17:26.179 fused_ordering(497) 00:17:26.179 fused_ordering(498) 00:17:26.179 fused_ordering(499) 00:17:26.179 fused_ordering(500) 00:17:26.179 fused_ordering(501) 00:17:26.179 fused_ordering(502) 00:17:26.179 fused_ordering(503) 00:17:26.179 fused_ordering(504) 00:17:26.179 fused_ordering(505) 00:17:26.179 fused_ordering(506) 00:17:26.179 fused_ordering(507) 00:17:26.179 fused_ordering(508) 00:17:26.179 fused_ordering(509) 00:17:26.179 fused_ordering(510) 00:17:26.179 fused_ordering(511) 00:17:26.179 fused_ordering(512) 00:17:26.179 fused_ordering(513) 00:17:26.179 fused_ordering(514) 00:17:26.179 fused_ordering(515) 00:17:26.179 fused_ordering(516) 00:17:26.179 fused_ordering(517) 00:17:26.179 fused_ordering(518) 00:17:26.179 fused_ordering(519) 00:17:26.179 fused_ordering(520) 00:17:26.179 fused_ordering(521) 00:17:26.179 fused_ordering(522) 00:17:26.179 fused_ordering(523) 00:17:26.179 fused_ordering(524) 00:17:26.179 fused_ordering(525) 00:17:26.179 fused_ordering(526) 00:17:26.179 fused_ordering(527) 00:17:26.179 fused_ordering(528) 
00:17:26.179 fused_ordering(529) 00:17:26.179 fused_ordering(530) 00:17:26.179 fused_ordering(531) 00:17:26.179 fused_ordering(532) 00:17:26.179 fused_ordering(533) 00:17:26.179 fused_ordering(534) 00:17:26.179 fused_ordering(535) 00:17:26.179 fused_ordering(536) 00:17:26.179 fused_ordering(537) 00:17:26.179 fused_ordering(538) 00:17:26.179 fused_ordering(539) 00:17:26.179 fused_ordering(540) 00:17:26.179 fused_ordering(541) 00:17:26.179 fused_ordering(542) 00:17:26.179 fused_ordering(543) 00:17:26.179 fused_ordering(544) 00:17:26.179 fused_ordering(545) 00:17:26.179 fused_ordering(546) 00:17:26.179 fused_ordering(547) 00:17:26.179 fused_ordering(548) 00:17:26.179 fused_ordering(549) 00:17:26.179 fused_ordering(550) 00:17:26.179 fused_ordering(551) 00:17:26.179 fused_ordering(552) 00:17:26.179 fused_ordering(553) 00:17:26.179 fused_ordering(554) 00:17:26.179 fused_ordering(555) 00:17:26.179 fused_ordering(556) 00:17:26.179 fused_ordering(557) 00:17:26.179 fused_ordering(558) 00:17:26.179 fused_ordering(559) 00:17:26.179 fused_ordering(560) 00:17:26.179 fused_ordering(561) 00:17:26.179 fused_ordering(562) 00:17:26.179 fused_ordering(563) 00:17:26.179 fused_ordering(564) 00:17:26.179 fused_ordering(565) 00:17:26.179 fused_ordering(566) 00:17:26.179 fused_ordering(567) 00:17:26.179 fused_ordering(568) 00:17:26.179 fused_ordering(569) 00:17:26.179 fused_ordering(570) 00:17:26.179 fused_ordering(571) 00:17:26.179 fused_ordering(572) 00:17:26.179 fused_ordering(573) 00:17:26.179 fused_ordering(574) 00:17:26.179 fused_ordering(575) 00:17:26.179 fused_ordering(576) 00:17:26.179 fused_ordering(577) 00:17:26.179 fused_ordering(578) 00:17:26.179 fused_ordering(579) 00:17:26.179 fused_ordering(580) 00:17:26.179 fused_ordering(581) 00:17:26.179 fused_ordering(582) 00:17:26.179 fused_ordering(583) 00:17:26.179 fused_ordering(584) 00:17:26.179 fused_ordering(585) 00:17:26.179 fused_ordering(586) 00:17:26.179 fused_ordering(587) 00:17:26.179 fused_ordering(588) 00:17:26.179 fused_ordering(589) 00:17:26.179 fused_ordering(590) 00:17:26.179 fused_ordering(591) 00:17:26.179 fused_ordering(592) 00:17:26.179 fused_ordering(593) 00:17:26.179 fused_ordering(594) 00:17:26.179 fused_ordering(595) 00:17:26.179 fused_ordering(596) 00:17:26.179 fused_ordering(597) 00:17:26.179 fused_ordering(598) 00:17:26.179 fused_ordering(599) 00:17:26.179 fused_ordering(600) 00:17:26.179 fused_ordering(601) 00:17:26.179 fused_ordering(602) 00:17:26.179 fused_ordering(603) 00:17:26.179 fused_ordering(604) 00:17:26.179 fused_ordering(605) 00:17:26.179 fused_ordering(606) 00:17:26.179 fused_ordering(607) 00:17:26.179 fused_ordering(608) 00:17:26.179 fused_ordering(609) 00:17:26.179 fused_ordering(610) 00:17:26.179 fused_ordering(611) 00:17:26.179 fused_ordering(612) 00:17:26.179 fused_ordering(613) 00:17:26.179 fused_ordering(614) 00:17:26.179 fused_ordering(615) 00:17:26.745 fused_ordering(616) 00:17:26.745 fused_ordering(617) 00:17:26.745 fused_ordering(618) 00:17:26.745 fused_ordering(619) 00:17:26.745 fused_ordering(620) 00:17:26.745 fused_ordering(621) 00:17:26.745 fused_ordering(622) 00:17:26.745 fused_ordering(623) 00:17:26.745 fused_ordering(624) 00:17:26.745 fused_ordering(625) 00:17:26.745 fused_ordering(626) 00:17:26.745 fused_ordering(627) 00:17:26.745 fused_ordering(628) 00:17:26.745 fused_ordering(629) 00:17:26.745 fused_ordering(630) 00:17:26.745 fused_ordering(631) 00:17:26.745 fused_ordering(632) 00:17:26.745 fused_ordering(633) 00:17:26.745 fused_ordering(634) 00:17:26.745 fused_ordering(635) 00:17:26.745 
fused_ordering(636) 00:17:26.745 fused_ordering(637) 00:17:26.745 fused_ordering(638) 00:17:26.745 fused_ordering(639) 00:17:26.745 fused_ordering(640) 00:17:26.745 fused_ordering(641) 00:17:26.745 fused_ordering(642) 00:17:26.745 fused_ordering(643) 00:17:26.745 fused_ordering(644) 00:17:26.745 fused_ordering(645) 00:17:26.745 fused_ordering(646) 00:17:26.745 fused_ordering(647) 00:17:26.745 fused_ordering(648) 00:17:26.745 fused_ordering(649) 00:17:26.745 fused_ordering(650) 00:17:26.745 fused_ordering(651) 00:17:26.745 fused_ordering(652) 00:17:26.745 fused_ordering(653) 00:17:26.745 fused_ordering(654) 00:17:26.745 fused_ordering(655) 00:17:26.745 fused_ordering(656) 00:17:26.745 fused_ordering(657) 00:17:26.745 fused_ordering(658) 00:17:26.745 fused_ordering(659) 00:17:26.745 fused_ordering(660) 00:17:26.745 fused_ordering(661) 00:17:26.745 fused_ordering(662) 00:17:26.745 fused_ordering(663) 00:17:26.745 fused_ordering(664) 00:17:26.745 fused_ordering(665) 00:17:26.745 fused_ordering(666) 00:17:26.745 fused_ordering(667) 00:17:26.745 fused_ordering(668) 00:17:26.745 fused_ordering(669) 00:17:26.745 fused_ordering(670) 00:17:26.745 fused_ordering(671) 00:17:26.745 fused_ordering(672) 00:17:26.745 fused_ordering(673) 00:17:26.745 fused_ordering(674) 00:17:26.745 fused_ordering(675) 00:17:26.745 fused_ordering(676) 00:17:26.745 fused_ordering(677) 00:17:26.745 fused_ordering(678) 00:17:26.745 fused_ordering(679) 00:17:26.745 fused_ordering(680) 00:17:26.745 fused_ordering(681) 00:17:26.745 fused_ordering(682) 00:17:26.745 fused_ordering(683) 00:17:26.745 fused_ordering(684) 00:17:26.745 fused_ordering(685) 00:17:26.745 fused_ordering(686) 00:17:26.745 fused_ordering(687) 00:17:26.745 fused_ordering(688) 00:17:26.745 fused_ordering(689) 00:17:26.745 fused_ordering(690) 00:17:26.745 fused_ordering(691) 00:17:26.745 fused_ordering(692) 00:17:26.745 fused_ordering(693) 00:17:26.745 fused_ordering(694) 00:17:26.745 fused_ordering(695) 00:17:26.745 fused_ordering(696) 00:17:26.745 fused_ordering(697) 00:17:26.745 fused_ordering(698) 00:17:26.745 fused_ordering(699) 00:17:26.745 fused_ordering(700) 00:17:26.745 fused_ordering(701) 00:17:26.745 fused_ordering(702) 00:17:26.745 fused_ordering(703) 00:17:26.745 fused_ordering(704) 00:17:26.745 fused_ordering(705) 00:17:26.745 fused_ordering(706) 00:17:26.745 fused_ordering(707) 00:17:26.745 fused_ordering(708) 00:17:26.745 fused_ordering(709) 00:17:26.745 fused_ordering(710) 00:17:26.745 fused_ordering(711) 00:17:26.745 fused_ordering(712) 00:17:26.745 fused_ordering(713) 00:17:26.745 fused_ordering(714) 00:17:26.745 fused_ordering(715) 00:17:26.745 fused_ordering(716) 00:17:26.745 fused_ordering(717) 00:17:26.745 fused_ordering(718) 00:17:26.745 fused_ordering(719) 00:17:26.745 fused_ordering(720) 00:17:26.745 fused_ordering(721) 00:17:26.745 fused_ordering(722) 00:17:26.745 fused_ordering(723) 00:17:26.745 fused_ordering(724) 00:17:26.745 fused_ordering(725) 00:17:26.745 fused_ordering(726) 00:17:26.745 fused_ordering(727) 00:17:26.745 fused_ordering(728) 00:17:26.745 fused_ordering(729) 00:17:26.745 fused_ordering(730) 00:17:26.745 fused_ordering(731) 00:17:26.745 fused_ordering(732) 00:17:26.745 fused_ordering(733) 00:17:26.745 fused_ordering(734) 00:17:26.745 fused_ordering(735) 00:17:26.745 fused_ordering(736) 00:17:26.745 fused_ordering(737) 00:17:26.745 fused_ordering(738) 00:17:26.745 fused_ordering(739) 00:17:26.745 fused_ordering(740) 00:17:26.745 fused_ordering(741) 00:17:26.745 fused_ordering(742) 00:17:26.745 fused_ordering(743) 
00:17:26.745 fused_ordering(744) 00:17:26.745 fused_ordering(745) 00:17:26.745 fused_ordering(746) 00:17:26.745 fused_ordering(747) 00:17:26.745 fused_ordering(748) 00:17:26.745 fused_ordering(749) 00:17:26.745 fused_ordering(750) 00:17:26.745 fused_ordering(751) 00:17:26.745 fused_ordering(752) 00:17:26.745 fused_ordering(753) 00:17:26.745 fused_ordering(754) 00:17:26.745 fused_ordering(755) 00:17:26.745 fused_ordering(756) 00:17:26.745 fused_ordering(757) 00:17:26.745 fused_ordering(758) 00:17:26.745 fused_ordering(759) 00:17:26.745 fused_ordering(760) 00:17:26.745 fused_ordering(761) 00:17:26.745 fused_ordering(762) 00:17:26.746 fused_ordering(763) 00:17:26.746 fused_ordering(764) 00:17:26.746 fused_ordering(765) 00:17:26.746 fused_ordering(766) 00:17:26.746 fused_ordering(767) 00:17:26.746 fused_ordering(768) 00:17:26.746 fused_ordering(769) 00:17:26.746 fused_ordering(770) 00:17:26.746 fused_ordering(771) 00:17:26.746 fused_ordering(772) 00:17:26.746 fused_ordering(773) 00:17:26.746 fused_ordering(774) 00:17:26.746 fused_ordering(775) 00:17:26.746 fused_ordering(776) 00:17:26.746 fused_ordering(777) 00:17:26.746 fused_ordering(778) 00:17:26.746 fused_ordering(779) 00:17:26.746 fused_ordering(780) 00:17:26.746 fused_ordering(781) 00:17:26.746 fused_ordering(782) 00:17:26.746 fused_ordering(783) 00:17:26.746 fused_ordering(784) 00:17:26.746 fused_ordering(785) 00:17:26.746 fused_ordering(786) 00:17:26.746 fused_ordering(787) 00:17:26.746 fused_ordering(788) 00:17:26.746 fused_ordering(789) 00:17:26.746 fused_ordering(790) 00:17:26.746 fused_ordering(791) 00:17:26.746 fused_ordering(792) 00:17:26.746 fused_ordering(793) 00:17:26.746 fused_ordering(794) 00:17:26.746 fused_ordering(795) 00:17:26.746 fused_ordering(796) 00:17:26.746 fused_ordering(797) 00:17:26.746 fused_ordering(798) 00:17:26.746 fused_ordering(799) 00:17:26.746 fused_ordering(800) 00:17:26.746 fused_ordering(801) 00:17:26.746 fused_ordering(802) 00:17:26.746 fused_ordering(803) 00:17:26.746 fused_ordering(804) 00:17:26.746 fused_ordering(805) 00:17:26.746 fused_ordering(806) 00:17:26.746 fused_ordering(807) 00:17:26.746 fused_ordering(808) 00:17:26.746 fused_ordering(809) 00:17:26.746 fused_ordering(810) 00:17:26.746 fused_ordering(811) 00:17:26.746 fused_ordering(812) 00:17:26.746 fused_ordering(813) 00:17:26.746 fused_ordering(814) 00:17:26.746 fused_ordering(815) 00:17:26.746 fused_ordering(816) 00:17:26.746 fused_ordering(817) 00:17:26.746 fused_ordering(818) 00:17:26.746 fused_ordering(819) 00:17:26.746 fused_ordering(820) 00:17:27.680 fused_ordering(821) 00:17:27.680 fused_ordering(822) 00:17:27.680 fused_ordering(823) 00:17:27.680 fused_ordering(824) 00:17:27.680 fused_ordering(825) 00:17:27.680 fused_ordering(826) 00:17:27.680 fused_ordering(827) 00:17:27.680 fused_ordering(828) 00:17:27.680 fused_ordering(829) 00:17:27.680 fused_ordering(830) 00:17:27.680 fused_ordering(831) 00:17:27.680 fused_ordering(832) 00:17:27.680 fused_ordering(833) 00:17:27.680 fused_ordering(834) 00:17:27.680 fused_ordering(835) 00:17:27.680 fused_ordering(836) 00:17:27.680 fused_ordering(837) 00:17:27.680 fused_ordering(838) 00:17:27.680 fused_ordering(839) 00:17:27.680 fused_ordering(840) 00:17:27.680 fused_ordering(841) 00:17:27.680 fused_ordering(842) 00:17:27.680 fused_ordering(843) 00:17:27.680 fused_ordering(844) 00:17:27.680 fused_ordering(845) 00:17:27.680 fused_ordering(846) 00:17:27.680 fused_ordering(847) 00:17:27.680 fused_ordering(848) 00:17:27.680 fused_ordering(849) 00:17:27.680 fused_ordering(850) 00:17:27.680 
fused_ordering(851) 00:17:27.680 fused_ordering(852) 00:17:27.680 fused_ordering(853) 00:17:27.680 fused_ordering(854) 00:17:27.680 fused_ordering(855) 00:17:27.680 fused_ordering(856) 00:17:27.680 fused_ordering(857) 00:17:27.680 fused_ordering(858) 00:17:27.680 fused_ordering(859) 00:17:27.680 fused_ordering(860) 00:17:27.680 fused_ordering(861) 00:17:27.680 fused_ordering(862) 00:17:27.680 fused_ordering(863) 00:17:27.680 fused_ordering(864) 00:17:27.680 fused_ordering(865) 00:17:27.680 fused_ordering(866) 00:17:27.680 fused_ordering(867) 00:17:27.680 fused_ordering(868) 00:17:27.680 fused_ordering(869) 00:17:27.680 fused_ordering(870) 00:17:27.680 fused_ordering(871) 00:17:27.680 fused_ordering(872) 00:17:27.680 fused_ordering(873) 00:17:27.680 fused_ordering(874) 00:17:27.680 fused_ordering(875) 00:17:27.680 fused_ordering(876) 00:17:27.680 fused_ordering(877) 00:17:27.680 fused_ordering(878) 00:17:27.680 fused_ordering(879) 00:17:27.680 fused_ordering(880) 00:17:27.680 fused_ordering(881) 00:17:27.680 fused_ordering(882) 00:17:27.680 fused_ordering(883) 00:17:27.680 fused_ordering(884) 00:17:27.680 fused_ordering(885) 00:17:27.680 fused_ordering(886) 00:17:27.680 fused_ordering(887) 00:17:27.680 fused_ordering(888) 00:17:27.680 fused_ordering(889) 00:17:27.680 fused_ordering(890) 00:17:27.680 fused_ordering(891) 00:17:27.680 fused_ordering(892) 00:17:27.680 fused_ordering(893) 00:17:27.680 fused_ordering(894) 00:17:27.680 fused_ordering(895) 00:17:27.680 fused_ordering(896) 00:17:27.680 fused_ordering(897) 00:17:27.680 fused_ordering(898) 00:17:27.680 fused_ordering(899) 00:17:27.680 fused_ordering(900) 00:17:27.680 fused_ordering(901) 00:17:27.680 fused_ordering(902) 00:17:27.681 fused_ordering(903) 00:17:27.681 fused_ordering(904) 00:17:27.681 fused_ordering(905) 00:17:27.681 fused_ordering(906) 00:17:27.681 fused_ordering(907) 00:17:27.681 fused_ordering(908) 00:17:27.681 fused_ordering(909) 00:17:27.681 fused_ordering(910) 00:17:27.681 fused_ordering(911) 00:17:27.681 fused_ordering(912) 00:17:27.681 fused_ordering(913) 00:17:27.681 fused_ordering(914) 00:17:27.681 fused_ordering(915) 00:17:27.681 fused_ordering(916) 00:17:27.681 fused_ordering(917) 00:17:27.681 fused_ordering(918) 00:17:27.681 fused_ordering(919) 00:17:27.681 fused_ordering(920) 00:17:27.681 fused_ordering(921) 00:17:27.681 fused_ordering(922) 00:17:27.681 fused_ordering(923) 00:17:27.681 fused_ordering(924) 00:17:27.681 fused_ordering(925) 00:17:27.681 fused_ordering(926) 00:17:27.681 fused_ordering(927) 00:17:27.681 fused_ordering(928) 00:17:27.681 fused_ordering(929) 00:17:27.681 fused_ordering(930) 00:17:27.681 fused_ordering(931) 00:17:27.681 fused_ordering(932) 00:17:27.681 fused_ordering(933) 00:17:27.681 fused_ordering(934) 00:17:27.681 fused_ordering(935) 00:17:27.681 fused_ordering(936) 00:17:27.681 fused_ordering(937) 00:17:27.681 fused_ordering(938) 00:17:27.681 fused_ordering(939) 00:17:27.681 fused_ordering(940) 00:17:27.681 fused_ordering(941) 00:17:27.681 fused_ordering(942) 00:17:27.681 fused_ordering(943) 00:17:27.681 fused_ordering(944) 00:17:27.681 fused_ordering(945) 00:17:27.681 fused_ordering(946) 00:17:27.681 fused_ordering(947) 00:17:27.681 fused_ordering(948) 00:17:27.681 fused_ordering(949) 00:17:27.681 fused_ordering(950) 00:17:27.681 fused_ordering(951) 00:17:27.681 fused_ordering(952) 00:17:27.681 fused_ordering(953) 00:17:27.681 fused_ordering(954) 00:17:27.681 fused_ordering(955) 00:17:27.681 fused_ordering(956) 00:17:27.681 fused_ordering(957) 00:17:27.681 fused_ordering(958) 
00:17:27.681 fused_ordering(959) 00:17:27.681 fused_ordering(960) 00:17:27.681 fused_ordering(961) 00:17:27.681 fused_ordering(962) 00:17:27.681 fused_ordering(963) 00:17:27.681 fused_ordering(964) 00:17:27.681 fused_ordering(965) 00:17:27.681 fused_ordering(966) 00:17:27.681 fused_ordering(967) 00:17:27.681 fused_ordering(968) 00:17:27.681 fused_ordering(969) 00:17:27.681 fused_ordering(970) 00:17:27.681 fused_ordering(971) 00:17:27.681 fused_ordering(972) 00:17:27.681 fused_ordering(973) 00:17:27.681 fused_ordering(974) 00:17:27.681 fused_ordering(975) 00:17:27.681 fused_ordering(976) 00:17:27.681 fused_ordering(977) 00:17:27.681 fused_ordering(978) 00:17:27.681 fused_ordering(979) 00:17:27.681 fused_ordering(980) 00:17:27.681 fused_ordering(981) 00:17:27.681 fused_ordering(982) 00:17:27.681 fused_ordering(983) 00:17:27.681 fused_ordering(984) 00:17:27.681 fused_ordering(985) 00:17:27.681 fused_ordering(986) 00:17:27.681 fused_ordering(987) 00:17:27.681 fused_ordering(988) 00:17:27.681 fused_ordering(989) 00:17:27.681 fused_ordering(990) 00:17:27.681 fused_ordering(991) 00:17:27.681 fused_ordering(992) 00:17:27.681 fused_ordering(993) 00:17:27.681 fused_ordering(994) 00:17:27.681 fused_ordering(995) 00:17:27.681 fused_ordering(996) 00:17:27.681 fused_ordering(997) 00:17:27.681 fused_ordering(998) 00:17:27.681 fused_ordering(999) 00:17:27.681 fused_ordering(1000) 00:17:27.681 fused_ordering(1001) 00:17:27.681 fused_ordering(1002) 00:17:27.681 fused_ordering(1003) 00:17:27.681 fused_ordering(1004) 00:17:27.681 fused_ordering(1005) 00:17:27.681 fused_ordering(1006) 00:17:27.681 fused_ordering(1007) 00:17:27.681 fused_ordering(1008) 00:17:27.681 fused_ordering(1009) 00:17:27.681 fused_ordering(1010) 00:17:27.681 fused_ordering(1011) 00:17:27.681 fused_ordering(1012) 00:17:27.681 fused_ordering(1013) 00:17:27.681 fused_ordering(1014) 00:17:27.681 fused_ordering(1015) 00:17:27.681 fused_ordering(1016) 00:17:27.681 fused_ordering(1017) 00:17:27.681 fused_ordering(1018) 00:17:27.681 fused_ordering(1019) 00:17:27.681 fused_ordering(1020) 00:17:27.681 fused_ordering(1021) 00:17:27.681 fused_ordering(1022) 00:17:27.681 fused_ordering(1023) 00:17:27.681 01:28:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:27.681 01:28:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:27.681 01:28:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:27.681 01:28:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:17:27.681 01:28:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:27.681 01:28:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:17:27.681 01:28:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:27.681 01:28:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:27.681 rmmod nvme_tcp 00:17:27.681 rmmod nvme_fabrics 00:17:27.681 rmmod nvme_keyring 00:17:27.681 01:28:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:27.681 01:28:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:17:27.681 01:28:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:17:27.681 01:28:12 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@515 -- # '[' -n 1583021 ']' 00:17:27.681 01:28:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # killprocess 1583021 00:17:27.681 01:28:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 1583021 ']' 00:17:27.681 01:28:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 1583021 00:17:27.681 01:28:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:17:27.681 01:28:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:27.681 01:28:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1583021 00:17:27.681 01:28:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:27.681 01:28:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:27.681 01:28:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1583021' 00:17:27.681 killing process with pid 1583021 00:17:27.681 01:28:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 1583021 00:17:27.681 01:28:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 1583021 00:17:27.681 01:28:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:27.681 01:28:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:17:27.681 01:28:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:17:27.681 01:28:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:17:27.681 01:28:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-save 00:17:27.681 01:28:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:17:27.681 01:28:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-restore 00:17:27.681 01:28:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:27.681 01:28:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:27.681 01:28:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.681 01:28:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:27.681 01:28:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:30.224 00:17:30.224 real 0m7.489s 00:17:30.224 user 0m5.147s 00:17:30.224 sys 0m3.131s 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:30.224 ************************************ 00:17:30.224 END TEST nvmf_fused_ordering 00:17:30.224 
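The fini path traced above (killprocess plus nvmf_tcp_fini) reduces to a handful of commands. A rough sketch, reusing the pid and interface names from this run; _remove_spdk_ns is not expanded in the trace, so the netns deletion line is an assumption about what that helper does:

  kill 1583021                                   # nvmf_tgt pid from this run; substitute your own
  modprobe -v -r nvme-tcp                        # unloads nvme_tcp/nvme_fabrics/nvme_keyring, as logged
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep all rules except the SPDK_NVMF ones (the iptr helper)
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null    # assumed equivalent of _remove_spdk_ns for this run
  ip -4 addr flush cvl_0_1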
************************************ 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:30.224 ************************************ 00:17:30.224 START TEST nvmf_ns_masking 00:17:30.224 ************************************ 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:17:30.224 * Looking for test storage... 00:17:30.224 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:30.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.224 --rc genhtml_branch_coverage=1 00:17:30.224 --rc genhtml_function_coverage=1 00:17:30.224 --rc genhtml_legend=1 00:17:30.224 --rc geninfo_all_blocks=1 00:17:30.224 --rc geninfo_unexecuted_blocks=1 00:17:30.224 00:17:30.224 ' 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:30.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.224 --rc genhtml_branch_coverage=1 00:17:30.224 --rc genhtml_function_coverage=1 00:17:30.224 --rc genhtml_legend=1 00:17:30.224 --rc geninfo_all_blocks=1 00:17:30.224 --rc geninfo_unexecuted_blocks=1 00:17:30.224 00:17:30.224 ' 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:30.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.224 --rc genhtml_branch_coverage=1 00:17:30.224 --rc genhtml_function_coverage=1 00:17:30.224 --rc genhtml_legend=1 00:17:30.224 --rc geninfo_all_blocks=1 00:17:30.224 --rc geninfo_unexecuted_blocks=1 00:17:30.224 00:17:30.224 ' 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:30.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.224 --rc genhtml_branch_coverage=1 00:17:30.224 --rc genhtml_function_coverage=1 00:17:30.224 --rc genhtml_legend=1 00:17:30.224 --rc geninfo_all_blocks=1 00:17:30.224 --rc geninfo_unexecuted_blocks=1 00:17:30.224 00:17:30.224 ' 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.224 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.225 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.225 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:17:30.225 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.225 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:17:30.225 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:30.225 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:30.225 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:30.225 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:30.225 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:30.225 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:30.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:30.225 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:30.225 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:30.225 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:30.225 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:30.225 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:17:30.225 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:17:30.225 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:17:30.225 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=f675aab2-5fe6-4e14-b864-8d5e9da28a83 00:17:30.225 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:17:30.225 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=3b6ac7ba-d218-4dc3-86cf-0b3901466ac1 00:17:30.225 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:30.225 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:17:30.225 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:17:30.225 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:17:30.225 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=b0714a60-879d-46a8-980e-4b2a7b525cda 00:17:30.225 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:17:30.225 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:30.225 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:30.225 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:30.225 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:30.225 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:30.225 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.225 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:30.225 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:30.225 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:17:30.225 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:17:30.225 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:17:30.225 01:28:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:32.128 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:32.128 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:17:32.128 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:32.128 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:32.128 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:32.128 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:32.128 01:28:17 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:32.128 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:17:32.128 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:32.128 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:17:32.128 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:17:32.128 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:17:32.128 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:17:32.128 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:17:32.128 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:17:32.128 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:32.128 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:32.128 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:32.128 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:32.128 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:32.128 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:32.128 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:32.128 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:32.128 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:32.128 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:32.128 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:32.128 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:32.128 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:32.128 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:32.128 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:32.128 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:32.128 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:32.128 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:32.128 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:32.128 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:32.128 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:32.128 01:28:17 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:32.128 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:32.128 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:32.128 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:32.128 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:32.128 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:32.128 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:32.128 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:32.128 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:32.128 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:32.128 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:32.128 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:32.128 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:32.128 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:32.129 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 
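(The xtrace records above come from gather_supported_nvmf_pci_devs in nvmf/common.sh: known Intel E810/X722 and Mellanox device IDs are matched against the PCI bus cache, and each matching PCI function is then resolved to its kernel net device through sysfs. A minimal stand-alone sketch of that sysfs lookup follows; the device addresses are the two E810 ports seen in this log, and the helper name list_net_devs_for_pci is purely illustrative.)

    #!/usr/bin/env bash
    # Resolve the net device name(s) backing a PCI function -- the same
    # "/sys/bus/pci/devices/$pci/net/"* expansion used at nvmf/common.sh@409.
    list_net_devs_for_pci() {              # hypothetical helper name
        local pci=$1                       # e.g. 0000:0a:00.0
        local devs=("/sys/bus/pci/devices/$pci/net/"*)
        [[ -e ${devs[0]} ]] || return 1    # no net device bound to this function
        printf '%s\n' "${devs[@]##*/}"     # keep only the interface name, e.g. cvl_0_0
    }

    for pci in 0000:0a:00.0 0000:0a:00.1; do
        echo "Found net devices under $pci: $(list_net_devs_for_pci "$pci")"
    done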
00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:32.129 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # is_hw=yes 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:32.129 01:28:17 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:32.129 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:32.129 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:17:32.129 00:17:32.129 --- 10.0.0.2 ping statistics --- 00:17:32.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:32.129 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:32.129 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:32.129 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:17:32.129 00:17:32.129 --- 10.0.0.1 ping statistics --- 00:17:32.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:32.129 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # return 0 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # nvmfpid=1585271 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # waitforlisten 1585271 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1585271 ']' 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:32.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:32.129 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:32.129 [2024-10-13 01:28:17.664391] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:17:32.129 [2024-10-13 01:28:17.664582] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:32.388 [2024-10-13 01:28:17.729117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.388 [2024-10-13 01:28:17.773117] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:32.388 [2024-10-13 01:28:17.773172] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:32.388 [2024-10-13 01:28:17.773194] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:32.388 [2024-10-13 01:28:17.773214] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:32.388 [2024-10-13 01:28:17.773225] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
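(Everything from nvmf_tcp_init up to the nvmf_tgt start notices above boils down to a small amount of plumbing: the target-side port is moved into a private network namespace, both sides get 10.0.0.x/24 addresses, TCP port 4420 is opened in the firewall, connectivity is verified with ping in both directions, and the target application is launched inside the namespace. A condensed sketch, run as root, using the interface names and addresses from this log; the nvmf_tgt path is shortened from the full Jenkins workspace path.)

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                    # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1             # target -> initiator
    modprobe nvme-tcp
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &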
00:17:32.388 [2024-10-13 01:28:17.773956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.388 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:32.388 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:17:32.388 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:32.388 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:32.388 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:32.388 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:32.388 01:28:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:32.646 [2024-10-13 01:28:18.160905] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:32.646 01:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:17:32.646 01:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:17:32.646 01:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:33.210 Malloc1 00:17:33.210 01:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:33.468 Malloc2 00:17:33.468 01:28:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:33.726 01:28:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:33.984 01:28:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:34.242 [2024-10-13 01:28:19.799636] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:34.500 01:28:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:17:34.500 01:28:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b0714a60-879d-46a8-980e-4b2a7b525cda -a 10.0.0.2 -s 4420 -i 4 00:17:34.500 01:28:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:17:34.500 01:28:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:34.500 01:28:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:34.500 01:28:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:34.500 
01:28:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:36.399 01:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:36.399 01:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:36.399 01:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:36.399 01:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:36.399 01:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:36.399 01:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:36.399 01:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:36.399 01:28:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:36.657 01:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:36.657 01:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:36.657 01:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:17:36.657 01:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:36.657 01:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:36.657 [ 0]:0x1 00:17:36.657 01:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:36.657 01:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:36.657 01:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=375167fa2b8841f783717d4a3a43fc65 00:17:36.657 01:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 375167fa2b8841f783717d4a3a43fc65 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:36.657 01:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:17:36.915 01:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:17:36.915 01:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:36.915 01:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:36.915 [ 0]:0x1 00:17:36.915 01:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:36.915 01:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:36.915 01:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=375167fa2b8841f783717d4a3a43fc65 00:17:36.915 01:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 375167fa2b8841f783717d4a3a43fc65 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:36.915 01:28:22 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:17:36.915 01:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:36.915 01:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:36.915 [ 1]:0x2 00:17:36.915 01:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:36.915 01:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:36.915 01:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fad846e3a2654dc0b39f02863889cdcc 00:17:36.915 01:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fad846e3a2654dc0b39f02863889cdcc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:36.915 01:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:17:36.915 01:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:37.173 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:37.173 01:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:37.431 01:28:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:17:37.689 01:28:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:17:37.689 01:28:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b0714a60-879d-46a8-980e-4b2a7b525cda -a 10.0.0.2 -s 4420 -i 4 00:17:37.689 01:28:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:17:37.689 01:28:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:37.689 01:28:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:37.689 01:28:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:17:37.689 01:28:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:17:37.689 01:28:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:39.597 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:39.597 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:39.597 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:39.597 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:39.597 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:39.597 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # 
return 0 00:17:39.597 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:39.597 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:39.860 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:39.860 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:39.860 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:17:39.860 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:39.860 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:39.860 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:39.860 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:39.860 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:39.860 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:39.860 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:39.860 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:39.860 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:39.860 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:39.860 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:39.860 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:39.860 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:39.860 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:39.860 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:39.860 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:39.860 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:39.860 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:17:39.860 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:39.860 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:39.860 [ 0]:0x2 00:17:39.860 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:39.860 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:39.860 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=fad846e3a2654dc0b39f02863889cdcc 00:17:39.860 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fad846e3a2654dc0b39f02863889cdcc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:39.860 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:40.118 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:17:40.118 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:40.119 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:40.119 [ 0]:0x1 00:17:40.119 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:40.119 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:40.119 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=375167fa2b8841f783717d4a3a43fc65 00:17:40.119 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 375167fa2b8841f783717d4a3a43fc65 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:40.119 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:17:40.119 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:40.119 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:40.377 [ 1]:0x2 00:17:40.377 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:40.377 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:40.377 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fad846e3a2654dc0b39f02863889cdcc 00:17:40.377 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fad846e3a2654dc0b39f02863889cdcc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:40.377 01:28:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:40.635 01:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:17:40.635 01:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:40.635 01:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:40.635 01:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:40.635 01:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:40.635 01:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:40.635 01:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:40.635 01:28:26 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:40.635 01:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:40.635 01:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:40.635 01:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:40.635 01:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:40.635 01:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:40.635 01:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:40.635 01:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:40.635 01:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:40.635 01:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:40.635 01:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:40.635 01:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:17:40.635 01:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:40.635 01:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:40.635 [ 0]:0x2 00:17:40.635 01:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:40.635 01:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:40.635 01:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fad846e3a2654dc0b39f02863889cdcc 00:17:40.635 01:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fad846e3a2654dc0b39f02863889cdcc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:40.635 01:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:17:40.635 01:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:40.635 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:40.635 01:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:40.893 01:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:17:40.893 01:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b0714a60-879d-46a8-980e-4b2a7b525cda -a 10.0.0.2 -s 4420 -i 4 00:17:41.151 01:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:41.151 01:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:17:41.151 01:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:41.151 01:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:17:41.151 01:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:17:41.151 01:28:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:17:43.679 01:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:43.679 01:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:43.679 01:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:43.679 01:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:17:43.679 01:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:43.679 01:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:17:43.679 01:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:43.679 01:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:43.679 01:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:43.679 01:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:43.679 01:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:17:43.679 01:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:43.679 01:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:43.679 [ 0]:0x1 00:17:43.679 01:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:43.679 01:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:43.679 01:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=375167fa2b8841f783717d4a3a43fc65 00:17:43.679 01:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 375167fa2b8841f783717d4a3a43fc65 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:43.679 01:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:17:43.679 01:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:43.679 01:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:43.679 [ 1]:0x2 00:17:43.679 01:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:43.679 01:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:43.679 01:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fad846e3a2654dc0b39f02863889cdcc 00:17:43.679 01:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fad846e3a2654dc0b39f02863889cdcc != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:43.679 01:28:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:43.937 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:17:43.937 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:43.937 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:43.937 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:43.937 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:43.937 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:43.937 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:43.937 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:43.937 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:43.937 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:43.937 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:43.937 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:43.937 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:43.937 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:43.937 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:43.937 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:43.937 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:43.937 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:43.937 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:17:43.937 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:43.937 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:43.937 [ 0]:0x2 00:17:43.937 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:43.937 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:43.937 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fad846e3a2654dc0b39f02863889cdcc 00:17:43.937 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fad846e3a2654dc0b39f02863889cdcc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:43.937 01:28:29 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:43.937 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:43.937 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:43.937 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:43.937 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:43.937 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:43.937 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:43.937 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:43.937 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:43.937 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:43.937 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:43.937 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:44.195 [2024-10-13 01:28:29.669268] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:17:44.195 request: 00:17:44.195 { 00:17:44.195 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:44.195 "nsid": 2, 00:17:44.195 "host": "nqn.2016-06.io.spdk:host1", 00:17:44.195 "method": "nvmf_ns_remove_host", 00:17:44.195 "req_id": 1 00:17:44.195 } 00:17:44.195 Got JSON-RPC error response 00:17:44.195 response: 00:17:44.195 { 00:17:44.195 "code": -32602, 00:17:44.195 "message": "Invalid parameters" 00:17:44.195 } 00:17:44.195 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:44.195 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:44.195 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:44.195 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:44.195 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:17:44.195 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:17:44.195 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:17:44.195 01:28:29 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:17:44.195 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:44.195 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:17:44.195 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:44.195 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:17:44.195 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:44.195 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:44.195 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:44.195 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:44.195 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:44.195 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:44.195 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:17:44.195 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:44.195 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:44.195 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:44.195 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:17:44.195 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:44.195 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:44.195 [ 0]:0x2 00:17:44.195 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:44.195 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:44.453 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fad846e3a2654dc0b39f02863889cdcc 00:17:44.453 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fad846e3a2654dc0b39f02863889cdcc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:44.453 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:17:44.453 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:44.453 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:44.453 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1586862 00:17:44.453 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:17:44.453 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:17:44.453 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1586862 /var/tmp/host.sock 00:17:44.453 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1586862 ']' 00:17:44.453 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:17:44.453 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:44.453 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:44.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:44.453 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:44.453 01:28:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:44.453 [2024-10-13 01:28:29.884634] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:17:44.453 [2024-10-13 01:28:29.884711] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1586862 ] 00:17:44.453 [2024-10-13 01:28:29.944113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.453 [2024-10-13 01:28:29.989835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:44.712 01:28:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:44.712 01:28:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:17:44.712 01:28:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:45.276 01:28:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:45.534 01:28:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid f675aab2-5fe6-4e14-b864-8d5e9da28a83 00:17:45.534 01:28:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:17:45.534 01:28:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g F675AAB25FE64E14B8648D5E9DA28A83 -i 00:17:45.792 01:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 3b6ac7ba-d218-4dc3-86cf-0b3901466ac1 00:17:45.792 01:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:17:45.792 01:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 3B6AC7BAD2184DC386CF0B3901466AC1 -i 00:17:46.049 01:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:46.307 01:28:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:17:46.565 01:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:46.565 01:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:46.823 nvme0n1 00:17:46.823 01:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:46.823 01:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:47.389 nvme1n2 00:17:47.389 01:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:17:47.389 01:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:17:47.389 01:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:47.389 01:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:17:47.389 01:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:17:47.647 01:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:17:47.647 01:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:17:47.647 01:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:17:47.647 01:28:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:17:47.905 01:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ f675aab2-5fe6-4e14-b864-8d5e9da28a83 == \f\6\7\5\a\a\b\2\-\5\f\e\6\-\4\e\1\4\-\b\8\6\4\-\8\d\5\e\9\d\a\2\8\a\8\3 ]] 00:17:47.905 01:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:17:47.905 01:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:17:47.905 01:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:17:48.163 01:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
3b6ac7ba-d218-4dc3-86cf-0b3901466ac1 == \3\b\6\a\c\7\b\a\-\d\2\1\8\-\4\d\c\3\-\8\6\c\f\-\0\b\3\9\0\1\4\6\6\a\c\1 ]] 00:17:48.163 01:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1586862 00:17:48.163 01:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1586862 ']' 00:17:48.163 01:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1586862 00:17:48.163 01:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:17:48.163 01:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:48.163 01:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1586862 00:17:48.163 01:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:48.163 01:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:48.163 01:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1586862' 00:17:48.163 killing process with pid 1586862 00:17:48.163 01:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1586862 00:17:48.163 01:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1586862 00:17:48.420 01:28:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:48.985 01:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:17:48.985 01:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:17:48.985 01:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:48.985 01:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:17:48.985 01:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:48.985 01:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:17:48.985 01:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:48.985 01:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:48.985 rmmod nvme_tcp 00:17:48.985 rmmod nvme_fabrics 00:17:48.985 rmmod nvme_keyring 00:17:48.985 01:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:48.985 01:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:17:48.985 01:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:17:48.985 01:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@515 -- # '[' -n 1585271 ']' 00:17:48.985 01:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # killprocess 1585271 00:17:48.985 01:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1585271 ']' 00:17:48.985 01:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1585271 00:17:48.985 01:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@955 -- # uname 00:17:48.985 01:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:48.985 01:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1585271 00:17:48.985 01:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:48.985 01:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:48.985 01:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1585271' 00:17:48.985 killing process with pid 1585271 00:17:48.985 01:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1585271 00:17:48.985 01:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1585271 00:17:49.243 01:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:49.243 01:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:17:49.243 01:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:17:49.244 01:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:17:49.244 01:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-save 00:17:49.244 01:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:17:49.244 01:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-restore 00:17:49.244 01:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:49.244 01:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:49.244 01:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:49.244 01:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:49.244 01:28:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:51.198 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:51.198 00:17:51.198 real 0m21.434s 00:17:51.198 user 0m28.660s 00:17:51.198 sys 0m4.079s 00:17:51.198 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:51.198 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:51.198 ************************************ 00:17:51.198 END TEST nvmf_ns_masking 00:17:51.198 ************************************ 00:17:51.198 01:28:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:17:51.198 01:28:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:51.198 01:28:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:51.198 01:28:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:51.198 01:28:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 
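The namespace-masking test that just finished reduces to a short rpc.py sequence. A hand-reproducible sketch using the NQNs and addresses from this run (paths shortened to scripts/rpc.py; the host-side SPDK app that ns_masking.sh starts is assumed to be listening on /var/tmp/host.sock):

  # target side: grant each host NQN access to exactly one namespace of cnode1
  scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2
  # host side: attach once per host NQN; each controller should expose only the namespace granted above
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1
  # verify: only nvme0n1 and nvme1n2 exist, and their UUIDs match the namespaces assigned to each host
  scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid'
  scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 | jq -r '.[].uuid'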
00:17:51.198 ************************************ 00:17:51.198 START TEST nvmf_nvme_cli 00:17:51.198 ************************************ 00:17:51.198 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:51.456 * Looking for test storage... 00:17:51.456 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:51.456 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:51.456 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:17:51.456 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:51.456 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:51.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.457 --rc genhtml_branch_coverage=1 00:17:51.457 --rc genhtml_function_coverage=1 00:17:51.457 --rc genhtml_legend=1 00:17:51.457 --rc geninfo_all_blocks=1 00:17:51.457 --rc geninfo_unexecuted_blocks=1 00:17:51.457 00:17:51.457 ' 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:51.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.457 --rc genhtml_branch_coverage=1 00:17:51.457 --rc genhtml_function_coverage=1 00:17:51.457 --rc genhtml_legend=1 00:17:51.457 --rc geninfo_all_blocks=1 00:17:51.457 --rc geninfo_unexecuted_blocks=1 00:17:51.457 00:17:51.457 ' 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:51.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.457 --rc genhtml_branch_coverage=1 00:17:51.457 --rc genhtml_function_coverage=1 00:17:51.457 --rc genhtml_legend=1 00:17:51.457 --rc geninfo_all_blocks=1 00:17:51.457 --rc geninfo_unexecuted_blocks=1 00:17:51.457 00:17:51.457 ' 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:51.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.457 --rc genhtml_branch_coverage=1 00:17:51.457 --rc genhtml_function_coverage=1 00:17:51.457 --rc genhtml_legend=1 00:17:51.457 --rc geninfo_all_blocks=1 00:17:51.457 --rc geninfo_unexecuted_blocks=1 00:17:51.457 00:17:51.457 ' 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
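The cmp_versions trace above is scripts/common.sh checking whether the installed lcov (1.15) is older than 2 so it can append the branch/function coverage flags. The comparison amounts to splitting both versions on dots and comparing numeric fields left to right; a simplified stand-in (not the actual helper) behaves like this:

  # returns success when $1 sorts strictly before $2, comparing numeric fields left to right
  version_lt() {
    local -a a b; local i
    IFS=.- read -ra a <<< "$1"
    IFS=.- read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1  # equal versions are not "less than"
  }
  version_lt 1.15 2 && LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'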
00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:51.457 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:51.457 01:28:36 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:51.457 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:17:51.458 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:17:51.458 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:17:51.458 01:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:53.989 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:53.989 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:53.989 
01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:53.989 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:53.989 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # is_hw=yes 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:53.989 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:53.990 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:53.990 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:53.990 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:53.990 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:53.990 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:53.990 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:53.990 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:53.990 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:53.990 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:53.990 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:53.990 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:53.990 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:53.990 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:53.990 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:53.990 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:53.990 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:53.990 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:53.990 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:53.990 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:53.990 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:53.990 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:53.990 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:17:53.990 00:17:53.990 --- 10.0.0.2 ping statistics --- 00:17:53.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.990 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:17:53.990 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:53.990 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:53.990 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:17:53.990 00:17:53.990 --- 10.0.0.1 ping statistics --- 00:17:53.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.990 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:17:53.990 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:53.990 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # return 0 00:17:53.990 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:53.990 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:53.990 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:53.990 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:53.990 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:53.990 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:53.990 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:53.990 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:17:53.990 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:53.990 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:53.990 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:53.990 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # nvmfpid=1589483 00:17:53.990 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:53.990 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # waitforlisten 1589483 00:17:53.990 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 1589483 ']' 00:17:53.990 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.990 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:53.990 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:53.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:53.990 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:53.990 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:53.990 [2024-10-13 01:28:39.322391] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
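Everything from the e810 port discovery above through the two pings is nvmf_tcp_init carving the target-side port out into its own network namespace, so the TCP test gets a real initiator/target split on one machine. Condensed, with the device and address names from this run (the waitforlisten RPC poll at the end is paraphrased, not the literal helper):

  # move the target-side port into a namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
  # launch nvmf_tgt inside the namespace, then poll its RPC socket before configuring it
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done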
00:17:53.990 [2024-10-13 01:28:39.322496] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:53.990 [2024-10-13 01:28:39.393487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:53.990 [2024-10-13 01:28:39.445068] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:53.990 [2024-10-13 01:28:39.445125] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:53.990 [2024-10-13 01:28:39.445139] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:53.990 [2024-10-13 01:28:39.445150] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:53.990 [2024-10-13 01:28:39.445160] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:53.990 [2024-10-13 01:28:39.446724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:53.990 [2024-10-13 01:28:39.446750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:53.990 [2024-10-13 01:28:39.446774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:53.990 [2024-10-13 01:28:39.446777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.248 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:54.248 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:17:54.248 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:54.248 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:54.248 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:54.248 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:54.248 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:54.248 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.248 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:54.248 [2024-10-13 01:28:39.602592] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:54.248 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.248 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:54.248 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.248 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:54.248 Malloc0 00:17:54.248 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.248 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:54.248 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
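Around this point the nvme_cli test builds up the target configuration over RPC and then drives it from the kernel nvme-cli initiator. Condensed into the underlying commands, with the values used in this run (paths shortened; the test additionally passes the --hostnqn/--hostid pair generated by nvme gen-hostnqn to the discover and connect calls):

  # target side: TCP transport, two 64 MiB / 512-byte-block malloc bdevs, one subsystem exposing both, plus listeners
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # initiator side: discover, connect, check that two namespaces appear, then tear down
  nvme discover -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  nvme list
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1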
00:17:54.248 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:54.248 Malloc1 00:17:54.248 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.248 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:17:54.248 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.248 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:54.248 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.248 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:54.248 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.248 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:54.248 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.248 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:54.248 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.248 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:54.248 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.248 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:54.249 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.249 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:54.249 [2024-10-13 01:28:39.695748] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:54.249 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.249 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:54.249 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.249 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:54.249 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.249 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:17:54.506 00:17:54.506 Discovery Log Number of Records 2, Generation counter 2 00:17:54.506 =====Discovery Log Entry 0====== 00:17:54.507 trtype: tcp 00:17:54.507 adrfam: ipv4 00:17:54.507 subtype: current discovery subsystem 00:17:54.507 treq: not required 00:17:54.507 portid: 0 00:17:54.507 trsvcid: 4420 00:17:54.507 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:17:54.507 traddr: 10.0.0.2 00:17:54.507 eflags: explicit discovery connections, duplicate discovery information 00:17:54.507 sectype: none 00:17:54.507 =====Discovery Log Entry 1====== 00:17:54.507 trtype: tcp 00:17:54.507 adrfam: ipv4 00:17:54.507 subtype: nvme subsystem 00:17:54.507 treq: not required 00:17:54.507 portid: 0 00:17:54.507 trsvcid: 4420 00:17:54.507 subnqn: nqn.2016-06.io.spdk:cnode1 00:17:54.507 traddr: 10.0.0.2 00:17:54.507 eflags: none 00:17:54.507 sectype: none 00:17:54.507 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:17:54.507 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:17:54.507 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:17:54.507 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:54.507 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:17:54.507 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:17:54.507 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:54.507 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:17:54.507 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:54.507 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:17:54.507 01:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:55.072 01:28:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:55.072 01:28:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:17:55.072 01:28:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:55.072 01:28:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:17:55.072 01:28:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:17:55.072 01:28:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:17:56.970 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:56.970 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:56.970 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:56.970 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:17:56.970 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:56.970 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:17:56.970 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:17:56.970 01:28:42 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:17:56.970 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:56.970 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:17:56.970 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:17:56.970 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:56.970 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:17:56.970 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:56.970 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:56.970 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:17:56.970 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:56.970 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:56.970 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:17:56.970 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:56.970 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:17:56.970 /dev/nvme0n2 ]] 00:17:56.970 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:17:56.970 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:17:56.970 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:17:56.970 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:56.970 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:17:56.970 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:17:56.970 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:56.970 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:17:56.970 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:56.970 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:56.970 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:17:56.970 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:56.970 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:56.970 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:17:56.970 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:17:56.970 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:17:56.970 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:57.228 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:57.228 01:28:42 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:57.228 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:17:57.228 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:57.228 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:57.228 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:57.228 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:57.228 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:17:57.228 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:17:57.228 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:57.228 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.228 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:57.228 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.228 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:17:57.228 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:17:57.228 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:57.228 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:17:57.228 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:57.228 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:17:57.228 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:57.228 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:57.228 rmmod nvme_tcp 00:17:57.228 rmmod nvme_fabrics 00:17:57.228 rmmod nvme_keyring 00:17:57.228 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:57.228 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:17:57.228 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:17:57.228 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@515 -- # '[' -n 1589483 ']' 00:17:57.228 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # killprocess 1589483 00:17:57.228 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 1589483 ']' 00:17:57.228 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 1589483 00:17:57.228 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:17:57.228 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:57.228 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
1589483 00:17:57.228 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:57.228 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:57.228 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1589483' 00:17:57.228 killing process with pid 1589483 00:17:57.228 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 1589483 00:17:57.228 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 1589483 00:17:57.487 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:57.487 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:17:57.487 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:17:57.487 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:17:57.487 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-save 00:17:57.487 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:17:57.488 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-restore 00:17:57.488 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:57.488 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:57.488 01:28:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:57.488 01:28:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:57.488 01:28:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:00.020 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:00.021 00:18:00.021 real 0m8.277s 00:18:00.021 user 0m14.829s 00:18:00.021 sys 0m2.325s 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:00.021 ************************************ 00:18:00.021 END TEST nvmf_nvme_cli 00:18:00.021 ************************************ 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:00.021 ************************************ 00:18:00.021 START TEST nvmf_vfio_user 00:18:00.021 ************************************ 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:18:00.021 * Looking for test storage... 00:18:00.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lcov --version 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:00.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.021 --rc genhtml_branch_coverage=1 00:18:00.021 --rc genhtml_function_coverage=1 00:18:00.021 --rc genhtml_legend=1 00:18:00.021 --rc geninfo_all_blocks=1 00:18:00.021 --rc geninfo_unexecuted_blocks=1 00:18:00.021 00:18:00.021 ' 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:00.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.021 --rc genhtml_branch_coverage=1 00:18:00.021 --rc genhtml_function_coverage=1 00:18:00.021 --rc genhtml_legend=1 00:18:00.021 --rc geninfo_all_blocks=1 00:18:00.021 --rc geninfo_unexecuted_blocks=1 00:18:00.021 00:18:00.021 ' 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:00.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.021 --rc genhtml_branch_coverage=1 00:18:00.021 --rc genhtml_function_coverage=1 00:18:00.021 --rc genhtml_legend=1 00:18:00.021 --rc geninfo_all_blocks=1 00:18:00.021 --rc geninfo_unexecuted_blocks=1 00:18:00.021 00:18:00.021 ' 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:00.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.021 --rc genhtml_branch_coverage=1 00:18:00.021 --rc genhtml_function_coverage=1 00:18:00.021 --rc genhtml_legend=1 00:18:00.021 --rc geninfo_all_blocks=1 00:18:00.021 --rc geninfo_unexecuted_blocks=1 00:18:00.021 00:18:00.021 ' 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:00.021 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1590294 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1590294' 00:18:00.021 Process pid: 1590294 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1590294 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1590294 ']' 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:00.021 [2024-10-13 01:28:45.306636] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:18:00.021 [2024-10-13 01:28:45.306724] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.021 [2024-10-13 01:28:45.371538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:00.021 [2024-10-13 01:28:45.421909] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:00.021 [2024-10-13 01:28:45.421971] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:00.021 [2024-10-13 01:28:45.421988] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:00.021 [2024-10-13 01:28:45.422002] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:00.021 [2024-10-13 01:28:45.422014] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:00.021 [2024-10-13 01:28:45.426495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:00.021 [2024-10-13 01:28:45.426544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:00.021 [2024-10-13 01:28:45.426628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:00.021 [2024-10-13 01:28:45.426631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:18:00.021 01:28:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:01.393 01:28:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:18:01.393 01:28:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:01.393 01:28:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:01.393 01:28:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:01.393 01:28:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:01.393 01:28:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:01.958 Malloc1 00:18:01.958 01:28:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:01.958 01:28:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:02.215 01:28:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:02.780 01:28:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:02.780 01:28:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:02.780 01:28:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:02.780 Malloc2 00:18:02.780 01:28:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
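The vfio-user target setup that the trace above performs (and repeats just below for Malloc2/cnode2) comes down to one transport plus, per device, a socket directory, a malloc bdev, a subsystem, a namespace, and a listener. The lines below are only a condensed restatement of the rpc.py calls already visible in the trace; $rpc abbreviates the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path and is an editorial shorthand, not a variable the test defines.

    $rpc nvmf_create_transport -t VFIOUSER                        # vfio-user transport instead of TCP/RDMA
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1               # directory that will hold the vfio-user socket
    $rpc bdev_malloc_create 64 512 -b Malloc1                     # 64 MB RAM-backed bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
        -a /var/run/vfio-user/domain/vfio-user1/1 -s 0            # listener address is the socket directory, not an IP:port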
00:18:03.345 01:28:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:03.345 01:28:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:03.603 01:28:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:18:03.603 01:28:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:18:03.603 01:28:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:03.603 01:28:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:03.603 01:28:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:18:03.603 01:28:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:03.603 [2024-10-13 01:28:49.170807] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:18:03.603 [2024-10-13 01:28:49.170862] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1590719 ] 00:18:03.862 [2024-10-13 01:28:49.202066] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:18:03.862 [2024-10-13 01:28:49.215009] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:03.862 [2024-10-13 01:28:49.215037] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fd6710e2000 00:18:03.862 [2024-10-13 01:28:49.215999] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:03.862 [2024-10-13 01:28:49.216987] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:03.862 [2024-10-13 01:28:49.217991] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:03.862 [2024-10-13 01:28:49.219000] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:03.862 [2024-10-13 01:28:49.220000] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:03.862 [2024-10-13 01:28:49.221006] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:03.862 [2024-10-13 01:28:49.222013] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:18:03.862 [2024-10-13 01:28:49.223017] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:03.862 [2024-10-13 01:28:49.224027] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:03.862 [2024-10-13 01:28:49.224047] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fd66fdda000 00:18:03.862 [2024-10-13 01:28:49.225178] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:03.862 [2024-10-13 01:28:49.240840] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:18:03.862 [2024-10-13 01:28:49.240885] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:18:03.862 [2024-10-13 01:28:49.243139] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:03.862 [2024-10-13 01:28:49.243192] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:03.862 [2024-10-13 01:28:49.243293] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:18:03.862 [2024-10-13 01:28:49.243328] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:18:03.862 [2024-10-13 01:28:49.243340] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:18:03.862 [2024-10-13 01:28:49.244126] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:18:03.862 [2024-10-13 01:28:49.244147] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:18:03.862 [2024-10-13 01:28:49.244159] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:18:03.862 [2024-10-13 01:28:49.245136] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:03.862 [2024-10-13 01:28:49.245156] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:18:03.862 [2024-10-13 01:28:49.245178] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:18:03.862 [2024-10-13 01:28:49.246137] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:18:03.862 [2024-10-13 01:28:49.246156] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:03.862 [2024-10-13 01:28:49.247149] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:18:03.862 [2024-10-13 
01:28:49.247168] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:18:03.863 [2024-10-13 01:28:49.247176] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:18:03.863 [2024-10-13 01:28:49.247187] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:03.863 [2024-10-13 01:28:49.247296] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:18:03.863 [2024-10-13 01:28:49.247304] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:03.863 [2024-10-13 01:28:49.247313] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:18:03.863 [2024-10-13 01:28:49.248153] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:18:03.863 [2024-10-13 01:28:49.249157] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:18:03.863 [2024-10-13 01:28:49.250163] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:03.863 [2024-10-13 01:28:49.251157] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:03.863 [2024-10-13 01:28:49.251255] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:03.863 [2024-10-13 01:28:49.252169] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:18:03.863 [2024-10-13 01:28:49.252186] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:03.863 [2024-10-13 01:28:49.252195] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:18:03.863 [2024-10-13 01:28:49.252219] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:18:03.863 [2024-10-13 01:28:49.252232] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:18:03.863 [2024-10-13 01:28:49.252268] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:03.863 [2024-10-13 01:28:49.252278] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:03.863 [2024-10-13 01:28:49.252285] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:03.863 [2024-10-13 01:28:49.252307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:03.863 [2024-10-13 01:28:49.252379] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:03.863 [2024-10-13 01:28:49.252402] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:18:03.863 [2024-10-13 01:28:49.252411] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:18:03.863 [2024-10-13 01:28:49.252418] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:18:03.863 [2024-10-13 01:28:49.252425] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:03.863 [2024-10-13 01:28:49.252433] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:18:03.863 [2024-10-13 01:28:49.252441] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:18:03.863 [2024-10-13 01:28:49.252448] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:18:03.863 [2024-10-13 01:28:49.252487] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:18:03.863 [2024-10-13 01:28:49.252504] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:03.863 [2024-10-13 01:28:49.252528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:03.863 [2024-10-13 01:28:49.252547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:03.863 [2024-10-13 01:28:49.252559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:03.863 [2024-10-13 01:28:49.252571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:03.863 [2024-10-13 01:28:49.252583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:03.863 [2024-10-13 01:28:49.252592] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:18:03.863 [2024-10-13 01:28:49.252608] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:03.863 [2024-10-13 01:28:49.252622] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:03.863 [2024-10-13 01:28:49.252634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:03.863 [2024-10-13 01:28:49.252646] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:18:03.863 [2024-10-13 01:28:49.252654] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:03.863 [2024-10-13 01:28:49.252665] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:18:03.863 [2024-10-13 01:28:49.252680] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:18:03.863 [2024-10-13 01:28:49.252694] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:03.863 [2024-10-13 01:28:49.252706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:03.863 [2024-10-13 01:28:49.252787] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:18:03.863 [2024-10-13 01:28:49.252807] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:18:03.863 [2024-10-13 01:28:49.252821] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:03.863 [2024-10-13 01:28:49.252829] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:03.863 [2024-10-13 01:28:49.252835] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:03.863 [2024-10-13 01:28:49.252844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:03.863 [2024-10-13 01:28:49.252862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:03.863 [2024-10-13 01:28:49.252887] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:18:03.863 [2024-10-13 01:28:49.252904] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:18:03.863 [2024-10-13 01:28:49.252919] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:18:03.863 [2024-10-13 01:28:49.252930] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:03.863 [2024-10-13 01:28:49.252938] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:03.863 [2024-10-13 01:28:49.252944] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:03.863 [2024-10-13 01:28:49.252953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:03.863 [2024-10-13 01:28:49.252988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:03.863 [2024-10-13 01:28:49.253011] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:03.863 [2024-10-13 01:28:49.253026] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:03.863 [2024-10-13 01:28:49.253038] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:03.863 [2024-10-13 01:28:49.253045] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:03.863 [2024-10-13 01:28:49.253051] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:03.863 [2024-10-13 01:28:49.253060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:03.863 [2024-10-13 01:28:49.253073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:03.863 [2024-10-13 01:28:49.253088] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:03.863 [2024-10-13 01:28:49.253098] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:18:03.863 [2024-10-13 01:28:49.253112] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:18:03.863 [2024-10-13 01:28:49.253123] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:18:03.863 [2024-10-13 01:28:49.253132] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:03.863 [2024-10-13 01:28:49.253143] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:18:03.863 [2024-10-13 01:28:49.253153] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:18:03.863 [2024-10-13 01:28:49.253160] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:18:03.863 [2024-10-13 01:28:49.253169] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:18:03.863 [2024-10-13 01:28:49.253196] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:03.863 [2024-10-13 01:28:49.253211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:03.863 [2024-10-13 01:28:49.253228] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:03.863 [2024-10-13 01:28:49.253242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:03.863 [2024-10-13 01:28:49.253258] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:03.863 [2024-10-13 01:28:49.253269] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:03.863 [2024-10-13 01:28:49.253284] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:03.863 [2024-10-13 01:28:49.253298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:03.863 [2024-10-13 01:28:49.253320] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:03.863 [2024-10-13 01:28:49.253330] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:03.864 [2024-10-13 01:28:49.253336] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:03.864 [2024-10-13 01:28:49.253342] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:03.864 [2024-10-13 01:28:49.253348] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:03.864 [2024-10-13 01:28:49.253357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:03.864 [2024-10-13 01:28:49.253368] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:03.864 [2024-10-13 01:28:49.253376] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:03.864 [2024-10-13 01:28:49.253382] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:03.864 [2024-10-13 01:28:49.253390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:03.864 [2024-10-13 01:28:49.253401] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:03.864 [2024-10-13 01:28:49.253408] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:03.864 [2024-10-13 01:28:49.253414] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:03.864 [2024-10-13 01:28:49.253422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:03.864 [2024-10-13 01:28:49.253434] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:03.864 [2024-10-13 01:28:49.253441] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:03.864 [2024-10-13 01:28:49.253465] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:03.864 [2024-10-13 01:28:49.253483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:03.864 [2024-10-13 01:28:49.253496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:03.864 [2024-10-13 01:28:49.253517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:03.864 [2024-10-13 01:28:49.253538] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:03.864 [2024-10-13 01:28:49.253550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:03.864 ===================================================== 00:18:03.864 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:03.864 ===================================================== 00:18:03.864 Controller Capabilities/Features 00:18:03.864 ================================ 00:18:03.864 Vendor ID: 4e58 00:18:03.864 Subsystem Vendor ID: 4e58 00:18:03.864 Serial Number: SPDK1 00:18:03.864 Model Number: SPDK bdev Controller 00:18:03.864 Firmware Version: 25.01 00:18:03.864 Recommended Arb Burst: 6 00:18:03.864 IEEE OUI Identifier: 8d 6b 50 00:18:03.864 Multi-path I/O 00:18:03.864 May have multiple subsystem ports: Yes 00:18:03.864 May have multiple controllers: Yes 00:18:03.864 Associated with SR-IOV VF: No 00:18:03.864 Max Data Transfer Size: 131072 00:18:03.864 Max Number of Namespaces: 32 00:18:03.864 Max Number of I/O Queues: 127 00:18:03.864 NVMe Specification Version (VS): 1.3 00:18:03.864 NVMe Specification Version (Identify): 1.3 00:18:03.864 Maximum Queue Entries: 256 00:18:03.864 Contiguous Queues Required: Yes 00:18:03.864 Arbitration Mechanisms Supported 00:18:03.864 Weighted Round Robin: Not Supported 00:18:03.864 Vendor Specific: Not Supported 00:18:03.864 Reset Timeout: 15000 ms 00:18:03.864 Doorbell Stride: 4 bytes 00:18:03.864 NVM Subsystem Reset: Not Supported 00:18:03.864 Command Sets Supported 00:18:03.864 NVM Command Set: Supported 00:18:03.864 Boot Partition: Not Supported 00:18:03.864 Memory Page Size Minimum: 4096 bytes 00:18:03.864 Memory Page Size Maximum: 4096 bytes 00:18:03.864 Persistent Memory Region: Not Supported 00:18:03.864 Optional Asynchronous Events Supported 00:18:03.864 Namespace Attribute Notices: Supported 00:18:03.864 Firmware Activation Notices: Not Supported 00:18:03.864 ANA Change Notices: Not Supported 00:18:03.864 PLE Aggregate Log Change Notices: Not Supported 00:18:03.864 LBA Status Info Alert Notices: Not Supported 00:18:03.864 EGE Aggregate Log Change Notices: Not Supported 00:18:03.864 Normal NVM Subsystem Shutdown event: Not Supported 00:18:03.864 Zone Descriptor Change Notices: Not Supported 00:18:03.864 Discovery Log Change Notices: Not Supported 00:18:03.864 Controller Attributes 00:18:03.864 128-bit Host Identifier: Supported 00:18:03.864 Non-Operational Permissive Mode: Not Supported 00:18:03.864 NVM Sets: Not Supported 00:18:03.864 Read Recovery Levels: Not Supported 00:18:03.864 Endurance Groups: Not Supported 00:18:03.864 Predictable Latency Mode: Not Supported 00:18:03.864 Traffic Based Keep ALive: Not Supported 00:18:03.864 Namespace Granularity: Not Supported 00:18:03.864 SQ Associations: Not Supported 00:18:03.864 UUID List: Not Supported 00:18:03.864 Multi-Domain Subsystem: Not Supported 00:18:03.864 Fixed Capacity Management: Not Supported 00:18:03.864 Variable Capacity Management: Not Supported 00:18:03.864 Delete Endurance Group: Not Supported 00:18:03.864 Delete NVM Set: Not Supported 00:18:03.864 Extended LBA Formats Supported: Not Supported 00:18:03.864 Flexible Data Placement Supported: Not Supported 00:18:03.864 00:18:03.864 Controller Memory Buffer Support 00:18:03.864 ================================ 00:18:03.864 Supported: No 00:18:03.864 00:18:03.864 Persistent Memory Region Support 00:18:03.864 
================================ 00:18:03.864 Supported: No 00:18:03.864 00:18:03.864 Admin Command Set Attributes 00:18:03.864 ============================ 00:18:03.864 Security Send/Receive: Not Supported 00:18:03.864 Format NVM: Not Supported 00:18:03.864 Firmware Activate/Download: Not Supported 00:18:03.864 Namespace Management: Not Supported 00:18:03.864 Device Self-Test: Not Supported 00:18:03.864 Directives: Not Supported 00:18:03.864 NVMe-MI: Not Supported 00:18:03.864 Virtualization Management: Not Supported 00:18:03.864 Doorbell Buffer Config: Not Supported 00:18:03.864 Get LBA Status Capability: Not Supported 00:18:03.864 Command & Feature Lockdown Capability: Not Supported 00:18:03.864 Abort Command Limit: 4 00:18:03.864 Async Event Request Limit: 4 00:18:03.864 Number of Firmware Slots: N/A 00:18:03.864 Firmware Slot 1 Read-Only: N/A 00:18:03.864 Firmware Activation Without Reset: N/A 00:18:03.864 Multiple Update Detection Support: N/A 00:18:03.864 Firmware Update Granularity: No Information Provided 00:18:03.864 Per-Namespace SMART Log: No 00:18:03.864 Asymmetric Namespace Access Log Page: Not Supported 00:18:03.864 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:18:03.864 Command Effects Log Page: Supported 00:18:03.864 Get Log Page Extended Data: Supported 00:18:03.864 Telemetry Log Pages: Not Supported 00:18:03.864 Persistent Event Log Pages: Not Supported 00:18:03.864 Supported Log Pages Log Page: May Support 00:18:03.864 Commands Supported & Effects Log Page: Not Supported 00:18:03.864 Feature Identifiers & Effects Log Page:May Support 00:18:03.864 NVMe-MI Commands & Effects Log Page: May Support 00:18:03.864 Data Area 4 for Telemetry Log: Not Supported 00:18:03.864 Error Log Page Entries Supported: 128 00:18:03.864 Keep Alive: Supported 00:18:03.864 Keep Alive Granularity: 10000 ms 00:18:03.864 00:18:03.864 NVM Command Set Attributes 00:18:03.864 ========================== 00:18:03.864 Submission Queue Entry Size 00:18:03.864 Max: 64 00:18:03.864 Min: 64 00:18:03.864 Completion Queue Entry Size 00:18:03.864 Max: 16 00:18:03.864 Min: 16 00:18:03.864 Number of Namespaces: 32 00:18:03.864 Compare Command: Supported 00:18:03.864 Write Uncorrectable Command: Not Supported 00:18:03.864 Dataset Management Command: Supported 00:18:03.864 Write Zeroes Command: Supported 00:18:03.864 Set Features Save Field: Not Supported 00:18:03.864 Reservations: Not Supported 00:18:03.864 Timestamp: Not Supported 00:18:03.864 Copy: Supported 00:18:03.864 Volatile Write Cache: Present 00:18:03.864 Atomic Write Unit (Normal): 1 00:18:03.864 Atomic Write Unit (PFail): 1 00:18:03.864 Atomic Compare & Write Unit: 1 00:18:03.864 Fused Compare & Write: Supported 00:18:03.864 Scatter-Gather List 00:18:03.864 SGL Command Set: Supported (Dword aligned) 00:18:03.864 SGL Keyed: Not Supported 00:18:03.864 SGL Bit Bucket Descriptor: Not Supported 00:18:03.864 SGL Metadata Pointer: Not Supported 00:18:03.864 Oversized SGL: Not Supported 00:18:03.864 SGL Metadata Address: Not Supported 00:18:03.864 SGL Offset: Not Supported 00:18:03.864 Transport SGL Data Block: Not Supported 00:18:03.864 Replay Protected Memory Block: Not Supported 00:18:03.864 00:18:03.864 Firmware Slot Information 00:18:03.864 ========================= 00:18:03.864 Active slot: 1 00:18:03.864 Slot 1 Firmware Revision: 25.01 00:18:03.864 00:18:03.864 00:18:03.864 Commands Supported and Effects 00:18:03.864 ============================== 00:18:03.864 Admin Commands 00:18:03.864 -------------- 00:18:03.864 Get Log Page (02h): Supported 
00:18:03.864 Identify (06h): Supported 00:18:03.864 Abort (08h): Supported 00:18:03.864 Set Features (09h): Supported 00:18:03.864 Get Features (0Ah): Supported 00:18:03.864 Asynchronous Event Request (0Ch): Supported 00:18:03.864 Keep Alive (18h): Supported 00:18:03.864 I/O Commands 00:18:03.864 ------------ 00:18:03.864 Flush (00h): Supported LBA-Change 00:18:03.864 Write (01h): Supported LBA-Change 00:18:03.864 Read (02h): Supported 00:18:03.864 Compare (05h): Supported 00:18:03.864 Write Zeroes (08h): Supported LBA-Change 00:18:03.865 Dataset Management (09h): Supported LBA-Change 00:18:03.865 Copy (19h): Supported LBA-Change 00:18:03.865 00:18:03.865 Error Log 00:18:03.865 ========= 00:18:03.865 00:18:03.865 Arbitration 00:18:03.865 =========== 00:18:03.865 Arbitration Burst: 1 00:18:03.865 00:18:03.865 Power Management 00:18:03.865 ================ 00:18:03.865 Number of Power States: 1 00:18:03.865 Current Power State: Power State #0 00:18:03.865 Power State #0: 00:18:03.865 Max Power: 0.00 W 00:18:03.865 Non-Operational State: Operational 00:18:03.865 Entry Latency: Not Reported 00:18:03.865 Exit Latency: Not Reported 00:18:03.865 Relative Read Throughput: 0 00:18:03.865 Relative Read Latency: 0 00:18:03.865 Relative Write Throughput: 0 00:18:03.865 Relative Write Latency: 0 00:18:03.865 Idle Power: Not Reported 00:18:03.865 Active Power: Not Reported 00:18:03.865 Non-Operational Permissive Mode: Not Supported 00:18:03.865 00:18:03.865 Health Information 00:18:03.865 ================== 00:18:03.865 Critical Warnings: 00:18:03.865 Available Spare Space: OK 00:18:03.865 Temperature: OK 00:18:03.865 Device Reliability: OK 00:18:03.865 Read Only: No 00:18:03.865 Volatile Memory Backup: OK 00:18:03.865 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:03.865 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:03.865 Available Spare: 0% 00:18:03.865 Available Sp[2024-10-13 01:28:49.253667] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:03.865 [2024-10-13 01:28:49.253683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:03.865 [2024-10-13 01:28:49.253730] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:18:03.865 [2024-10-13 01:28:49.253749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.865 [2024-10-13 01:28:49.253761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.865 [2024-10-13 01:28:49.253785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.865 [2024-10-13 01:28:49.253794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.865 [2024-10-13 01:28:49.256483] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:03.865 [2024-10-13 01:28:49.256506] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:18:03.865 [2024-10-13 01:28:49.257195] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling 
controller 00:18:03.865 [2024-10-13 01:28:49.257268] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:18:03.865 [2024-10-13 01:28:49.257281] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:18:03.865 [2024-10-13 01:28:49.258203] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:18:03.865 [2024-10-13 01:28:49.258226] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:18:03.865 [2024-10-13 01:28:49.258288] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:18:03.865 [2024-10-13 01:28:49.261480] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:03.865 are Threshold: 0% 00:18:03.865 Life Percentage Used: 0% 00:18:03.865 Data Units Read: 0 00:18:03.865 Data Units Written: 0 00:18:03.865 Host Read Commands: 0 00:18:03.865 Host Write Commands: 0 00:18:03.865 Controller Busy Time: 0 minutes 00:18:03.865 Power Cycles: 0 00:18:03.865 Power On Hours: 0 hours 00:18:03.865 Unsafe Shutdowns: 0 00:18:03.865 Unrecoverable Media Errors: 0 00:18:03.865 Lifetime Error Log Entries: 0 00:18:03.865 Warning Temperature Time: 0 minutes 00:18:03.865 Critical Temperature Time: 0 minutes 00:18:03.865 00:18:03.865 Number of Queues 00:18:03.865 ================ 00:18:03.865 Number of I/O Submission Queues: 127 00:18:03.865 Number of I/O Completion Queues: 127 00:18:03.865 00:18:03.865 Active Namespaces 00:18:03.865 ================= 00:18:03.865 Namespace ID:1 00:18:03.865 Error Recovery Timeout: Unlimited 00:18:03.865 Command Set Identifier: NVM (00h) 00:18:03.865 Deallocate: Supported 00:18:03.865 Deallocated/Unwritten Error: Not Supported 00:18:03.865 Deallocated Read Value: Unknown 00:18:03.865 Deallocate in Write Zeroes: Not Supported 00:18:03.865 Deallocated Guard Field: 0xFFFF 00:18:03.865 Flush: Supported 00:18:03.865 Reservation: Supported 00:18:03.865 Namespace Sharing Capabilities: Multiple Controllers 00:18:03.865 Size (in LBAs): 131072 (0GiB) 00:18:03.865 Capacity (in LBAs): 131072 (0GiB) 00:18:03.865 Utilization (in LBAs): 131072 (0GiB) 00:18:03.865 NGUID: B62CEC0E16A34DAFAF958B0C29BC506E 00:18:03.865 UUID: b62cec0e-16a3-4daf-af95-8b0c29bc506e 00:18:03.865 Thin Provisioning: Not Supported 00:18:03.865 Per-NS Atomic Units: Yes 00:18:03.865 Atomic Boundary Size (Normal): 0 00:18:03.865 Atomic Boundary Size (PFail): 0 00:18:03.865 Atomic Boundary Offset: 0 00:18:03.865 Maximum Single Source Range Length: 65535 00:18:03.865 Maximum Copy Length: 65535 00:18:03.865 Maximum Source Range Count: 1 00:18:03.865 NGUID/EUI64 Never Reused: No 00:18:03.865 Namespace Write Protected: No 00:18:03.865 Number of LBA Formats: 1 00:18:03.865 Current LBA Format: LBA Format #00 00:18:03.865 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:03.865 00:18:03.865 01:28:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:04.123 [2024-10-13 01:28:49.492307] vfio_user.c:2836:enable_ctrlr: *NOTICE*: 
/var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:09.386 Initializing NVMe Controllers 00:18:09.386 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:09.386 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:09.386 Initialization complete. Launching workers. 00:18:09.386 ======================================================== 00:18:09.386 Latency(us) 00:18:09.386 Device Information : IOPS MiB/s Average min max 00:18:09.386 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 33588.09 131.20 3810.22 1177.43 9636.62 00:18:09.386 ======================================================== 00:18:09.386 Total : 33588.09 131.20 3810.22 1177.43 9636.62 00:18:09.386 00:18:09.386 [2024-10-13 01:28:54.514432] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:09.386 01:28:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:09.386 [2024-10-13 01:28:54.757608] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:14.648 Initializing NVMe Controllers 00:18:14.648 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:14.648 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:14.648 Initialization complete. Launching workers. 00:18:14.648 ======================================================== 00:18:14.648 Latency(us) 00:18:14.648 Device Information : IOPS MiB/s Average min max 00:18:14.648 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15999.06 62.50 8005.73 5949.05 15808.21 00:18:14.648 ======================================================== 00:18:14.648 Total : 15999.06 62.50 8005.73 5949.05 15808.21 00:18:14.648 00:18:14.648 [2024-10-13 01:28:59.796560] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:14.648 01:28:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:14.648 [2024-10-13 01:29:00.002674] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:19.911 [2024-10-13 01:29:05.066803] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:19.911 Initializing NVMe Controllers 00:18:19.911 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:19.911 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:19.911 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:18:19.911 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:18:19.911 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:18:19.911 Initialization complete. Launching workers. 
00:18:19.911 Starting thread on core 2 00:18:19.911 Starting thread on core 3 00:18:19.911 Starting thread on core 1 00:18:19.911 01:29:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:18:19.911 [2024-10-13 01:29:05.374388] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:24.093 [2024-10-13 01:29:09.104797] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:24.093 Initializing NVMe Controllers 00:18:24.093 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:24.093 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:24.093 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:18:24.093 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:18:24.093 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:18:24.093 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:18:24.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:24.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:24.093 Initialization complete. Launching workers. 00:18:24.093 Starting thread on core 1 with urgent priority queue 00:18:24.093 Starting thread on core 2 with urgent priority queue 00:18:24.093 Starting thread on core 3 with urgent priority queue 00:18:24.093 Starting thread on core 0 with urgent priority queue 00:18:24.093 SPDK bdev Controller (SPDK1 ) core 0: 1125.00 IO/s 88.89 secs/100000 ios 00:18:24.093 SPDK bdev Controller (SPDK1 ) core 1: 1205.00 IO/s 82.99 secs/100000 ios 00:18:24.093 SPDK bdev Controller (SPDK1 ) core 2: 1257.67 IO/s 79.51 secs/100000 ios 00:18:24.093 SPDK bdev Controller (SPDK1 ) core 3: 1144.67 IO/s 87.36 secs/100000 ios 00:18:24.093 ======================================================== 00:18:24.093 00:18:24.093 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:24.093 [2024-10-13 01:29:09.395031] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:24.093 Initializing NVMe Controllers 00:18:24.093 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:24.093 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:24.093 Namespace ID: 1 size: 0GB 00:18:24.093 Initialization complete. 00:18:24.093 INFO: using host memory buffer for IO 00:18:24.093 Hello world! 
00:18:24.093 [2024-10-13 01:29:09.429759] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:24.093 01:29:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:24.351 [2024-10-13 01:29:09.725927] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:25.285 Initializing NVMe Controllers 00:18:25.285 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:25.285 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:25.285 Initialization complete. Launching workers. 00:18:25.285 submit (in ns) avg, min, max = 7633.4, 3487.8, 4037936.7 00:18:25.285 complete (in ns) avg, min, max = 25709.8, 2081.1, 4020896.7 00:18:25.285 00:18:25.285 Submit histogram 00:18:25.285 ================ 00:18:25.285 Range in us Cumulative Count 00:18:25.285 3.484 - 3.508: 0.0881% ( 11) 00:18:25.285 3.508 - 3.532: 0.7849% ( 87) 00:18:25.285 3.532 - 3.556: 2.2345% ( 181) 00:18:25.285 3.556 - 3.579: 5.5342% ( 412) 00:18:25.285 3.579 - 3.603: 10.8922% ( 669) 00:18:25.285 3.603 - 3.627: 17.9962% ( 887) 00:18:25.285 3.627 - 3.650: 24.6516% ( 831) 00:18:25.285 3.650 - 3.674: 30.2018% ( 693) 00:18:25.285 3.674 - 3.698: 35.6880% ( 685) 00:18:25.285 3.698 - 3.721: 42.7999% ( 888) 00:18:25.285 3.721 - 3.745: 48.1900% ( 673) 00:18:25.285 3.745 - 3.769: 53.0114% ( 602) 00:18:25.285 3.769 - 3.793: 57.2001% ( 523) 00:18:25.285 3.793 - 3.816: 61.1645% ( 495) 00:18:25.285 3.816 - 3.840: 65.1450% ( 497) 00:18:25.285 3.840 - 3.864: 69.6460% ( 562) 00:18:25.285 3.864 - 3.887: 73.8747% ( 528) 00:18:25.285 3.887 - 3.911: 76.9021% ( 378) 00:18:25.285 3.911 - 3.935: 79.9616% ( 382) 00:18:25.285 3.935 - 3.959: 82.3883% ( 303) 00:18:25.285 3.959 - 3.982: 84.5507% ( 270) 00:18:25.285 3.982 - 4.006: 86.4648% ( 239) 00:18:25.285 4.006 - 4.030: 87.6902% ( 153) 00:18:25.285 4.030 - 4.053: 88.9156% ( 153) 00:18:25.285 4.053 - 4.077: 90.2050% ( 161) 00:18:25.285 4.077 - 4.101: 91.3984% ( 149) 00:18:25.285 4.101 - 4.124: 92.2874% ( 111) 00:18:25.285 4.124 - 4.148: 93.0322% ( 93) 00:18:25.285 4.148 - 4.172: 93.5448% ( 64) 00:18:25.285 4.172 - 4.196: 93.9452% ( 50) 00:18:25.285 4.196 - 4.219: 94.3697% ( 53) 00:18:25.285 4.219 - 4.243: 94.5939% ( 28) 00:18:25.285 4.243 - 4.267: 94.7541% ( 20) 00:18:25.285 4.267 - 4.290: 94.8743% ( 15) 00:18:25.285 4.290 - 4.314: 95.0104% ( 17) 00:18:25.285 4.314 - 4.338: 95.1786% ( 21) 00:18:25.285 4.338 - 4.361: 95.2106% ( 4) 00:18:25.285 4.361 - 4.385: 95.2827% ( 9) 00:18:25.285 4.385 - 4.409: 95.3708% ( 11) 00:18:25.285 4.409 - 4.433: 95.4429% ( 9) 00:18:25.285 4.433 - 4.456: 95.4669% ( 3) 00:18:25.285 4.456 - 4.480: 95.4829% ( 2) 00:18:25.285 4.480 - 4.504: 95.5230% ( 5) 00:18:25.285 4.504 - 4.527: 95.5470% ( 3) 00:18:25.285 4.527 - 4.551: 95.5630% ( 2) 00:18:25.285 4.551 - 4.575: 95.5871% ( 3) 00:18:25.285 4.575 - 4.599: 95.5951% ( 1) 00:18:25.285 4.622 - 4.646: 95.6191% ( 3) 00:18:25.285 4.646 - 4.670: 95.6351% ( 2) 00:18:25.285 4.670 - 4.693: 95.6511% ( 2) 00:18:25.285 4.693 - 4.717: 95.6912% ( 5) 00:18:25.285 4.717 - 4.741: 95.7152% ( 3) 00:18:25.285 4.741 - 4.764: 95.7312% ( 2) 00:18:25.285 4.764 - 4.788: 95.7873% ( 7) 00:18:25.285 4.788 - 4.812: 95.8514% ( 8) 00:18:25.285 4.812 - 4.836: 95.8834% ( 4) 00:18:25.285 4.836 - 4.859: 95.9715% ( 11) 00:18:25.285 4.859 - 
4.883: 96.0996% ( 16) 00:18:25.285 4.883 - 4.907: 96.1957% ( 12) 00:18:25.285 4.907 - 4.930: 96.2598% ( 8) 00:18:25.285 4.930 - 4.954: 96.2918% ( 4) 00:18:25.285 4.954 - 4.978: 96.3639% ( 9) 00:18:25.285 4.978 - 5.001: 96.3880% ( 3) 00:18:25.285 5.001 - 5.025: 96.4360% ( 6) 00:18:25.285 5.049 - 5.073: 96.4520% ( 2) 00:18:25.285 5.073 - 5.096: 96.4841% ( 4) 00:18:25.285 5.096 - 5.120: 96.5081% ( 3) 00:18:25.285 5.120 - 5.144: 96.5481% ( 5) 00:18:25.285 5.144 - 5.167: 96.5722% ( 3) 00:18:25.285 5.167 - 5.191: 96.5962% ( 3) 00:18:25.285 5.191 - 5.215: 96.6202% ( 3) 00:18:25.285 5.215 - 5.239: 96.6442% ( 3) 00:18:25.285 5.239 - 5.262: 96.6683% ( 3) 00:18:25.285 5.262 - 5.286: 96.7003% ( 4) 00:18:25.285 5.286 - 5.310: 96.7323% ( 4) 00:18:25.285 5.310 - 5.333: 96.7564% ( 3) 00:18:25.285 5.333 - 5.357: 96.7804% ( 3) 00:18:25.285 5.357 - 5.381: 96.8044% ( 3) 00:18:25.285 5.381 - 5.404: 96.8204% ( 2) 00:18:25.285 5.404 - 5.428: 96.8445% ( 3) 00:18:25.285 5.428 - 5.452: 96.8605% ( 2) 00:18:25.285 5.499 - 5.523: 96.8765% ( 2) 00:18:25.286 5.523 - 5.547: 96.8925% ( 2) 00:18:25.286 5.547 - 5.570: 96.9085% ( 2) 00:18:25.286 5.570 - 5.594: 96.9165% ( 1) 00:18:25.286 5.618 - 5.641: 96.9246% ( 1) 00:18:25.286 5.641 - 5.665: 96.9406% ( 2) 00:18:25.286 5.689 - 5.713: 96.9486% ( 1) 00:18:25.286 5.736 - 5.760: 96.9726% ( 3) 00:18:25.286 5.807 - 5.831: 96.9886% ( 2) 00:18:25.286 5.831 - 5.855: 96.9966% ( 1) 00:18:25.286 5.855 - 5.879: 97.0046% ( 1) 00:18:25.286 5.973 - 5.997: 97.0127% ( 1) 00:18:25.286 5.997 - 6.021: 97.0287% ( 2) 00:18:25.286 6.021 - 6.044: 97.0367% ( 1) 00:18:25.286 6.068 - 6.116: 97.0447% ( 1) 00:18:25.286 6.116 - 6.163: 97.0527% ( 1) 00:18:25.286 6.163 - 6.210: 97.0607% ( 1) 00:18:25.286 6.210 - 6.258: 97.0687% ( 1) 00:18:25.286 6.258 - 6.305: 97.0847% ( 2) 00:18:25.286 6.305 - 6.353: 97.0927% ( 1) 00:18:25.286 6.447 - 6.495: 97.1008% ( 1) 00:18:25.286 6.495 - 6.542: 97.1168% ( 2) 00:18:25.286 6.542 - 6.590: 97.1248% ( 1) 00:18:25.286 6.637 - 6.684: 97.1488% ( 3) 00:18:25.286 6.684 - 6.732: 97.1568% ( 1) 00:18:25.286 6.732 - 6.779: 97.1648% ( 1) 00:18:25.286 6.779 - 6.827: 97.1728% ( 1) 00:18:25.286 6.874 - 6.921: 97.1808% ( 1) 00:18:25.286 6.921 - 6.969: 97.1889% ( 1) 00:18:25.286 6.969 - 7.016: 97.1969% ( 1) 00:18:25.286 7.111 - 7.159: 97.2129% ( 2) 00:18:25.286 7.159 - 7.206: 97.2209% ( 1) 00:18:25.286 7.253 - 7.301: 97.2289% ( 1) 00:18:25.286 7.348 - 7.396: 97.2369% ( 1) 00:18:25.286 7.396 - 7.443: 97.2449% ( 1) 00:18:25.286 7.490 - 7.538: 97.2529% ( 1) 00:18:25.286 7.538 - 7.585: 97.2609% ( 1) 00:18:25.286 7.585 - 7.633: 97.2850% ( 3) 00:18:25.286 7.633 - 7.680: 97.2930% ( 1) 00:18:25.286 7.680 - 7.727: 97.3090% ( 2) 00:18:25.286 7.727 - 7.775: 97.3170% ( 1) 00:18:25.286 7.775 - 7.822: 97.3250% ( 1) 00:18:25.286 7.822 - 7.870: 97.3330% ( 1) 00:18:25.286 7.964 - 8.012: 97.3490% ( 2) 00:18:25.286 8.012 - 8.059: 97.3731% ( 3) 00:18:25.286 8.059 - 8.107: 97.3891% ( 2) 00:18:25.286 8.107 - 8.154: 97.4051% ( 2) 00:18:25.286 8.154 - 8.201: 97.4211% ( 2) 00:18:25.286 8.201 - 8.249: 97.4291% ( 1) 00:18:25.286 8.296 - 8.344: 97.4451% ( 2) 00:18:25.286 8.344 - 8.391: 97.4612% ( 2) 00:18:25.286 8.439 - 8.486: 97.4692% ( 1) 00:18:25.286 8.486 - 8.533: 97.4772% ( 1) 00:18:25.286 8.581 - 8.628: 97.5012% ( 3) 00:18:25.286 8.628 - 8.676: 97.5332% ( 4) 00:18:25.286 8.676 - 8.723: 97.5412% ( 1) 00:18:25.286 8.723 - 8.770: 97.5493% ( 1) 00:18:25.286 8.818 - 8.865: 97.5733% ( 3) 00:18:25.286 8.865 - 8.913: 97.5893% ( 2) 00:18:25.286 8.960 - 9.007: 97.6053% ( 2) 00:18:25.286 9.007 - 9.055: 97.6213% ( 2) 
00:18:25.286 9.055 - 9.102: 97.6454% ( 3) 00:18:25.286 9.102 - 9.150: 97.6694% ( 3) 00:18:25.286 9.197 - 9.244: 97.6774% ( 1) 00:18:25.286 9.292 - 9.339: 97.6854% ( 1) 00:18:25.286 9.387 - 9.434: 97.7014% ( 2) 00:18:25.286 9.434 - 9.481: 97.7094% ( 1) 00:18:25.286 9.481 - 9.529: 97.7174% ( 1) 00:18:25.286 9.529 - 9.576: 97.7335% ( 2) 00:18:25.286 9.576 - 9.624: 97.7415% ( 1) 00:18:25.286 9.624 - 9.671: 97.7495% ( 1) 00:18:25.286 9.719 - 9.766: 97.7655% ( 2) 00:18:25.286 9.766 - 9.813: 97.7815% ( 2) 00:18:25.286 9.813 - 9.861: 97.7895% ( 1) 00:18:25.286 9.861 - 9.908: 97.7975% ( 1) 00:18:25.286 9.908 - 9.956: 97.8136% ( 2) 00:18:25.286 9.956 - 10.003: 97.8216% ( 1) 00:18:25.286 10.003 - 10.050: 97.8456% ( 3) 00:18:25.286 10.050 - 10.098: 97.8616% ( 2) 00:18:25.286 10.145 - 10.193: 97.8696% ( 1) 00:18:25.286 10.193 - 10.240: 97.8776% ( 1) 00:18:25.286 10.240 - 10.287: 97.8856% ( 1) 00:18:25.286 10.430 - 10.477: 97.8936% ( 1) 00:18:25.286 10.572 - 10.619: 97.9016% ( 1) 00:18:25.286 10.619 - 10.667: 97.9337% ( 4) 00:18:25.286 10.667 - 10.714: 97.9657% ( 4) 00:18:25.286 10.714 - 10.761: 97.9737% ( 1) 00:18:25.286 10.761 - 10.809: 97.9817% ( 1) 00:18:25.286 10.904 - 10.951: 97.9897% ( 1) 00:18:25.286 10.951 - 10.999: 98.0138% ( 3) 00:18:25.286 10.999 - 11.046: 98.0218% ( 1) 00:18:25.286 11.046 - 11.093: 98.0378% ( 2) 00:18:25.286 11.093 - 11.141: 98.0458% ( 1) 00:18:25.286 11.188 - 11.236: 98.0538% ( 1) 00:18:25.286 11.330 - 11.378: 98.0698% ( 2) 00:18:25.286 11.378 - 11.425: 98.0778% ( 1) 00:18:25.286 11.520 - 11.567: 98.0859% ( 1) 00:18:25.286 11.567 - 11.615: 98.0939% ( 1) 00:18:25.286 11.804 - 11.852: 98.1099% ( 2) 00:18:25.286 11.947 - 11.994: 98.1259% ( 2) 00:18:25.286 11.994 - 12.041: 98.1339% ( 1) 00:18:25.286 12.089 - 12.136: 98.1419% ( 1) 00:18:25.286 12.136 - 12.231: 98.1659% ( 3) 00:18:25.286 12.231 - 12.326: 98.1740% ( 1) 00:18:25.286 12.326 - 12.421: 98.1820% ( 1) 00:18:25.286 12.421 - 12.516: 98.1980% ( 2) 00:18:25.286 12.516 - 12.610: 98.2300% ( 4) 00:18:25.286 12.610 - 12.705: 98.2380% ( 1) 00:18:25.286 12.705 - 12.800: 98.2621% ( 3) 00:18:25.286 12.800 - 12.895: 98.2701% ( 1) 00:18:25.286 12.895 - 12.990: 98.2781% ( 1) 00:18:25.286 12.990 - 13.084: 98.2941% ( 2) 00:18:25.286 13.179 - 13.274: 98.3021% ( 1) 00:18:25.286 13.274 - 13.369: 98.3341% ( 4) 00:18:25.286 13.369 - 13.464: 98.3502% ( 2) 00:18:25.286 13.464 - 13.559: 98.3662% ( 2) 00:18:25.286 13.559 - 13.653: 98.3742% ( 1) 00:18:25.286 13.748 - 13.843: 98.3822% ( 1) 00:18:25.286 13.843 - 13.938: 98.3982% ( 2) 00:18:25.286 13.938 - 14.033: 98.4222% ( 3) 00:18:25.286 14.033 - 14.127: 98.4302% ( 1) 00:18:25.286 14.127 - 14.222: 98.4383% ( 1) 00:18:25.286 14.222 - 14.317: 98.4783% ( 5) 00:18:25.286 14.317 - 14.412: 98.4863% ( 1) 00:18:25.286 14.412 - 14.507: 98.4943% ( 1) 00:18:25.286 14.507 - 14.601: 98.5183% ( 3) 00:18:25.286 14.601 - 14.696: 98.5424% ( 3) 00:18:25.286 14.696 - 14.791: 98.5584% ( 2) 00:18:25.286 14.791 - 14.886: 98.5664% ( 1) 00:18:25.286 14.886 - 14.981: 98.5744% ( 1) 00:18:25.286 14.981 - 15.076: 98.5904% ( 2) 00:18:25.286 15.170 - 15.265: 98.5984% ( 1) 00:18:25.286 15.265 - 15.360: 98.6064% ( 1) 00:18:25.286 15.360 - 15.455: 98.6144% ( 1) 00:18:25.286 15.455 - 15.550: 98.6225% ( 1) 00:18:25.286 16.403 - 16.498: 98.6305% ( 1) 00:18:25.286 17.161 - 17.256: 98.6465% ( 2) 00:18:25.286 17.256 - 17.351: 98.6785% ( 4) 00:18:25.286 17.351 - 17.446: 98.7266% ( 6) 00:18:25.286 17.446 - 17.541: 98.7826% ( 7) 00:18:25.286 17.541 - 17.636: 98.7906% ( 1) 00:18:25.286 17.636 - 17.730: 98.8147% ( 3) 00:18:25.286 17.730 - 
17.825: 98.8707% ( 7) 00:18:25.286 17.825 - 17.920: 98.9268% ( 7) 00:18:25.286 17.920 - 18.015: 99.0309% ( 13) 00:18:25.286 18.015 - 18.110: 99.0630% ( 4) 00:18:25.286 18.110 - 18.204: 99.1591% ( 12) 00:18:25.286 18.204 - 18.299: 99.2311% ( 9) 00:18:25.286 18.394 - 18.489: 99.3513% ( 15) 00:18:25.286 18.489 - 18.584: 99.3993% ( 6) 00:18:25.286 18.584 - 18.679: 99.4874% ( 11) 00:18:25.286 18.679 - 18.773: 99.5275% ( 5) 00:18:25.286 18.773 - 18.868: 99.5996% ( 9) 00:18:25.286 18.868 - 18.963: 99.6316% ( 4) 00:18:25.286 18.963 - 19.058: 99.6476% ( 2) 00:18:25.286 19.058 - 19.153: 99.6716% ( 3) 00:18:25.286 19.153 - 19.247: 99.6796% ( 1) 00:18:25.286 19.437 - 19.532: 99.6957% ( 2) 00:18:25.286 19.816 - 19.911: 99.7037% ( 1) 00:18:25.286 20.764 - 20.859: 99.7117% ( 1) 00:18:25.286 20.859 - 20.954: 99.7197% ( 1) 00:18:25.286 21.713 - 21.807: 99.7277% ( 1) 00:18:25.286 21.902 - 21.997: 99.7357% ( 1) 00:18:25.286 22.376 - 22.471: 99.7437% ( 1) 00:18:25.286 23.514 - 23.609: 99.7517% ( 1) 00:18:25.286 23.609 - 23.704: 99.7597% ( 1) 00:18:25.286 23.704 - 23.799: 99.7677% ( 1) 00:18:25.286 24.841 - 25.031: 99.7757% ( 1) 00:18:25.286 25.221 - 25.410: 99.7918% ( 2) 00:18:25.286 26.169 - 26.359: 99.8078% ( 2) 00:18:25.286 26.359 - 26.548: 99.8238% ( 2) 00:18:25.286 26.927 - 27.117: 99.8398% ( 2) 00:18:25.286 27.307 - 27.496: 99.8478% ( 1) 00:18:25.286 27.496 - 27.686: 99.8638% ( 2) 00:18:25.286 27.686 - 27.876: 99.8719% ( 1) 00:18:25.286 27.876 - 28.065: 99.8799% ( 1) 00:18:25.286 28.065 - 28.255: 99.8879% ( 1) 00:18:25.286 28.255 - 28.444: 99.8959% ( 1) 00:18:25.286 28.444 - 28.634: 99.9119% ( 2) 00:18:25.286 3980.705 - 4004.978: 99.9680% ( 7) 00:18:25.286 4004.978 - 4029.250: 99.9840% ( 2) 00:18:25.286 4029.250 - 4053.523: 100.0000% ( 2) 00:18:25.286 00:18:25.287 Complete histogram 00:18:25.287 ================== 00:18:25.287 Range in us Cumulative Count 00:18:25.287 2.074 - 2.086: 0.3444% ( 43) 00:18:25.287 2.086 - 2.098: 15.6015% ( 1905) 00:18:25.287 2.098 - 2.110: 36.1685% ( 2568) 00:18:25.287 2.110 - 2.121: 38.7474% ( 322) 00:18:25.287 2.121 - 2.133: 42.3514% ( 450) 00:18:25.287 2.133 - 2.145: 44.6500% ( 287) 00:18:25.287 2.145 - 2.157: 46.8124% ( 270) 00:18:25.287 2.157 - 2.169: 60.3876% ( 1695) 00:18:25.287 2.169 - 2.181: 68.9012% ( 1063) 00:18:25.287 2.181 - 2.193: 70.6952% ( 224) 00:18:25.287 2.193 - 2.204: 73.2661% ( 321) 00:18:25.287 2.204 - 2.216: 74.9960% ( 216) 00:18:25.287 2.216 - 2.228: 76.1092% ( 139) 00:18:25.287 2.228 - 2.240: 80.9947% ( 610) 00:18:25.287 2.240 - 2.252: 85.8401% ( 605) 00:18:25.287 2.252 - 2.264: 88.2268% ( 298) 00:18:25.287 2.264 - 2.276: 89.8286% ( 200) 00:18:25.287 2.276 - 2.287: 90.8538% ( 128) 00:18:25.287 2.287 - 2.299: 91.2702% ( 52) 00:18:25.287 2.299 - 2.311: 91.8469% ( 72) 00:18:25.287 2.311 - 2.323: 92.5036% ( 82) 00:18:25.287 2.323 - 2.335: 93.6249% ( 140) 00:18:25.287 2.335 - 2.347: 94.3377% ( 89) 00:18:25.287 2.347 - 2.359: 94.5139% ( 22) 00:18:25.287 2.359 - 2.370: 94.5619% ( 6) 00:18:25.287 2.370 - 2.382: 94.6100% ( 6) 00:18:25.287 2.382 - 2.394: 94.7782% ( 21) 00:18:25.287 2.394 - 2.406: 95.1145% ( 42) 00:18:25.287 2.406 - 2.418: 95.5710% ( 57) 00:18:25.287 2.418 - 2.430: 95.7472% ( 22) 00:18:25.287 2.430 - 2.441: 95.9715% ( 28) 00:18:25.287 2.441 - 2.453: 96.1557% ( 23) 00:18:25.287 2.453 - 2.465: 96.3159% ( 20) 00:18:25.287 2.465 - 2.477: 96.5561% ( 30) 00:18:25.287 2.477 - 2.489: 96.7243% ( 21) 00:18:25.287 2.489 - 2.501: 96.8765% ( 19) 00:18:25.287 2.501 - 2.513: 96.9886% ( 14) 00:18:25.287 2.513 - 2.524: 97.1568% ( 21) 00:18:25.287 2.524 - 2.536: 
97.2209% ( 8) 00:18:25.287 2.536 - 2.548: 97.3731% ( 19) 00:18:25.287 2.548 - 2.560: 97.4051% ( 4) 00:18:25.287 2.560 - 2.572: 97.4451% ( 5) 00:18:25.287 2.572 - 2.584: 97.4932% ( 6) 00:18:25.287 2.584 - 2.596: 97.5493% ( 7) 00:18:25.287 2.596 - 2.607: 97.5813% ( 4) 00:18:25.287 2.607 - 2.619: 97.5893% ( 1) 00:18:25.287 2.619 - 2.631: 97.6053% ( 2) 00:18:25.287 2.631 - 2.643: 97.6133% ( 1) 00:18:25.287 2.643 - 2.655: 97.6213% ( 1) 00:18:25.287 2.655 - 2.667: 97.6534% ( 4) 00:18:25.287 2.667 - 2.679: 97.6614% ( 1) 00:18:25.287 2.690 - 2.702: 97.6694% ( 1) 00:18:25.287 2.702 - 2.714: 97.6774% ( 1) 00:18:25.287 2.714 - 2.726: 97.6854% ( 1) 00:18:25.287 2.750 - 2.761: 97.7014% ( 2) 00:18:25.287 2.761 - 2.773: 97.7255% ( 3) 00:18:25.287 2.773 - 2.785: 97.7335% ( 1) 00:18:25.287 2.785 - 2.797: 97.7415% ( 1) 00:18:25.287 2.797 - 2.809: 97.7495% ( 1) 00:18:25.287 2.833 - 2.844: 97.7575% ( 1) 00:18:25.287 2.844 - 2.856: 97.7655% ( 1) 00:18:25.287 2.856 - 2.868: 97.7735% ( 1) 00:18:25.287 2.868 - 2.880: 97.7815% ( 1) 00:18:25.287 2.892 - 2.904: 97.7975% ( 2) 00:18:25.287 2.916 - 2.927: 97.8216% ( 3) 00:18:25.287 2.927 - 2.939: 97.8296% ( 1) 00:18:25.287 2.951 - 2.963: 97.8456% ( 2) 00:18:25.287 2.999 - 3.010: 97.8536% ( 1) 00:18:25.287 3.022 - 3.034: 97.8696% ( 2) 00:18:25.287 3.034 - 3.058: 97.8856% ( 2) 00:18:25.287 3.105 - 3.129: 97.8936% ( 1) 00:18:25.287 3.153 - 3.176: 97.9097% ( 2) 00:18:25.287 3.176 - 3.200: 97.9337% ( 3) 00:18:25.287 3.224 - 3.247: 97.9417% ( 1) 00:18:25.287 3.247 - 3.271: 97.9657% ( 3) 00:18:25.287 3.271 - 3.295: 97.9737% ( 1) 00:18:25.287 3.295 - 3.319: 97.9897% ( 2) 00:18:25.287 3.342 - 3.366: 98.0058% ( 2) 00:18:25.287 3.366 - 3.390: 98.0138% ( 1) 00:18:25.287 3.390 - 3.413: 98.0298% ( 2) 00:18:25.287 3.413 - 3.437: 98.0618% ( 4) 00:18:25.287 3.437 - 3.461: 98.0859% ( 3) 00:18:25.287 3.461 - 3.484: 98.1019% ( 2) 00:18:25.287 3.484 - 3.508: 98.1259% ( 3) 00:18:25.287 3.532 - 3.556: 98.1339% ( 1) 00:18:25.287 3.556 - 3.579: 98.1419% ( 1) 00:18:25.287 3.579 - 3.603: 98.1579% ( 2) 00:18:25.287 3.603 - 3.627: 98.1659% ( 1) 00:18:25.287 3.674 - 3.698: 98.1820% ( 2) 00:18:25.287 3.698 - 3.721: 98.1900% ( 1) 00:18:25.287 3.721 - 3.745: 98.1980% ( 1) 00:18:25.287 3.745 - 3.769: 98.2220% ( 3) 00:18:25.287 3.769 - 3.793: 98.2300% ( 1) 00:18:25.287 3.793 - 3.816: 98.2380% ( 1) 00:18:25.287 3.816 - 3.840: 98.2460% ( 1) 00:18:25.287 3.935 - 3.959: 98.2540% ( 1) 00:18:25.287 3.982 - 4.006: 98.2781% ( 3) 00:18:25.287 4.077 - 4.101: 98.2861% ( 1) 00:18:25.287 4.101 - 4.124: 98.2941% ( 1) 00:18:25.287 4.219 - 4.243: 98.3021% ( 1) 00:18:25.287 4.338 - 4.361: 98.3101% ( 1) 00:18:25.287 4.504 - 4.527: 98.3181% ( 1) 00:18:25.287 4.859 - 4.883: 98.3261% ( 1) 00:18:25.287 5.262 - 5.286: 98.3341% ( 1) 00:18:25.287 5.523 - 5.547: 98.3421% ( 1) 00:18:25.287 5.879 - 5.902: 98.3502% ( 1) 00:18:25.287 6.305 - 6.353: 98.3582% ( 1) 00:18:25.287 6.684 - 6.732: 98.3662% ( 1) 00:18:25.287 6.779 - 6.827: 98.3742% ( 1) 00:18:25.287 6.874 - 6.921: 98.3902% ( 2) 00:18:25.287 6.921 - 6.969: 98.3982% ( 1) 00:18:25.287 6.969 - 7.016: 98.4062% ( 1) 00:18:25.287 7.111 - 7.159: 98.4142% ( 1) 00:18:25.287 7.348 - 7.396: 98.4222% ( 1) 00:18:25.287 7.396 - 7.443: 98.4302% ( 1) 00:18:25.287 7.585 - 7.633: 98.4383% ( 1) 00:18:25.287 7.633 - 7.680: 98.4463% ( 1) 00:18:25.287 7.727 - 7.775: 98.4543% ( 1) 00:18:25.287 7.775 - 7.822: 98.4703% ( 2) 00:18:25.287 7.964 - 8.012: 98.4783% ( 1) 00:18:25.287 8.012 - 8.059: 98.4863% ( 1) 00:18:25.287 8.107 - 8.154: 98.4943% ( 1) 00:18:25.287 8.296 - 8.344: 98.5023% ( 1) 
00:18:25.287 9.197 - 9.244: 98.5103% ( 1) 00:18:25.287 9.387 - 9.434: 98.5183% ( 1) 00:18:25.287 9.481 - 9.529: 98.5263% ( 1) 00:18:25.287 10.240 - 10.287: 98.5344% ( 1) 00:18:25.287 11.141 - 11.188: 98.5424% ( 1) 00:18:25.287 11.188 - 11.236: 98.5504% ( 1) 00:18:25.287 11.330 - 11.378: 98.5584% ( 1) 00:18:25.287 12.421 - 12.516: 98.5664% ( 1) 00:18:25.287 15.644 - 15.739: 98.5824% ( 2) 00:18:25.287 15.739 - 15.834: 98.6385% ( 7) 00:18:25.287 15.834 - 15.929: 98.6625% ( 3) 00:18:25.287 15.929 - 16.024: 98.7025% ( 5) 00:18:25.287 16.024 - 16.119: 98.7106% ( 1) 00:18:25.287 16.119 - 16.213: 98.7266% ( 2) 00:18:25.287 16.213 - 16.308: 98.7506% ( 3) 00:18:25.287 16.308 - 16.403: 98.8147% ( 8) 00:18:25.287 16.403 - 16.498: 98.8467% ( 4) 00:18:25.287 16.498 - 16.593: 98.9668% ( 15) 00:18:25.287 16.593 - 16.687: 99.0389% ( 9) 00:18:25.287 16.687 - 16.782: 99.0790% ( 5) 00:18:25.287 16.782 - 16.877: 99.1591% ( 10) 00:18:25.287 16.877 - 16.972: 99.2231% ( 8) 00:18:25.287 17.067 - 17.161: 99.2391% ( 2) 00:18:25.287 17.161 - 17.256: 99.2712% ( 4) 00:18:25.287 17.256 - 17.351: 99.2872% ( 2) 00:18:25.287 17.351 - 17.446: 99.3032% ( 2) 00:18:25.287 17.446 - 17.541: 99.3112% ( 1) 00:18:25.287 17.541 - 17.636: 99.3353% ( 3) 00:18:25.287 17.636 - 17.730: 99.3433% ( 1) 00:18:25.287 17.730 - 17.825: 99.3513% ( 1) 00:18:25.287 18.110 - 18.204: 99.3593% ( 1) 00:18:25.287 18.204 - 18.299: 99.3673% ( 1) 00:18:25.287 18.489 - 18.584: 99.3753% ( 1) 00:18:25.287 18.679 - 18.773: 99.3833% ( 1) 00:18:25.287 19.342 - 19.437: 99.3913% ( 1) 00:18:25.287 20.670 - 20.764: 99.3993% ( 1) 00:18:25.287 39.822 - 40.012: 99.4073% ( 1) 00:18:25.287 102.400 - 103.159: 99.4153% ( 1) 00:18:25.287 3980.705 - 4004.978: 99.9039% ( 61) 00:18:25.287 4004.978 - 4029.250: 100.0000% ( [2024-10-13 01:29:10.746916] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:25.287 12) 00:18:25.287 00:18:25.287 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:18:25.287 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:25.287 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:18:25.287 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:18:25.287 01:29:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:25.545 [ 00:18:25.545 { 00:18:25.545 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:25.545 "subtype": "Discovery", 00:18:25.545 "listen_addresses": [], 00:18:25.545 "allow_any_host": true, 00:18:25.545 "hosts": [] 00:18:25.545 }, 00:18:25.545 { 00:18:25.545 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:25.545 "subtype": "NVMe", 00:18:25.545 "listen_addresses": [ 00:18:25.545 { 00:18:25.545 "trtype": "VFIOUSER", 00:18:25.545 "adrfam": "IPv4", 00:18:25.545 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:25.545 "trsvcid": "0" 00:18:25.545 } 00:18:25.545 ], 00:18:25.545 "allow_any_host": true, 00:18:25.545 "hosts": [], 00:18:25.545 "serial_number": "SPDK1", 00:18:25.545 "model_number": "SPDK bdev Controller", 00:18:25.545 "max_namespaces": 32, 00:18:25.545 "min_cntlid": 1, 00:18:25.545 "max_cntlid": 65519, 
00:18:25.545 "namespaces": [ 00:18:25.545 { 00:18:25.545 "nsid": 1, 00:18:25.545 "bdev_name": "Malloc1", 00:18:25.545 "name": "Malloc1", 00:18:25.545 "nguid": "B62CEC0E16A34DAFAF958B0C29BC506E", 00:18:25.545 "uuid": "b62cec0e-16a3-4daf-af95-8b0c29bc506e" 00:18:25.545 } 00:18:25.545 ] 00:18:25.545 }, 00:18:25.545 { 00:18:25.545 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:25.545 "subtype": "NVMe", 00:18:25.545 "listen_addresses": [ 00:18:25.545 { 00:18:25.545 "trtype": "VFIOUSER", 00:18:25.545 "adrfam": "IPv4", 00:18:25.545 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:25.545 "trsvcid": "0" 00:18:25.545 } 00:18:25.545 ], 00:18:25.546 "allow_any_host": true, 00:18:25.546 "hosts": [], 00:18:25.546 "serial_number": "SPDK2", 00:18:25.546 "model_number": "SPDK bdev Controller", 00:18:25.546 "max_namespaces": 32, 00:18:25.546 "min_cntlid": 1, 00:18:25.546 "max_cntlid": 65519, 00:18:25.546 "namespaces": [ 00:18:25.546 { 00:18:25.546 "nsid": 1, 00:18:25.546 "bdev_name": "Malloc2", 00:18:25.546 "name": "Malloc2", 00:18:25.546 "nguid": "EB5E63C8EB934AF49721969D0501741D", 00:18:25.546 "uuid": "eb5e63c8-eb93-4af4-9721-969d0501741d" 00:18:25.546 } 00:18:25.546 ] 00:18:25.546 } 00:18:25.546 ] 00:18:25.803 01:29:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:25.803 01:29:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1593356 00:18:25.803 01:29:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:18:25.803 01:29:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:25.803 01:29:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:18:25.803 01:29:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:25.803 01:29:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:25.803 01:29:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:18:25.803 01:29:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:25.803 01:29:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:18:25.803 [2024-10-13 01:29:11.286971] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:26.061 Malloc3 00:18:26.061 01:29:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:18:26.319 [2024-10-13 01:29:11.692008] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:26.319 01:29:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:26.319 Asynchronous Event Request test 00:18:26.319 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:26.319 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:26.319 Registering asynchronous event callbacks... 00:18:26.319 Starting namespace attribute notice tests for all controllers... 00:18:26.319 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:26.319 aer_cb - Changed Namespace 00:18:26.319 Cleaning up... 00:18:26.578 [ 00:18:26.578 { 00:18:26.578 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:26.578 "subtype": "Discovery", 00:18:26.578 "listen_addresses": [], 00:18:26.578 "allow_any_host": true, 00:18:26.578 "hosts": [] 00:18:26.578 }, 00:18:26.578 { 00:18:26.578 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:26.578 "subtype": "NVMe", 00:18:26.578 "listen_addresses": [ 00:18:26.578 { 00:18:26.578 "trtype": "VFIOUSER", 00:18:26.578 "adrfam": "IPv4", 00:18:26.578 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:26.578 "trsvcid": "0" 00:18:26.578 } 00:18:26.578 ], 00:18:26.578 "allow_any_host": true, 00:18:26.578 "hosts": [], 00:18:26.578 "serial_number": "SPDK1", 00:18:26.578 "model_number": "SPDK bdev Controller", 00:18:26.578 "max_namespaces": 32, 00:18:26.578 "min_cntlid": 1, 00:18:26.578 "max_cntlid": 65519, 00:18:26.578 "namespaces": [ 00:18:26.578 { 00:18:26.578 "nsid": 1, 00:18:26.578 "bdev_name": "Malloc1", 00:18:26.578 "name": "Malloc1", 00:18:26.578 "nguid": "B62CEC0E16A34DAFAF958B0C29BC506E", 00:18:26.578 "uuid": "b62cec0e-16a3-4daf-af95-8b0c29bc506e" 00:18:26.578 }, 00:18:26.578 { 00:18:26.578 "nsid": 2, 00:18:26.578 "bdev_name": "Malloc3", 00:18:26.578 "name": "Malloc3", 00:18:26.578 "nguid": "379AB7E24EBD42F08AEAED55BD8BE65C", 00:18:26.578 "uuid": "379ab7e2-4ebd-42f0-8aea-ed55bd8be65c" 00:18:26.578 } 00:18:26.578 ] 00:18:26.578 }, 00:18:26.578 { 00:18:26.578 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:26.578 "subtype": "NVMe", 00:18:26.578 "listen_addresses": [ 00:18:26.578 { 00:18:26.578 "trtype": "VFIOUSER", 00:18:26.578 "adrfam": "IPv4", 00:18:26.578 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:26.578 "trsvcid": "0" 00:18:26.578 } 00:18:26.578 ], 00:18:26.578 "allow_any_host": true, 00:18:26.578 "hosts": [], 00:18:26.578 "serial_number": "SPDK2", 00:18:26.578 "model_number": "SPDK bdev 
Controller", 00:18:26.578 "max_namespaces": 32, 00:18:26.578 "min_cntlid": 1, 00:18:26.578 "max_cntlid": 65519, 00:18:26.578 "namespaces": [ 00:18:26.578 { 00:18:26.578 "nsid": 1, 00:18:26.578 "bdev_name": "Malloc2", 00:18:26.578 "name": "Malloc2", 00:18:26.578 "nguid": "EB5E63C8EB934AF49721969D0501741D", 00:18:26.578 "uuid": "eb5e63c8-eb93-4af4-9721-969d0501741d" 00:18:26.578 } 00:18:26.578 ] 00:18:26.578 } 00:18:26.578 ] 00:18:26.578 01:29:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1593356 00:18:26.578 01:29:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:26.578 01:29:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:26.578 01:29:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:18:26.578 01:29:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:26.578 [2024-10-13 01:29:11.994993] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:18:26.578 [2024-10-13 01:29:11.995032] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1593377 ] 00:18:26.578 [2024-10-13 01:29:12.028552] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:18:26.578 [2024-10-13 01:29:12.036804] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:26.578 [2024-10-13 01:29:12.036852] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fca01688000 00:18:26.578 [2024-10-13 01:29:12.037799] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:26.578 [2024-10-13 01:29:12.038794] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:26.578 [2024-10-13 01:29:12.039798] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:26.578 [2024-10-13 01:29:12.040802] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:26.578 [2024-10-13 01:29:12.041804] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:26.578 [2024-10-13 01:29:12.042813] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:26.578 [2024-10-13 01:29:12.043823] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:26.578 [2024-10-13 01:29:12.044828] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:18:26.578 [2024-10-13 01:29:12.045850] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:26.578 [2024-10-13 01:29:12.045886] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fca00380000 00:18:26.578 [2024-10-13 01:29:12.047007] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:26.578 [2024-10-13 01:29:12.063699] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:18:26.578 [2024-10-13 01:29:12.063737] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:18:26.578 [2024-10-13 01:29:12.065822] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:26.578 [2024-10-13 01:29:12.065885] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:26.579 [2024-10-13 01:29:12.065976] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:18:26.579 [2024-10-13 01:29:12.066000] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:18:26.579 [2024-10-13 01:29:12.066010] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:18:26.579 [2024-10-13 01:29:12.066831] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:18:26.579 [2024-10-13 01:29:12.066863] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:18:26.579 [2024-10-13 01:29:12.066875] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:18:26.579 [2024-10-13 01:29:12.067832] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:26.579 [2024-10-13 01:29:12.067852] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:18:26.579 [2024-10-13 01:29:12.067865] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:18:26.579 [2024-10-13 01:29:12.068838] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:18:26.579 [2024-10-13 01:29:12.068858] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:26.579 [2024-10-13 01:29:12.069839] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:18:26.579 [2024-10-13 01:29:12.069866] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:18:26.579 [2024-10-13 
01:29:12.069875] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:18:26.579 [2024-10-13 01:29:12.069886] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:26.579 [2024-10-13 01:29:12.069995] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:18:26.579 [2024-10-13 01:29:12.070003] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:26.579 [2024-10-13 01:29:12.070011] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:18:26.579 [2024-10-13 01:29:12.070856] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:18:26.579 [2024-10-13 01:29:12.071855] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:18:26.579 [2024-10-13 01:29:12.072869] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:26.579 [2024-10-13 01:29:12.073859] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:26.579 [2024-10-13 01:29:12.073938] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:26.579 [2024-10-13 01:29:12.074901] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:18:26.579 [2024-10-13 01:29:12.074922] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:26.579 [2024-10-13 01:29:12.074931] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:18:26.579 [2024-10-13 01:29:12.074954] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:18:26.579 [2024-10-13 01:29:12.074968] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:18:26.579 [2024-10-13 01:29:12.074992] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:26.579 [2024-10-13 01:29:12.075001] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:26.579 [2024-10-13 01:29:12.075008] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:26.579 [2024-10-13 01:29:12.075026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:26.579 [2024-10-13 01:29:12.081486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:26.579 [2024-10-13 01:29:12.081510] 
nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:18:26.579 [2024-10-13 01:29:12.081524] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:18:26.579 [2024-10-13 01:29:12.081531] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:18:26.579 [2024-10-13 01:29:12.081539] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:26.579 [2024-10-13 01:29:12.081547] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:18:26.579 [2024-10-13 01:29:12.081554] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:18:26.579 [2024-10-13 01:29:12.081562] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:18:26.579 [2024-10-13 01:29:12.081575] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:18:26.579 [2024-10-13 01:29:12.081591] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:26.579 [2024-10-13 01:29:12.089483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:26.579 [2024-10-13 01:29:12.089508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:26.579 [2024-10-13 01:29:12.089521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:26.579 [2024-10-13 01:29:12.089533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:26.579 [2024-10-13 01:29:12.089544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:26.579 [2024-10-13 01:29:12.089553] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:18:26.579 [2024-10-13 01:29:12.089570] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:26.579 [2024-10-13 01:29:12.089585] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:26.579 [2024-10-13 01:29:12.097489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:26.579 [2024-10-13 01:29:12.097508] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:18:26.579 [2024-10-13 01:29:12.097517] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:26.579 [2024-10-13 01:29:12.097528] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:18:26.579 [2024-10-13 01:29:12.097542] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:18:26.579 [2024-10-13 01:29:12.097557] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:26.579 [2024-10-13 01:29:12.105495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:26.579 [2024-10-13 01:29:12.105570] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:18:26.579 [2024-10-13 01:29:12.105587] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:18:26.579 [2024-10-13 01:29:12.105604] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:26.579 [2024-10-13 01:29:12.105613] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:26.579 [2024-10-13 01:29:12.105619] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:26.579 [2024-10-13 01:29:12.105628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:26.579 [2024-10-13 01:29:12.113485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:26.579 [2024-10-13 01:29:12.113508] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:18:26.579 [2024-10-13 01:29:12.113531] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:18:26.579 [2024-10-13 01:29:12.113547] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:18:26.579 [2024-10-13 01:29:12.113559] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:26.579 [2024-10-13 01:29:12.113567] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:26.579 [2024-10-13 01:29:12.113573] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:26.579 [2024-10-13 01:29:12.113583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:26.579 [2024-10-13 01:29:12.121481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:26.579 [2024-10-13 01:29:12.121509] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:26.579 [2024-10-13 01:29:12.121526] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:26.579 
[2024-10-13 01:29:12.121539] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:26.579 [2024-10-13 01:29:12.121547] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:26.579 [2024-10-13 01:29:12.121553] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:26.579 [2024-10-13 01:29:12.121562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:26.579 [2024-10-13 01:29:12.129483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:26.579 [2024-10-13 01:29:12.129505] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:26.579 [2024-10-13 01:29:12.129517] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:18:26.579 [2024-10-13 01:29:12.129531] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:18:26.579 [2024-10-13 01:29:12.129543] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:18:26.580 [2024-10-13 01:29:12.129551] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:26.580 [2024-10-13 01:29:12.129560] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:18:26.580 [2024-10-13 01:29:12.129572] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:18:26.580 [2024-10-13 01:29:12.129580] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:18:26.580 [2024-10-13 01:29:12.129588] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:18:26.580 [2024-10-13 01:29:12.129613] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:26.580 [2024-10-13 01:29:12.137484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:26.580 [2024-10-13 01:29:12.137509] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:26.580 [2024-10-13 01:29:12.145483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:26.580 [2024-10-13 01:29:12.145507] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:26.580 [2024-10-13 01:29:12.153501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:26.580 [2024-10-13 01:29:12.153531] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: 
GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:26.838 [2024-10-13 01:29:12.161491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:26.838 [2024-10-13 01:29:12.161529] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:26.838 [2024-10-13 01:29:12.161542] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:26.838 [2024-10-13 01:29:12.161548] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:26.838 [2024-10-13 01:29:12.161555] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:26.838 [2024-10-13 01:29:12.161561] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:26.838 [2024-10-13 01:29:12.161571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:26.838 [2024-10-13 01:29:12.161583] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:26.838 [2024-10-13 01:29:12.161592] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:26.838 [2024-10-13 01:29:12.161598] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:26.839 [2024-10-13 01:29:12.161606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:26.839 [2024-10-13 01:29:12.161618] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:26.839 [2024-10-13 01:29:12.161626] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:26.839 [2024-10-13 01:29:12.161631] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:26.839 [2024-10-13 01:29:12.161640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:26.839 [2024-10-13 01:29:12.161652] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:26.839 [2024-10-13 01:29:12.161661] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:26.839 [2024-10-13 01:29:12.161667] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:26.839 [2024-10-13 01:29:12.161680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:26.839 [2024-10-13 01:29:12.169485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:26.839 [2024-10-13 01:29:12.169521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:26.839 [2024-10-13 01:29:12.169538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:26.839 [2024-10-13 01:29:12.169550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) 
qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:26.839 ===================================================== 00:18:26.839 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:26.839 ===================================================== 00:18:26.839 Controller Capabilities/Features 00:18:26.839 ================================ 00:18:26.839 Vendor ID: 4e58 00:18:26.839 Subsystem Vendor ID: 4e58 00:18:26.839 Serial Number: SPDK2 00:18:26.839 Model Number: SPDK bdev Controller 00:18:26.839 Firmware Version: 25.01 00:18:26.839 Recommended Arb Burst: 6 00:18:26.839 IEEE OUI Identifier: 8d 6b 50 00:18:26.839 Multi-path I/O 00:18:26.839 May have multiple subsystem ports: Yes 00:18:26.839 May have multiple controllers: Yes 00:18:26.839 Associated with SR-IOV VF: No 00:18:26.839 Max Data Transfer Size: 131072 00:18:26.839 Max Number of Namespaces: 32 00:18:26.839 Max Number of I/O Queues: 127 00:18:26.839 NVMe Specification Version (VS): 1.3 00:18:26.839 NVMe Specification Version (Identify): 1.3 00:18:26.839 Maximum Queue Entries: 256 00:18:26.839 Contiguous Queues Required: Yes 00:18:26.839 Arbitration Mechanisms Supported 00:18:26.839 Weighted Round Robin: Not Supported 00:18:26.839 Vendor Specific: Not Supported 00:18:26.839 Reset Timeout: 15000 ms 00:18:26.839 Doorbell Stride: 4 bytes 00:18:26.839 NVM Subsystem Reset: Not Supported 00:18:26.839 Command Sets Supported 00:18:26.839 NVM Command Set: Supported 00:18:26.839 Boot Partition: Not Supported 00:18:26.839 Memory Page Size Minimum: 4096 bytes 00:18:26.839 Memory Page Size Maximum: 4096 bytes 00:18:26.839 Persistent Memory Region: Not Supported 00:18:26.839 Optional Asynchronous Events Supported 00:18:26.839 Namespace Attribute Notices: Supported 00:18:26.839 Firmware Activation Notices: Not Supported 00:18:26.839 ANA Change Notices: Not Supported 00:18:26.839 PLE Aggregate Log Change Notices: Not Supported 00:18:26.839 LBA Status Info Alert Notices: Not Supported 00:18:26.839 EGE Aggregate Log Change Notices: Not Supported 00:18:26.839 Normal NVM Subsystem Shutdown event: Not Supported 00:18:26.839 Zone Descriptor Change Notices: Not Supported 00:18:26.839 Discovery Log Change Notices: Not Supported 00:18:26.839 Controller Attributes 00:18:26.839 128-bit Host Identifier: Supported 00:18:26.839 Non-Operational Permissive Mode: Not Supported 00:18:26.839 NVM Sets: Not Supported 00:18:26.839 Read Recovery Levels: Not Supported 00:18:26.839 Endurance Groups: Not Supported 00:18:26.839 Predictable Latency Mode: Not Supported 00:18:26.839 Traffic Based Keep ALive: Not Supported 00:18:26.839 Namespace Granularity: Not Supported 00:18:26.839 SQ Associations: Not Supported 00:18:26.839 UUID List: Not Supported 00:18:26.839 Multi-Domain Subsystem: Not Supported 00:18:26.839 Fixed Capacity Management: Not Supported 00:18:26.839 Variable Capacity Management: Not Supported 00:18:26.839 Delete Endurance Group: Not Supported 00:18:26.839 Delete NVM Set: Not Supported 00:18:26.839 Extended LBA Formats Supported: Not Supported 00:18:26.839 Flexible Data Placement Supported: Not Supported 00:18:26.839 00:18:26.839 Controller Memory Buffer Support 00:18:26.839 ================================ 00:18:26.839 Supported: No 00:18:26.839 00:18:26.839 Persistent Memory Region Support 00:18:26.839 ================================ 00:18:26.839 Supported: No 00:18:26.839 00:18:26.839 Admin Command Set Attributes 00:18:26.839 ============================ 00:18:26.839 Security Send/Receive: Not Supported 
00:18:26.839 Format NVM: Not Supported 00:18:26.839 Firmware Activate/Download: Not Supported 00:18:26.839 Namespace Management: Not Supported 00:18:26.839 Device Self-Test: Not Supported 00:18:26.839 Directives: Not Supported 00:18:26.839 NVMe-MI: Not Supported 00:18:26.839 Virtualization Management: Not Supported 00:18:26.839 Doorbell Buffer Config: Not Supported 00:18:26.839 Get LBA Status Capability: Not Supported 00:18:26.839 Command & Feature Lockdown Capability: Not Supported 00:18:26.839 Abort Command Limit: 4 00:18:26.839 Async Event Request Limit: 4 00:18:26.839 Number of Firmware Slots: N/A 00:18:26.839 Firmware Slot 1 Read-Only: N/A 00:18:26.839 Firmware Activation Without Reset: N/A 00:18:26.839 Multiple Update Detection Support: N/A 00:18:26.839 Firmware Update Granularity: No Information Provided 00:18:26.839 Per-Namespace SMART Log: No 00:18:26.839 Asymmetric Namespace Access Log Page: Not Supported 00:18:26.839 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:18:26.839 Command Effects Log Page: Supported 00:18:26.839 Get Log Page Extended Data: Supported 00:18:26.839 Telemetry Log Pages: Not Supported 00:18:26.839 Persistent Event Log Pages: Not Supported 00:18:26.839 Supported Log Pages Log Page: May Support 00:18:26.839 Commands Supported & Effects Log Page: Not Supported 00:18:26.839 Feature Identifiers & Effects Log Page:May Support 00:18:26.839 NVMe-MI Commands & Effects Log Page: May Support 00:18:26.839 Data Area 4 for Telemetry Log: Not Supported 00:18:26.839 Error Log Page Entries Supported: 128 00:18:26.839 Keep Alive: Supported 00:18:26.839 Keep Alive Granularity: 10000 ms 00:18:26.839 00:18:26.839 NVM Command Set Attributes 00:18:26.839 ========================== 00:18:26.839 Submission Queue Entry Size 00:18:26.839 Max: 64 00:18:26.839 Min: 64 00:18:26.839 Completion Queue Entry Size 00:18:26.839 Max: 16 00:18:26.839 Min: 16 00:18:26.839 Number of Namespaces: 32 00:18:26.839 Compare Command: Supported 00:18:26.839 Write Uncorrectable Command: Not Supported 00:18:26.839 Dataset Management Command: Supported 00:18:26.839 Write Zeroes Command: Supported 00:18:26.839 Set Features Save Field: Not Supported 00:18:26.839 Reservations: Not Supported 00:18:26.839 Timestamp: Not Supported 00:18:26.839 Copy: Supported 00:18:26.839 Volatile Write Cache: Present 00:18:26.839 Atomic Write Unit (Normal): 1 00:18:26.839 Atomic Write Unit (PFail): 1 00:18:26.839 Atomic Compare & Write Unit: 1 00:18:26.839 Fused Compare & Write: Supported 00:18:26.839 Scatter-Gather List 00:18:26.839 SGL Command Set: Supported (Dword aligned) 00:18:26.839 SGL Keyed: Not Supported 00:18:26.839 SGL Bit Bucket Descriptor: Not Supported 00:18:26.839 SGL Metadata Pointer: Not Supported 00:18:26.839 Oversized SGL: Not Supported 00:18:26.839 SGL Metadata Address: Not Supported 00:18:26.839 SGL Offset: Not Supported 00:18:26.839 Transport SGL Data Block: Not Supported 00:18:26.839 Replay Protected Memory Block: Not Supported 00:18:26.839 00:18:26.839 Firmware Slot Information 00:18:26.839 ========================= 00:18:26.839 Active slot: 1 00:18:26.839 Slot 1 Firmware Revision: 25.01 00:18:26.839 00:18:26.839 00:18:26.839 Commands Supported and Effects 00:18:26.839 ============================== 00:18:26.839 Admin Commands 00:18:26.839 -------------- 00:18:26.839 Get Log Page (02h): Supported 00:18:26.839 Identify (06h): Supported 00:18:26.839 Abort (08h): Supported 00:18:26.839 Set Features (09h): Supported 00:18:26.839 Get Features (0Ah): Supported 00:18:26.839 Asynchronous Event Request (0Ch): 
Supported 00:18:26.839 Keep Alive (18h): Supported 00:18:26.839 I/O Commands 00:18:26.839 ------------ 00:18:26.839 Flush (00h): Supported LBA-Change 00:18:26.839 Write (01h): Supported LBA-Change 00:18:26.839 Read (02h): Supported 00:18:26.839 Compare (05h): Supported 00:18:26.839 Write Zeroes (08h): Supported LBA-Change 00:18:26.839 Dataset Management (09h): Supported LBA-Change 00:18:26.839 Copy (19h): Supported LBA-Change 00:18:26.839 00:18:26.839 Error Log 00:18:26.839 ========= 00:18:26.839 00:18:26.839 Arbitration 00:18:26.839 =========== 00:18:26.839 Arbitration Burst: 1 00:18:26.839 00:18:26.839 Power Management 00:18:26.839 ================ 00:18:26.839 Number of Power States: 1 00:18:26.839 Current Power State: Power State #0 00:18:26.839 Power State #0: 00:18:26.839 Max Power: 0.00 W 00:18:26.839 Non-Operational State: Operational 00:18:26.840 Entry Latency: Not Reported 00:18:26.840 Exit Latency: Not Reported 00:18:26.840 Relative Read Throughput: 0 00:18:26.840 Relative Read Latency: 0 00:18:26.840 Relative Write Throughput: 0 00:18:26.840 Relative Write Latency: 0 00:18:26.840 Idle Power: Not Reported 00:18:26.840 Active Power: Not Reported 00:18:26.840 Non-Operational Permissive Mode: Not Supported 00:18:26.840 00:18:26.840 Health Information 00:18:26.840 ================== 00:18:26.840 Critical Warnings: 00:18:26.840 Available Spare Space: OK 00:18:26.840 Temperature: OK 00:18:26.840 Device Reliability: OK 00:18:26.840 Read Only: No 00:18:26.840 Volatile Memory Backup: OK 00:18:26.840 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:26.840 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:26.840 Available Spare: 0% 00:18:26.840 Available Sp[2024-10-13 01:29:12.169673] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:26.840 [2024-10-13 01:29:12.177498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:26.840 [2024-10-13 01:29:12.177549] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:18:26.840 [2024-10-13 01:29:12.177567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:26.840 [2024-10-13 01:29:12.177579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:26.840 [2024-10-13 01:29:12.177588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:26.840 [2024-10-13 01:29:12.177597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:26.840 [2024-10-13 01:29:12.181483] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:26.840 [2024-10-13 01:29:12.181507] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:18:26.840 [2024-10-13 01:29:12.181703] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:26.840 [2024-10-13 01:29:12.181791] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:18:26.840 [2024-10-13 01:29:12.181806] 
nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:18:26.840 [2024-10-13 01:29:12.182708] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:18:26.840 [2024-10-13 01:29:12.182732] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:18:26.840 [2024-10-13 01:29:12.182789] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:18:26.840 [2024-10-13 01:29:12.183977] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:26.840 are Threshold: 0% 00:18:26.840 Life Percentage Used: 0% 00:18:26.840 Data Units Read: 0 00:18:26.840 Data Units Written: 0 00:18:26.840 Host Read Commands: 0 00:18:26.840 Host Write Commands: 0 00:18:26.840 Controller Busy Time: 0 minutes 00:18:26.840 Power Cycles: 0 00:18:26.840 Power On Hours: 0 hours 00:18:26.840 Unsafe Shutdowns: 0 00:18:26.840 Unrecoverable Media Errors: 0 00:18:26.840 Lifetime Error Log Entries: 0 00:18:26.840 Warning Temperature Time: 0 minutes 00:18:26.840 Critical Temperature Time: 0 minutes 00:18:26.840 00:18:26.840 Number of Queues 00:18:26.840 ================ 00:18:26.840 Number of I/O Submission Queues: 127 00:18:26.840 Number of I/O Completion Queues: 127 00:18:26.840 00:18:26.840 Active Namespaces 00:18:26.840 ================= 00:18:26.840 Namespace ID:1 00:18:26.840 Error Recovery Timeout: Unlimited 00:18:26.840 Command Set Identifier: NVM (00h) 00:18:26.840 Deallocate: Supported 00:18:26.840 Deallocated/Unwritten Error: Not Supported 00:18:26.840 Deallocated Read Value: Unknown 00:18:26.840 Deallocate in Write Zeroes: Not Supported 00:18:26.840 Deallocated Guard Field: 0xFFFF 00:18:26.840 Flush: Supported 00:18:26.840 Reservation: Supported 00:18:26.840 Namespace Sharing Capabilities: Multiple Controllers 00:18:26.840 Size (in LBAs): 131072 (0GiB) 00:18:26.840 Capacity (in LBAs): 131072 (0GiB) 00:18:26.840 Utilization (in LBAs): 131072 (0GiB) 00:18:26.840 NGUID: EB5E63C8EB934AF49721969D0501741D 00:18:26.840 UUID: eb5e63c8-eb93-4af4-9721-969d0501741d 00:18:26.840 Thin Provisioning: Not Supported 00:18:26.840 Per-NS Atomic Units: Yes 00:18:26.840 Atomic Boundary Size (Normal): 0 00:18:26.840 Atomic Boundary Size (PFail): 0 00:18:26.840 Atomic Boundary Offset: 0 00:18:26.840 Maximum Single Source Range Length: 65535 00:18:26.840 Maximum Copy Length: 65535 00:18:26.840 Maximum Source Range Count: 1 00:18:26.840 NGUID/EUI64 Never Reused: No 00:18:26.840 Namespace Write Protected: No 00:18:26.840 Number of LBA Formats: 1 00:18:26.840 Current LBA Format: LBA Format #00 00:18:26.840 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:26.840 00:18:26.840 01:29:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:26.840 [2024-10-13 01:29:12.411543] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:32.103 Initializing NVMe Controllers 00:18:32.103 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 
00:18:32.103 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:32.103 Initialization complete. Launching workers. 00:18:32.103 ======================================================== 00:18:32.103 Latency(us) 00:18:32.103 Device Information : IOPS MiB/s Average min max 00:18:32.103 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34605.32 135.18 3697.70 1146.96 7399.02 00:18:32.103 ======================================================== 00:18:32.103 Total : 34605.32 135.18 3697.70 1146.96 7399.02 00:18:32.103 00:18:32.103 [2024-10-13 01:29:17.516841] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:32.103 01:29:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:32.361 [2024-10-13 01:29:17.762532] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:37.637 Initializing NVMe Controllers 00:18:37.637 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:37.637 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:37.637 Initialization complete. Launching workers. 00:18:37.637 ======================================================== 00:18:37.637 Latency(us) 00:18:37.637 Device Information : IOPS MiB/s Average min max 00:18:37.637 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31395.20 122.64 4076.82 1206.40 8995.33 00:18:37.637 ======================================================== 00:18:37.637 Total : 31395.20 122.64 4076.82 1206.40 8995.33 00:18:37.637 00:18:37.637 [2024-10-13 01:29:22.783686] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:37.637 01:29:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:37.637 [2024-10-13 01:29:22.985503] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:42.962 [2024-10-13 01:29:28.126617] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:42.962 Initializing NVMe Controllers 00:18:42.962 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:42.962 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:42.962 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:18:42.962 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:18:42.962 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:18:42.962 Initialization complete. Launching workers. 
00:18:42.962 Starting thread on core 2 00:18:42.962 Starting thread on core 3 00:18:42.962 Starting thread on core 1 00:18:42.962 01:29:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:18:42.962 [2024-10-13 01:29:28.438007] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:46.242 [2024-10-13 01:29:31.501813] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:46.242 Initializing NVMe Controllers 00:18:46.242 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:46.242 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:46.242 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:18:46.242 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:18:46.242 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:18:46.242 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:18:46.242 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:46.242 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:46.242 Initialization complete. Launching workers. 00:18:46.242 Starting thread on core 1 with urgent priority queue 00:18:46.242 Starting thread on core 2 with urgent priority queue 00:18:46.242 Starting thread on core 3 with urgent priority queue 00:18:46.242 Starting thread on core 0 with urgent priority queue 00:18:46.242 SPDK bdev Controller (SPDK2 ) core 0: 3126.00 IO/s 31.99 secs/100000 ios 00:18:46.242 SPDK bdev Controller (SPDK2 ) core 1: 3074.67 IO/s 32.52 secs/100000 ios 00:18:46.242 SPDK bdev Controller (SPDK2 ) core 2: 3282.33 IO/s 30.47 secs/100000 ios 00:18:46.242 SPDK bdev Controller (SPDK2 ) core 3: 2417.33 IO/s 41.37 secs/100000 ios 00:18:46.242 ======================================================== 00:18:46.242 00:18:46.242 01:29:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:46.242 [2024-10-13 01:29:31.800983] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:46.242 Initializing NVMe Controllers 00:18:46.242 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:46.242 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:46.242 Namespace ID: 1 size: 0GB 00:18:46.242 Initialization complete. 00:18:46.242 INFO: using host memory buffer for IO 00:18:46.242 Hello world! 
00:18:46.242 [2024-10-13 01:29:31.811052] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:46.500 01:29:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:46.758 [2024-10-13 01:29:32.107307] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:47.692 Initializing NVMe Controllers 00:18:47.692 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:47.692 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:47.692 Initialization complete. Launching workers. 00:18:47.692 submit (in ns) avg, min, max = 7438.6, 3492.2, 4015644.4 00:18:47.692 complete (in ns) avg, min, max = 26113.7, 2043.3, 4074433.3 00:18:47.692 00:18:47.692 Submit histogram 00:18:47.692 ================ 00:18:47.692 Range in us Cumulative Count 00:18:47.692 3.484 - 3.508: 0.2017% ( 26) 00:18:47.692 3.508 - 3.532: 0.8222% ( 80) 00:18:47.692 3.532 - 3.556: 2.6681% ( 238) 00:18:47.692 3.556 - 3.579: 6.8021% ( 533) 00:18:47.692 3.579 - 3.603: 14.0464% ( 934) 00:18:47.692 3.603 - 3.627: 23.0047% ( 1155) 00:18:47.692 3.627 - 3.650: 32.7232% ( 1253) 00:18:47.692 3.650 - 3.674: 40.3863% ( 988) 00:18:47.692 3.674 - 3.698: 47.2427% ( 884) 00:18:47.692 3.698 - 3.721: 54.0060% ( 872) 00:18:47.692 3.721 - 3.745: 58.9157% ( 633) 00:18:47.692 3.745 - 3.769: 63.3522% ( 572) 00:18:47.692 3.769 - 3.793: 66.9045% ( 458) 00:18:47.692 3.793 - 3.816: 70.2086% ( 426) 00:18:47.692 3.816 - 3.840: 73.3576% ( 406) 00:18:47.692 3.840 - 3.864: 77.2435% ( 501) 00:18:47.692 3.864 - 3.887: 80.7958% ( 458) 00:18:47.692 3.887 - 3.911: 83.7043% ( 375) 00:18:47.692 3.911 - 3.935: 85.6589% ( 252) 00:18:47.692 3.935 - 3.959: 87.5747% ( 247) 00:18:47.692 3.959 - 3.982: 89.1957% ( 209) 00:18:47.692 3.982 - 4.006: 90.7547% ( 201) 00:18:47.692 4.006 - 4.030: 91.8405% ( 140) 00:18:47.692 4.030 - 4.053: 92.7325% ( 115) 00:18:47.692 4.053 - 4.077: 93.3685% ( 82) 00:18:47.692 4.077 - 4.101: 94.0821% ( 92) 00:18:47.692 4.101 - 4.124: 94.5785% ( 64) 00:18:47.692 4.124 - 4.148: 95.0206% ( 57) 00:18:47.692 4.148 - 4.172: 95.3075% ( 37) 00:18:47.692 4.172 - 4.196: 95.5402% ( 30) 00:18:47.692 4.196 - 4.219: 95.8039% ( 34) 00:18:47.692 4.219 - 4.243: 95.9203% ( 15) 00:18:47.692 4.243 - 4.267: 96.0444% ( 16) 00:18:47.692 4.267 - 4.290: 96.1530% ( 14) 00:18:47.692 4.290 - 4.314: 96.2150% ( 8) 00:18:47.692 4.314 - 4.338: 96.3624% ( 19) 00:18:47.692 4.338 - 4.361: 96.4322% ( 9) 00:18:47.692 4.361 - 4.385: 96.5097% ( 10) 00:18:47.692 4.385 - 4.409: 96.5795% ( 9) 00:18:47.692 4.409 - 4.433: 96.5951% ( 2) 00:18:47.692 4.433 - 4.456: 96.6338% ( 5) 00:18:47.692 4.456 - 4.480: 96.6493% ( 2) 00:18:47.692 4.480 - 4.504: 96.6726% ( 3) 00:18:47.692 4.504 - 4.527: 96.6881% ( 2) 00:18:47.692 4.551 - 4.575: 96.6959% ( 1) 00:18:47.692 4.575 - 4.599: 96.7036% ( 1) 00:18:47.692 4.599 - 4.622: 96.7347% ( 4) 00:18:47.692 4.670 - 4.693: 96.7424% ( 1) 00:18:47.692 4.693 - 4.717: 96.7579% ( 2) 00:18:47.692 4.717 - 4.741: 96.7657% ( 1) 00:18:47.692 4.741 - 4.764: 96.7967% ( 4) 00:18:47.692 4.764 - 4.788: 96.8045% ( 1) 00:18:47.692 4.788 - 4.812: 96.8200% ( 2) 00:18:47.692 4.812 - 4.836: 96.8355% ( 2) 00:18:47.692 4.836 - 4.859: 96.8820% ( 6) 00:18:47.692 4.859 - 4.883: 96.9286% ( 6) 00:18:47.692 4.883 - 4.907: 96.9751% ( 6) 00:18:47.692 4.907 - 
4.930: 97.0449% ( 9) 00:18:47.692 4.930 - 4.954: 97.0837% ( 5) 00:18:47.692 4.954 - 4.978: 97.1613% ( 10) 00:18:47.692 4.978 - 5.001: 97.2000% ( 5) 00:18:47.692 5.001 - 5.025: 97.2698% ( 9) 00:18:47.692 5.025 - 5.049: 97.2776% ( 1) 00:18:47.692 5.049 - 5.073: 97.3552% ( 10) 00:18:47.692 5.073 - 5.096: 97.4250% ( 9) 00:18:47.692 5.096 - 5.120: 97.5335% ( 14) 00:18:47.692 5.120 - 5.144: 97.5723% ( 5) 00:18:47.692 5.144 - 5.167: 97.6189% ( 6) 00:18:47.692 5.167 - 5.191: 97.6266% ( 1) 00:18:47.692 5.191 - 5.215: 97.6576% ( 4) 00:18:47.692 5.215 - 5.239: 97.7042% ( 6) 00:18:47.692 5.239 - 5.262: 97.7430% ( 5) 00:18:47.692 5.262 - 5.286: 97.7585% ( 2) 00:18:47.692 5.286 - 5.310: 97.7817% ( 3) 00:18:47.692 5.310 - 5.333: 97.8205% ( 5) 00:18:47.692 5.333 - 5.357: 97.8283% ( 1) 00:18:47.692 5.357 - 5.381: 97.8360% ( 1) 00:18:47.692 5.381 - 5.404: 97.8515% ( 2) 00:18:47.692 5.404 - 5.428: 97.8826% ( 4) 00:18:47.692 5.428 - 5.452: 97.8903% ( 1) 00:18:47.692 5.452 - 5.476: 97.8981% ( 1) 00:18:47.692 5.476 - 5.499: 97.9136% ( 2) 00:18:47.692 5.547 - 5.570: 97.9214% ( 1) 00:18:47.692 5.570 - 5.594: 97.9291% ( 1) 00:18:47.692 5.618 - 5.641: 97.9446% ( 2) 00:18:47.692 5.689 - 5.713: 97.9524% ( 1) 00:18:47.692 5.760 - 5.784: 97.9679% ( 2) 00:18:47.692 5.807 - 5.831: 97.9912% ( 3) 00:18:47.692 5.831 - 5.855: 97.9989% ( 1) 00:18:47.692 5.902 - 5.926: 98.0067% ( 1) 00:18:47.692 5.926 - 5.950: 98.0144% ( 1) 00:18:47.692 6.210 - 6.258: 98.0455% ( 4) 00:18:47.692 6.258 - 6.305: 98.0532% ( 1) 00:18:47.692 6.400 - 6.447: 98.0610% ( 1) 00:18:47.692 6.495 - 6.542: 98.0687% ( 1) 00:18:47.692 6.542 - 6.590: 98.0765% ( 1) 00:18:47.692 6.590 - 6.637: 98.0842% ( 1) 00:18:47.692 6.779 - 6.827: 98.0920% ( 1) 00:18:47.692 6.827 - 6.874: 98.0997% ( 1) 00:18:47.692 6.874 - 6.921: 98.1153% ( 2) 00:18:47.692 6.921 - 6.969: 98.1385% ( 3) 00:18:47.692 6.969 - 7.016: 98.1463% ( 1) 00:18:47.692 7.016 - 7.064: 98.1618% ( 2) 00:18:47.692 7.064 - 7.111: 98.1928% ( 4) 00:18:47.692 7.159 - 7.206: 98.2006% ( 1) 00:18:47.692 7.206 - 7.253: 98.2083% ( 1) 00:18:47.692 7.301 - 7.348: 98.2238% ( 2) 00:18:47.692 7.348 - 7.396: 98.2394% ( 2) 00:18:47.692 7.443 - 7.490: 98.2549% ( 2) 00:18:47.692 7.490 - 7.538: 98.2626% ( 1) 00:18:47.692 7.585 - 7.633: 98.2704% ( 1) 00:18:47.692 7.633 - 7.680: 98.2859% ( 2) 00:18:47.692 7.680 - 7.727: 98.2936% ( 1) 00:18:47.692 7.727 - 7.775: 98.3169% ( 3) 00:18:47.692 7.822 - 7.870: 98.3247% ( 1) 00:18:47.692 8.012 - 8.059: 98.3324% ( 1) 00:18:47.692 8.107 - 8.154: 98.3402% ( 1) 00:18:47.692 8.154 - 8.201: 98.3479% ( 1) 00:18:47.692 8.201 - 8.249: 98.3557% ( 1) 00:18:47.692 8.296 - 8.344: 98.3790% ( 3) 00:18:47.692 8.344 - 8.391: 98.3867% ( 1) 00:18:47.692 8.439 - 8.486: 98.4022% ( 2) 00:18:47.692 8.533 - 8.581: 98.4100% ( 1) 00:18:47.692 8.628 - 8.676: 98.4177% ( 1) 00:18:47.692 8.723 - 8.770: 98.4255% ( 1) 00:18:47.692 8.770 - 8.818: 98.4410% ( 2) 00:18:47.692 8.865 - 8.913: 98.4565% ( 2) 00:18:47.692 8.913 - 8.960: 98.4643% ( 1) 00:18:47.692 8.960 - 9.007: 98.4720% ( 1) 00:18:47.692 9.007 - 9.055: 98.4798% ( 1) 00:18:47.692 9.055 - 9.102: 98.4876% ( 1) 00:18:47.692 9.102 - 9.150: 98.4953% ( 1) 00:18:47.692 9.150 - 9.197: 98.5031% ( 1) 00:18:47.692 9.197 - 9.244: 98.5186% ( 2) 00:18:47.692 9.244 - 9.292: 98.5263% ( 1) 00:18:47.692 9.292 - 9.339: 98.5418% ( 2) 00:18:47.692 9.339 - 9.387: 98.5496% ( 1) 00:18:47.692 9.387 - 9.434: 98.5729% ( 3) 00:18:47.692 9.434 - 9.481: 98.5806% ( 1) 00:18:47.692 9.481 - 9.529: 98.5961% ( 2) 00:18:47.692 9.529 - 9.576: 98.6194% ( 3) 00:18:47.692 9.576 - 9.624: 98.6272% ( 1) 
00:18:47.692 9.671 - 9.719: 98.6349% ( 1) 00:18:47.692 9.719 - 9.766: 98.6504% ( 2) 00:18:47.692 9.766 - 9.813: 98.6659% ( 2) 00:18:47.692 9.813 - 9.861: 98.6737% ( 1) 00:18:47.692 9.861 - 9.908: 98.6892% ( 2) 00:18:47.692 9.908 - 9.956: 98.6970% ( 1) 00:18:47.692 9.956 - 10.003: 98.7047% ( 1) 00:18:47.692 10.003 - 10.050: 98.7125% ( 1) 00:18:47.692 10.098 - 10.145: 98.7202% ( 1) 00:18:47.692 10.193 - 10.240: 98.7357% ( 2) 00:18:47.692 10.430 - 10.477: 98.7435% ( 1) 00:18:47.692 10.572 - 10.619: 98.7513% ( 1) 00:18:47.692 10.619 - 10.667: 98.7668% ( 2) 00:18:47.692 10.667 - 10.714: 98.7823% ( 2) 00:18:47.692 10.761 - 10.809: 98.7900% ( 1) 00:18:47.692 10.904 - 10.951: 98.7978% ( 1) 00:18:47.692 11.046 - 11.093: 98.8211% ( 3) 00:18:47.692 11.093 - 11.141: 98.8288% ( 1) 00:18:47.692 11.141 - 11.188: 98.8443% ( 2) 00:18:47.692 11.188 - 11.236: 98.8521% ( 1) 00:18:47.692 11.330 - 11.378: 98.8598% ( 1) 00:18:47.692 11.425 - 11.473: 98.8676% ( 1) 00:18:47.692 11.473 - 11.520: 98.8754% ( 1) 00:18:47.692 11.520 - 11.567: 98.8909% ( 2) 00:18:47.692 11.615 - 11.662: 98.9064% ( 2) 00:18:47.692 11.662 - 11.710: 98.9141% ( 1) 00:18:47.692 11.899 - 11.947: 98.9297% ( 2) 00:18:47.692 11.947 - 11.994: 98.9374% ( 1) 00:18:47.692 12.231 - 12.326: 98.9452% ( 1) 00:18:47.693 12.326 - 12.421: 98.9607% ( 2) 00:18:47.693 12.421 - 12.516: 98.9684% ( 1) 00:18:47.693 12.516 - 12.610: 98.9762% ( 1) 00:18:47.693 12.895 - 12.990: 98.9839% ( 1) 00:18:47.693 13.084 - 13.179: 98.9917% ( 1) 00:18:47.693 13.369 - 13.464: 99.0150% ( 3) 00:18:47.693 13.653 - 13.748: 99.0305% ( 2) 00:18:47.693 13.748 - 13.843: 99.0460% ( 2) 00:18:47.693 13.843 - 13.938: 99.0538% ( 1) 00:18:47.693 13.938 - 14.033: 99.0615% ( 1) 00:18:47.693 14.127 - 14.222: 99.0693% ( 1) 00:18:47.693 14.222 - 14.317: 99.0925% ( 3) 00:18:47.693 14.601 - 14.696: 99.1003% ( 1) 00:18:47.693 14.886 - 14.981: 99.1080% ( 1) 00:18:47.693 15.170 - 15.265: 99.1158% ( 1) 00:18:47.693 15.360 - 15.455: 99.1236% ( 1) 00:18:47.693 16.972 - 17.067: 99.1313% ( 1) 00:18:47.693 17.161 - 17.256: 99.1546% ( 3) 00:18:47.693 17.256 - 17.351: 99.1856% ( 4) 00:18:47.693 17.351 - 17.446: 99.2011% ( 2) 00:18:47.693 17.446 - 17.541: 99.2399% ( 5) 00:18:47.693 17.541 - 17.636: 99.2709% ( 4) 00:18:47.693 17.636 - 17.730: 99.3175% ( 6) 00:18:47.693 17.730 - 17.825: 99.3407% ( 3) 00:18:47.693 17.825 - 17.920: 99.3795% ( 5) 00:18:47.693 17.920 - 18.015: 99.4416% ( 8) 00:18:47.693 18.015 - 18.110: 99.4803% ( 5) 00:18:47.693 18.110 - 18.204: 99.5114% ( 4) 00:18:47.693 18.204 - 18.299: 99.5657% ( 7) 00:18:47.693 18.299 - 18.394: 99.6122% ( 6) 00:18:47.693 18.394 - 18.489: 99.6355% ( 3) 00:18:47.693 18.489 - 18.584: 99.6820% ( 6) 00:18:47.693 18.584 - 18.679: 99.7053% ( 3) 00:18:47.693 18.679 - 18.773: 99.7285% ( 3) 00:18:47.693 18.773 - 18.868: 99.7596% ( 4) 00:18:47.693 18.868 - 18.963: 99.7828% ( 3) 00:18:47.693 18.963 - 19.058: 99.7983% ( 2) 00:18:47.693 19.247 - 19.342: 99.8139% ( 2) 00:18:47.693 19.437 - 19.532: 99.8216% ( 1) 00:18:47.693 19.532 - 19.627: 99.8294% ( 1) 00:18:47.693 23.135 - 23.230: 99.8371% ( 1) 00:18:47.693 23.419 - 23.514: 99.8449% ( 1) 00:18:47.693 23.704 - 23.799: 99.8526% ( 1) 00:18:47.693 23.893 - 23.988: 99.8604% ( 1) 00:18:47.693 25.221 - 25.410: 99.8759% ( 2) 00:18:47.693 27.117 - 27.307: 99.8837% ( 1) 00:18:47.693 28.255 - 28.444: 99.8992% ( 2) 00:18:47.693 32.996 - 33.185: 99.9069% ( 1) 00:18:47.693 1019.449 - 1025.517: 99.9147% ( 1) 00:18:47.693 3980.705 - 4004.978: 99.9767% ( 8) 00:18:47.693 4004.978 - 4029.250: 100.0000% ( 3) 00:18:47.693 00:18:47.693 Complete 
histogram 00:18:47.693 ================== 00:18:47.693 Range in us Cumulative Count 00:18:47.693 2.039 - 2.050: 1.2565% ( 162) 00:18:47.693 2.050 - 2.062: 35.1121% ( 4365) 00:18:47.693 2.062 - 2.074: 48.3130% ( 1702) 00:18:47.693 2.074 - 2.086: 50.1435% ( 236) 00:18:47.693 2.086 - 2.098: 55.0686% ( 635) 00:18:47.693 2.098 - 2.110: 57.1628% ( 270) 00:18:47.693 2.110 - 2.121: 63.1738% ( 775) 00:18:47.693 2.121 - 2.133: 78.1277% ( 1928) 00:18:47.693 2.133 - 2.145: 81.0595% ( 378) 00:18:47.693 2.145 - 2.157: 82.8512% ( 231) 00:18:47.693 2.157 - 2.169: 85.2788% ( 313) 00:18:47.693 2.169 - 2.181: 86.0777% ( 103) 00:18:47.693 2.181 - 2.193: 87.6677% ( 205) 00:18:47.693 2.193 - 2.204: 90.3514% ( 346) 00:18:47.693 2.204 - 2.216: 92.6161% ( 292) 00:18:47.693 2.216 - 2.228: 93.5081% ( 115) 00:18:47.693 2.228 - 2.240: 94.0665% ( 72) 00:18:47.693 2.240 - 2.252: 94.2992% ( 30) 00:18:47.693 2.252 - 2.264: 94.5086% ( 27) 00:18:47.693 2.264 - 2.276: 94.7801% ( 35) 00:18:47.693 2.276 - 2.287: 95.2610% ( 62) 00:18:47.693 2.287 - 2.299: 95.4937% ( 30) 00:18:47.693 2.299 - 2.311: 95.5247% ( 4) 00:18:47.693 2.311 - 2.323: 95.5945% ( 9) 00:18:47.693 2.323 - 2.335: 95.6333% ( 5) 00:18:47.693 2.335 - 2.347: 95.6566% ( 3) 00:18:47.693 2.347 - 2.359: 95.7884% ( 17) 00:18:47.693 2.359 - 2.370: 95.9280% ( 18) 00:18:47.693 2.370 - 2.382: 96.1297% ( 26) 00:18:47.693 2.382 - 2.394: 96.3081% ( 23) 00:18:47.693 2.394 - 2.406: 96.5873% ( 36) 00:18:47.693 2.406 - 2.418: 96.8355% ( 32) 00:18:47.693 2.418 - 2.430: 96.9363% ( 13) 00:18:47.693 2.430 - 2.441: 97.1380% ( 26) 00:18:47.693 2.441 - 2.453: 97.2698% ( 17) 00:18:47.693 2.453 - 2.465: 97.4172% ( 19) 00:18:47.693 2.465 - 2.477: 97.5335% ( 15) 00:18:47.693 2.477 - 2.489: 97.6887% ( 20) 00:18:47.693 2.489 - 2.501: 97.7817% ( 12) 00:18:47.693 2.501 - 2.513: 97.8826% ( 13) 00:18:47.693 2.513 - 2.524: 97.9524% ( 9) 00:18:47.693 2.536 - 2.548: 97.9989% ( 6) 00:18:47.693 2.548 - 2.560: 98.0222% ( 3) 00:18:47.693 2.560 - 2.572: 98.0765% ( 7) 00:18:47.693 2.572 - 2.584: 98.0920% ( 2) 00:18:47.693 2.584 - 2.596: 98.1075% ( 2) 00:18:47.693 2.596 - 2.607: 98.1230% ( 2) 00:18:47.693 2.607 - 2.619: 98.1308% ( 1) 00:18:47.693 2.619 - 2.631: 98.1463% ( 2) 00:18:47.693 2.631 - 2.643: 98.1618% ( 2) 00:18:47.693 2.667 - 2.679: 98.1695% ( 1) 00:18:47.693 2.679 - 2.690: 98.1773% ( 1) 00:18:47.693 2.702 - 2.714: 98.1851% ( 1) 00:18:47.693 2.714 - 2.726: 98.1928% ( 1) 00:18:47.693 2.738 - 2.750: 98.2006% ( 1) 00:18:47.693 2.761 - 2.773: 98.2083% ( 1) 00:18:47.693 2.785 - 2.797: 98.2238% ( 2) 00:18:47.693 2.833 - 2.844: 98.2316% ( 1) 00:18:47.693 2.868 - 2.880: 98.2549% ( 3) 00:18:47.693 2.892 - 2.904: 98.2626% ( 1) 00:18:47.693 2.904 - 2.916: 98.2704% ( 1) 00:18:47.693 2.927 - 2.939: 98.2781% ( 1) 00:18:47.693 2.939 - 2.951: 98.2859% ( 1) 00:18:47.693 2.951 - 2.963: 98.2936% ( 1) 00:18:47.693 2.987 - 2.999: 98.3014% ( 1) 00:18:47.693 2.999 - 3.010: 98.3092% ( 1) 00:18:47.693 3.010 - 3.022: 98.3247% ( 2) 00:18:47.693 3.022 - 3.034: 98.3324% ( 1) 00:18:47.693 3.034 - 3.058: 98.3557% ( 3) 00:18:47.693 3.081 - 3.105: 98.3790% ( 3) 00:18:47.693 3.105 - 3.129: 98.4100% ( 4) 00:18:47.693 3.129 - 3.153: 98.4410% ( 4) 00:18:47.693 3.176 - 3.200: 98.4488% ( 1) 00:18:47.693 3.200 - 3.224: 98.4565% ( 1) 00:18:47.693 3.224 - 3.247: 98.4643% ( 1) 00:18:47.693 3.390 - 3.413: 98.4798% ( 2) 00:18:47.693 3.437 - 3.461: 98.4953% ( 2) 00:18:47.693 3.484 - 3.508: 98.5108% ( 2) 00:18:47.693 3.532 - 3.556: 98.5263% ( 2) 00:18:47.693 3.556 - 3.579: 98.5418% ( 2) 00:18:47.693 3.579 - 3.603: 98.5496% ( 1) 00:18:47.693 
3.674 - 3.698: 98.5574% ( 1) 00:18:47.693 3.698 - 3.721: 98.5806% ( 3) 00:18:47.693 3.745 - 3.769: 98.5961% ( 2) 00:18:47.693 3.769 - 3.793: 98.6039% ( 1) 00:18:47.693 3.816 - 3.840: 98.6116% ( 1) 00:18:47.693 3.911 - 3.935: 98.6194% ( 1) 00:18:47.693 3.959 - 3.982: 98.6272% ( 1) 00:18:47.693 3.982 - 4.006: 98.6349% ( 1) 00:18:47.693 4.006 - 4.030: 98.6582% ( 3) 00:18:47.693 4.077 - 4.101: 98.6659% ( 1) 00:18:47.693 4.101 - 4.124: 98.6815% ( 2) 00:18:47.693 4.172 - 4.196: 98.6892% ( 1) 00:18:47.693 4.551 - 4.575: 98.6970% ( 1) 00:18:47.693 5.523 - 5.547: 98.7047% ( 1) 00:18:47.693 6.258 - 6.305: 98.7125% ( 1) 00:18:47.693 6.400 - 6.447: 98.7202% ( 1) 00:18:47.693 6.447 - 6.495: 98.7357% ( 2) 00:18:47.693 6.590 - 6.637: 98.7435% ( 1) 00:18:47.693 6.637 - 6.684: 98.7590% ( 2) 00:18:47.693 6.921 - 6.969: 98.7668% ( 1) 00:18:47.693 6.969 - 7.016: 98.7745% ( 1) 00:18:47.693 7.111 - 7.159: 98.7823% ( 1) 00:18:47.693 7.301 - 7.348: 98.7900% ( 1) 00:18:47.693 7.348 - 7.396: 98.7978% ( 1) 00:18:47.693 7.396 - 7.443: 98.8056% ( 1) 00:18:47.693 7.633 - 7.680: 98.8211% ( 2) 00:18:47.693 8.249 - 8.296: 98.8288% ( 1) 00:18:47.693 8.770 - 8.818: 98.8366% ( 1) 00:18:47.693 11.141 - 11.188: 98.8443% ( 1) 00:18:47.693 11.473 - 11.520: 98.8521% ( 1) 00:18:47.693 11.757 - 11.804: 98.8598% ( 1) 00:18:47.693 15.360 - 15.455: 98.8676% ( 1) 00:18:47.693 15.455 - 15.550: 98.8754% ( 1) 00:18:47.693 15.644 - 15.739: 98.8986% ( 3) 00:18:47.693 15.739 - 15.834: 98.9141% ( 2) 00:18:47.693 15.834 - 15.929: 98.9219%[2024-10-13 01:29:33.208311] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:47.693 ( 1) 00:18:47.693 15.929 - 16.024: 98.9452% ( 3) 00:18:47.693 16.119 - 16.213: 98.9607% ( 2) 00:18:47.693 16.213 - 16.308: 98.9762% ( 2) 00:18:47.693 16.308 - 16.403: 99.0072% ( 4) 00:18:47.693 16.403 - 16.498: 99.0382% ( 4) 00:18:47.693 16.498 - 16.593: 99.0538% ( 2) 00:18:47.693 16.593 - 16.687: 99.0848% ( 4) 00:18:47.693 16.687 - 16.782: 99.1236% ( 5) 00:18:47.694 16.782 - 16.877: 99.1623% ( 5) 00:18:47.694 16.877 - 16.972: 99.2399% ( 10) 00:18:47.694 16.972 - 17.067: 99.2864% ( 6) 00:18:47.694 17.351 - 17.446: 99.2942% ( 1) 00:18:47.694 17.541 - 17.636: 99.3019% ( 1) 00:18:47.694 17.730 - 17.825: 99.3097% ( 1) 00:18:47.694 18.110 - 18.204: 99.3330% ( 3) 00:18:47.694 18.204 - 18.299: 99.3407% ( 1) 00:18:47.694 18.394 - 18.489: 99.3640% ( 3) 00:18:47.694 18.679 - 18.773: 99.3718% ( 1) 00:18:47.694 18.868 - 18.963: 99.3795% ( 1) 00:18:47.694 19.627 - 19.721: 99.3873% ( 1) 00:18:47.694 24.652 - 24.841: 99.3950% ( 1) 00:18:47.694 27.496 - 27.686: 99.4028% ( 1) 00:18:47.694 3980.705 - 4004.978: 99.8371% ( 56) 00:18:47.694 4004.978 - 4029.250: 99.9922% ( 20) 00:18:47.694 4053.523 - 4077.796: 100.0000% ( 1) 00:18:47.694 00:18:47.694 01:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:18:47.694 01:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:47.694 01:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:18:47.694 01:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:18:47.694 01:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_get_subsystems 00:18:48.259 [ 00:18:48.259 { 00:18:48.259 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:48.259 "subtype": "Discovery", 00:18:48.259 "listen_addresses": [], 00:18:48.259 "allow_any_host": true, 00:18:48.259 "hosts": [] 00:18:48.259 }, 00:18:48.259 { 00:18:48.259 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:48.259 "subtype": "NVMe", 00:18:48.259 "listen_addresses": [ 00:18:48.259 { 00:18:48.259 "trtype": "VFIOUSER", 00:18:48.259 "adrfam": "IPv4", 00:18:48.259 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:48.259 "trsvcid": "0" 00:18:48.259 } 00:18:48.259 ], 00:18:48.259 "allow_any_host": true, 00:18:48.259 "hosts": [], 00:18:48.259 "serial_number": "SPDK1", 00:18:48.259 "model_number": "SPDK bdev Controller", 00:18:48.259 "max_namespaces": 32, 00:18:48.259 "min_cntlid": 1, 00:18:48.259 "max_cntlid": 65519, 00:18:48.259 "namespaces": [ 00:18:48.259 { 00:18:48.259 "nsid": 1, 00:18:48.259 "bdev_name": "Malloc1", 00:18:48.259 "name": "Malloc1", 00:18:48.259 "nguid": "B62CEC0E16A34DAFAF958B0C29BC506E", 00:18:48.260 "uuid": "b62cec0e-16a3-4daf-af95-8b0c29bc506e" 00:18:48.260 }, 00:18:48.260 { 00:18:48.260 "nsid": 2, 00:18:48.260 "bdev_name": "Malloc3", 00:18:48.260 "name": "Malloc3", 00:18:48.260 "nguid": "379AB7E24EBD42F08AEAED55BD8BE65C", 00:18:48.260 "uuid": "379ab7e2-4ebd-42f0-8aea-ed55bd8be65c" 00:18:48.260 } 00:18:48.260 ] 00:18:48.260 }, 00:18:48.260 { 00:18:48.260 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:48.260 "subtype": "NVMe", 00:18:48.260 "listen_addresses": [ 00:18:48.260 { 00:18:48.260 "trtype": "VFIOUSER", 00:18:48.260 "adrfam": "IPv4", 00:18:48.260 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:48.260 "trsvcid": "0" 00:18:48.260 } 00:18:48.260 ], 00:18:48.260 "allow_any_host": true, 00:18:48.260 "hosts": [], 00:18:48.260 "serial_number": "SPDK2", 00:18:48.260 "model_number": "SPDK bdev Controller", 00:18:48.260 "max_namespaces": 32, 00:18:48.260 "min_cntlid": 1, 00:18:48.260 "max_cntlid": 65519, 00:18:48.260 "namespaces": [ 00:18:48.260 { 00:18:48.260 "nsid": 1, 00:18:48.260 "bdev_name": "Malloc2", 00:18:48.260 "name": "Malloc2", 00:18:48.260 "nguid": "EB5E63C8EB934AF49721969D0501741D", 00:18:48.260 "uuid": "eb5e63c8-eb93-4af4-9721-969d0501741d" 00:18:48.260 } 00:18:48.260 ] 00:18:48.260 } 00:18:48.260 ] 00:18:48.260 01:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:48.260 01:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1595910 00:18:48.260 01:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:18:48.260 01:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:48.260 01:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:18:48.260 01:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:48.260 01:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:48.260 01:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:18:48.260 01:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:48.260 01:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:18:48.260 [2024-10-13 01:29:33.723016] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:48.518 Malloc4 00:18:48.518 01:29:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:18:48.776 [2024-10-13 01:29:34.155292] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:48.776 01:29:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:48.776 Asynchronous Event Request test 00:18:48.776 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:48.776 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:48.776 Registering asynchronous event callbacks... 00:18:48.776 Starting namespace attribute notice tests for all controllers... 00:18:48.776 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:48.776 aer_cb - Changed Namespace 00:18:48.776 Cleaning up... 00:18:49.034 [ 00:18:49.034 { 00:18:49.034 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:49.034 "subtype": "Discovery", 00:18:49.034 "listen_addresses": [], 00:18:49.034 "allow_any_host": true, 00:18:49.034 "hosts": [] 00:18:49.034 }, 00:18:49.034 { 00:18:49.034 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:49.034 "subtype": "NVMe", 00:18:49.034 "listen_addresses": [ 00:18:49.034 { 00:18:49.034 "trtype": "VFIOUSER", 00:18:49.034 "adrfam": "IPv4", 00:18:49.034 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:49.034 "trsvcid": "0" 00:18:49.034 } 00:18:49.034 ], 00:18:49.034 "allow_any_host": true, 00:18:49.034 "hosts": [], 00:18:49.034 "serial_number": "SPDK1", 00:18:49.034 "model_number": "SPDK bdev Controller", 00:18:49.034 "max_namespaces": 32, 00:18:49.035 "min_cntlid": 1, 00:18:49.035 "max_cntlid": 65519, 00:18:49.035 "namespaces": [ 00:18:49.035 { 00:18:49.035 "nsid": 1, 00:18:49.035 "bdev_name": "Malloc1", 00:18:49.035 "name": "Malloc1", 00:18:49.035 "nguid": "B62CEC0E16A34DAFAF958B0C29BC506E", 00:18:49.035 "uuid": "b62cec0e-16a3-4daf-af95-8b0c29bc506e" 00:18:49.035 }, 00:18:49.035 { 00:18:49.035 "nsid": 2, 00:18:49.035 "bdev_name": "Malloc3", 00:18:49.035 "name": "Malloc3", 00:18:49.035 "nguid": "379AB7E24EBD42F08AEAED55BD8BE65C", 00:18:49.035 "uuid": "379ab7e2-4ebd-42f0-8aea-ed55bd8be65c" 00:18:49.035 } 00:18:49.035 ] 00:18:49.035 }, 00:18:49.035 { 00:18:49.035 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:49.035 "subtype": "NVMe", 00:18:49.035 "listen_addresses": [ 00:18:49.035 { 00:18:49.035 "trtype": "VFIOUSER", 00:18:49.035 "adrfam": "IPv4", 00:18:49.035 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:49.035 "trsvcid": "0" 00:18:49.035 } 00:18:49.035 ], 00:18:49.035 "allow_any_host": true, 00:18:49.035 "hosts": [], 00:18:49.035 "serial_number": "SPDK2", 00:18:49.035 "model_number": "SPDK bdev 
Controller", 00:18:49.035 "max_namespaces": 32, 00:18:49.035 "min_cntlid": 1, 00:18:49.035 "max_cntlid": 65519, 00:18:49.035 "namespaces": [ 00:18:49.035 { 00:18:49.035 "nsid": 1, 00:18:49.035 "bdev_name": "Malloc2", 00:18:49.035 "name": "Malloc2", 00:18:49.035 "nguid": "EB5E63C8EB934AF49721969D0501741D", 00:18:49.035 "uuid": "eb5e63c8-eb93-4af4-9721-969d0501741d" 00:18:49.035 }, 00:18:49.035 { 00:18:49.035 "nsid": 2, 00:18:49.035 "bdev_name": "Malloc4", 00:18:49.035 "name": "Malloc4", 00:18:49.035 "nguid": "2DC5D09203A542318B206628EE6C7A80", 00:18:49.035 "uuid": "2dc5d092-03a5-4231-8b20-6628ee6c7a80" 00:18:49.035 } 00:18:49.035 ] 00:18:49.035 } 00:18:49.035 ] 00:18:49.035 01:29:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1595910 00:18:49.035 01:29:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:18:49.035 01:29:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1590294 00:18:49.035 01:29:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 1590294 ']' 00:18:49.035 01:29:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1590294 00:18:49.035 01:29:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:18:49.035 01:29:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:49.035 01:29:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1590294 00:18:49.035 01:29:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:49.035 01:29:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:49.035 01:29:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1590294' 00:18:49.035 killing process with pid 1590294 00:18:49.035 01:29:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1590294 00:18:49.035 01:29:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1590294 00:18:49.294 01:29:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:49.294 01:29:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:49.294 01:29:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:18:49.294 01:29:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:18:49.294 01:29:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:18:49.294 01:29:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1596089 00:18:49.294 01:29:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1596089' 00:18:49.294 Process pid: 1596089 00:18:49.294 01:29:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:18:49.294 01:29:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 
-- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:49.294 01:29:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1596089 00:18:49.294 01:29:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1596089 ']' 00:18:49.294 01:29:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.294 01:29:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:49.294 01:29:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.294 01:29:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:49.294 01:29:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:49.294 [2024-10-13 01:29:34.842212] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:18:49.294 [2024-10-13 01:29:34.843297] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:18:49.294 [2024-10-13 01:29:34.843358] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:49.553 [2024-10-13 01:29:34.907348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:49.553 [2024-10-13 01:29:34.956244] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:49.553 [2024-10-13 01:29:34.956309] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:49.553 [2024-10-13 01:29:34.956326] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:49.553 [2024-10-13 01:29:34.956339] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:49.553 [2024-10-13 01:29:34.956351] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:49.553 [2024-10-13 01:29:34.957977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:49.553 [2024-10-13 01:29:34.958047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:49.553 [2024-10-13 01:29:34.958146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:49.553 [2024-10-13 01:29:34.958149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.553 [2024-10-13 01:29:35.046610] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:18:49.553 [2024-10-13 01:29:35.046772] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:18:49.553 [2024-10-13 01:29:35.047068] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:18:49.553 [2024-10-13 01:29:35.047650] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:18:49.553 [2024-10-13 01:29:35.047926] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:18:49.553 01:29:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:49.553 01:29:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:18:49.553 01:29:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:50.931 01:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:18:50.931 01:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:50.931 01:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:50.931 01:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:50.931 01:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:50.931 01:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:51.190 Malloc1 00:18:51.190 01:29:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:51.757 01:29:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:52.015 01:29:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:52.273 01:29:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:52.273 01:29:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:52.273 01:29:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:52.532 Malloc2 00:18:52.532 01:29:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:52.788 01:29:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:53.046 01:29:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:53.303 01:29:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:18:53.303 01:29:38 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1596089 00:18:53.303 01:29:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 1596089 ']' 00:18:53.303 01:29:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1596089 00:18:53.303 01:29:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:18:53.303 01:29:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:53.304 01:29:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1596089 00:18:53.562 01:29:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:53.562 01:29:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:53.562 01:29:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1596089' 00:18:53.562 killing process with pid 1596089 00:18:53.562 01:29:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1596089 00:18:53.562 01:29:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1596089 00:18:53.820 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:53.820 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:53.820 00:18:53.820 real 0m54.085s 00:18:53.820 user 3m28.886s 00:18:53.820 sys 0m4.061s 00:18:53.820 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:53.821 ************************************ 00:18:53.821 END TEST nvmf_vfio_user 00:18:53.821 ************************************ 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:53.821 ************************************ 00:18:53.821 START TEST nvmf_vfio_user_nvme_compliance 00:18:53.821 ************************************ 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:53.821 * Looking for test storage... 
00:18:53.821 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lcov --version 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:53.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.821 --rc genhtml_branch_coverage=1 00:18:53.821 --rc genhtml_function_coverage=1 00:18:53.821 --rc genhtml_legend=1 00:18:53.821 --rc geninfo_all_blocks=1 00:18:53.821 --rc geninfo_unexecuted_blocks=1 00:18:53.821 00:18:53.821 ' 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:53.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.821 --rc genhtml_branch_coverage=1 00:18:53.821 --rc genhtml_function_coverage=1 00:18:53.821 --rc genhtml_legend=1 00:18:53.821 --rc geninfo_all_blocks=1 00:18:53.821 --rc geninfo_unexecuted_blocks=1 00:18:53.821 00:18:53.821 ' 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:53.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.821 --rc genhtml_branch_coverage=1 00:18:53.821 --rc genhtml_function_coverage=1 00:18:53.821 --rc genhtml_legend=1 00:18:53.821 --rc geninfo_all_blocks=1 00:18:53.821 --rc geninfo_unexecuted_blocks=1 00:18:53.821 00:18:53.821 ' 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:53.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:53.821 --rc genhtml_branch_coverage=1 00:18:53.821 --rc genhtml_function_coverage=1 00:18:53.821 --rc genhtml_legend=1 00:18:53.821 --rc geninfo_all_blocks=1 00:18:53.821 --rc 
geninfo_unexecuted_blocks=1 00:18:53.821 00:18:53.821 ' 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.821 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.822 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.822 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:18:53.822 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.822 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:18:53.822 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:53.822 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:53.822 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:53.822 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:53.822 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:18:53.822 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:53.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:53.822 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:53.822 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:53.822 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:53.822 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:53.822 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:53.822 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:18:53.822 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:18:53.822 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:18:53.822 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1596776 00:18:53.822 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:18:53.822 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1596776' 00:18:53.822 Process pid: 1596776 00:18:53.822 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:53.822 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1596776 00:18:53.822 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 1596776 ']' 00:18:53.822 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.822 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:53.822 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:53.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:53.822 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:53.822 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:54.080 [2024-10-13 01:29:39.435896] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
00:18:54.080 [2024-10-13 01:29:39.435992] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:54.080 [2024-10-13 01:29:39.492108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:54.080 [2024-10-13 01:29:39.537106] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:54.080 [2024-10-13 01:29:39.537164] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:54.080 [2024-10-13 01:29:39.537192] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:54.080 [2024-10-13 01:29:39.537203] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:54.080 [2024-10-13 01:29:39.537213] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:54.080 [2024-10-13 01:29:39.538562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:54.080 [2024-10-13 01:29:39.538632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:54.080 [2024-10-13 01:29:39.538629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:54.338 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:54.338 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:18:54.338 01:29:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:18:55.271 01:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:55.271 01:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:18:55.271 01:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:55.271 01:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.271 01:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:55.271 01:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.271 01:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:18:55.271 01:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:55.271 01:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.271 01:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:55.271 malloc0 00:18:55.271 01:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.271 01:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:18:55.271 01:29:40 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.271 01:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:55.271 01:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.271 01:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:55.271 01:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.271 01:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:55.271 01:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.271 01:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:55.271 01:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.271 01:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:55.271 01:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.271 01:29:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:18:55.529 00:18:55.529 00:18:55.529 CUnit - A unit testing framework for C - Version 2.1-3 00:18:55.529 http://cunit.sourceforge.net/ 00:18:55.529 00:18:55.529 00:18:55.529 Suite: nvme_compliance 00:18:55.529 Test: admin_identify_ctrlr_verify_dptr ...[2024-10-13 01:29:40.904046] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:55.529 [2024-10-13 01:29:40.905585] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:18:55.529 [2024-10-13 01:29:40.905612] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:18:55.529 [2024-10-13 01:29:40.905624] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:18:55.529 [2024-10-13 01:29:40.907058] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:55.529 passed 00:18:55.529 Test: admin_identify_ctrlr_verify_fused ...[2024-10-13 01:29:40.992660] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:55.529 [2024-10-13 01:29:40.995677] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:55.529 passed 00:18:55.529 Test: admin_identify_ns ...[2024-10-13 01:29:41.082985] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:55.786 [2024-10-13 01:29:41.142504] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:18:55.786 [2024-10-13 01:29:41.150498] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:18:55.786 [2024-10-13 01:29:41.171642] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:18:55.786 passed 00:18:55.786 Test: admin_get_features_mandatory_features ...[2024-10-13 01:29:41.255275] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:55.787 [2024-10-13 01:29:41.258297] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:55.787 passed 00:18:55.787 Test: admin_get_features_optional_features ...[2024-10-13 01:29:41.339797] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:55.787 [2024-10-13 01:29:41.342835] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:56.044 passed 00:18:56.044 Test: admin_set_features_number_of_queues ...[2024-10-13 01:29:41.428118] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:56.044 [2024-10-13 01:29:41.533595] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:56.044 passed 00:18:56.044 Test: admin_get_log_page_mandatory_logs ...[2024-10-13 01:29:41.616592] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:56.044 [2024-10-13 01:29:41.619612] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:56.301 passed 00:18:56.301 Test: admin_get_log_page_with_lpo ...[2024-10-13 01:29:41.703009] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:56.301 [2024-10-13 01:29:41.770490] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:18:56.301 [2024-10-13 01:29:41.783569] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:56.301 passed 00:18:56.301 Test: fabric_property_get ...[2024-10-13 01:29:41.865768] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:56.301 [2024-10-13 01:29:41.867050] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:18:56.301 [2024-10-13 01:29:41.868794] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:56.558 passed 00:18:56.558 Test: admin_delete_io_sq_use_admin_qid ...[2024-10-13 01:29:41.954352] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:56.558 [2024-10-13 01:29:41.955676] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:18:56.558 [2024-10-13 01:29:41.957370] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:56.558 passed 00:18:56.558 Test: admin_delete_io_sq_delete_sq_twice ...[2024-10-13 01:29:42.040609] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:56.558 [2024-10-13 01:29:42.124492] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:56.816 [2024-10-13 01:29:42.140494] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:56.816 [2024-10-13 01:29:42.145622] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:56.816 passed 00:18:56.816 Test: admin_delete_io_cq_use_admin_qid ...[2024-10-13 01:29:42.230329] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:56.816 [2024-10-13 01:29:42.231644] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:18:56.816 [2024-10-13 01:29:42.233351] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:18:56.816 passed 00:18:56.816 Test: admin_delete_io_cq_delete_cq_first ...[2024-10-13 01:29:42.317532] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:56.816 [2024-10-13 01:29:42.391484] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:57.073 [2024-10-13 01:29:42.415482] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:57.073 [2024-10-13 01:29:42.420586] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:57.073 passed 00:18:57.073 Test: admin_create_io_cq_verify_iv_pc ...[2024-10-13 01:29:42.506790] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:57.073 [2024-10-13 01:29:42.508099] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:18:57.073 [2024-10-13 01:29:42.508154] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:18:57.073 [2024-10-13 01:29:42.509815] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:57.073 passed 00:18:57.073 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-10-13 01:29:42.592000] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:57.331 [2024-10-13 01:29:42.683481] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:18:57.331 [2024-10-13 01:29:42.691481] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:18:57.331 [2024-10-13 01:29:42.699481] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:18:57.331 [2024-10-13 01:29:42.707479] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:18:57.331 [2024-10-13 01:29:42.736600] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:57.331 passed 00:18:57.331 Test: admin_create_io_sq_verify_pc ...[2024-10-13 01:29:42.820428] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:57.331 [2024-10-13 01:29:42.837494] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:18:57.331 [2024-10-13 01:29:42.854851] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:57.331 passed 00:18:57.588 Test: admin_create_io_qp_max_qps ...[2024-10-13 01:29:42.937404] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:58.519 [2024-10-13 01:29:44.024502] nvme_ctrlr.c:5504:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:18:59.084 [2024-10-13 01:29:44.405388] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:59.084 passed 00:18:59.084 Test: admin_create_io_sq_shared_cq ...[2024-10-13 01:29:44.488728] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:59.084 [2024-10-13 01:29:44.620481] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:59.084 [2024-10-13 01:29:44.657575] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:59.342 passed 00:18:59.342 00:18:59.342 Run Summary: Type Total Ran Passed Failed Inactive 00:18:59.342 suites 1 1 n/a 0 0 00:18:59.342 tests 18 18 18 0 0 00:18:59.342 asserts 360 
360 360 0 n/a 00:18:59.342 00:18:59.342 Elapsed time = 1.556 seconds 00:18:59.342 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1596776 00:18:59.342 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 1596776 ']' 00:18:59.342 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 1596776 00:18:59.342 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:18:59.342 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:59.342 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1596776 00:18:59.342 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:59.342 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:59.342 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1596776' 00:18:59.342 killing process with pid 1596776 00:18:59.342 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 1596776 00:18:59.342 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 1596776 00:18:59.601 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:18:59.601 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:18:59.601 00:18:59.601 real 0m5.723s 00:18:59.601 user 0m16.094s 00:18:59.601 sys 0m0.547s 00:18:59.601 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:59.601 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:59.601 ************************************ 00:18:59.601 END TEST nvmf_vfio_user_nvme_compliance 00:18:59.601 ************************************ 00:18:59.601 01:29:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:59.601 01:29:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:59.601 01:29:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:59.601 01:29:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:59.601 ************************************ 00:18:59.601 START TEST nvmf_vfio_user_fuzz 00:18:59.601 ************************************ 00:18:59.601 01:29:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:59.601 * Looking for test storage... 
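In outline, the nvmf_vfio_user_nvme_compliance stage traced above reduces to: start nvmf_tgt on three cores, create a VFIOUSER transport, back it with a 64 MB malloc bdev, expose it through subsystem nqn.2021-09.io.spdk:cnode0 listening on /var/run/vfio-user, then point the nvme_compliance binary at that endpoint. A minimal sketch under stated assumptions (scripts/rpc.py stands in for the harness's rpc_cmd wrapper, waitforlisten is replaced by a plain sleep, and $SPDK_DIR is a placeholder for the checkout path):

# Sketch of the vfio-user compliance setup traced above; paths, PIDs and
# error handling are environment-specific and simplified here.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rm -rf /var/run/vfio-user && mkdir -p /var/run/vfio-user
"$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x7 &      # reactors on cores 0-2
nvmfpid=$!
sleep 1                                                      # harness waits on /var/tmp/spdk.sock instead
rpc="$SPDK_DIR/scripts/rpc.py"
"$rpc" nvmf_create_transport -t VFIOUSER
"$rpc" bdev_malloc_create 64 512 -b malloc0                  # 64 MB bdev, 512-byte blocks
"$rpc" nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
"$rpc" nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
"$rpc" nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
    -t VFIOUSER -a /var/run/vfio-user -s 0
"$SPDK_DIR/test/nvme/compliance/nvme_compliance" -g \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'
kill "$nvmfpid" && wait "$nvmfpid"
rm -rf /var/run/vfio-user

The 18 CUnit cases in the run summary above exercise admin and I/O queue handling against that vfio-user endpoint, which is why the per-test trace alternates between "enabling controller" and "disabling controller" notices.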
00:18:59.601 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:59.601 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:59.601 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:18:59.601 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:59.601 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:59.601 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:59.601 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:59.601 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:59.601 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:18:59.601 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:18:59.601 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:18:59.601 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:18:59.601 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:18:59.601 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:18:59.601 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:18:59.601 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:59.601 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:59.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.602 --rc genhtml_branch_coverage=1 00:18:59.602 --rc genhtml_function_coverage=1 00:18:59.602 --rc genhtml_legend=1 00:18:59.602 --rc geninfo_all_blocks=1 00:18:59.602 --rc geninfo_unexecuted_blocks=1 00:18:59.602 00:18:59.602 ' 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:59.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.602 --rc genhtml_branch_coverage=1 00:18:59.602 --rc genhtml_function_coverage=1 00:18:59.602 --rc genhtml_legend=1 00:18:59.602 --rc geninfo_all_blocks=1 00:18:59.602 --rc geninfo_unexecuted_blocks=1 00:18:59.602 00:18:59.602 ' 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:59.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.602 --rc genhtml_branch_coverage=1 00:18:59.602 --rc genhtml_function_coverage=1 00:18:59.602 --rc genhtml_legend=1 00:18:59.602 --rc geninfo_all_blocks=1 00:18:59.602 --rc geninfo_unexecuted_blocks=1 00:18:59.602 00:18:59.602 ' 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:59.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.602 --rc genhtml_branch_coverage=1 00:18:59.602 --rc genhtml_function_coverage=1 00:18:59.602 --rc genhtml_legend=1 00:18:59.602 --rc geninfo_all_blocks=1 00:18:59.602 --rc geninfo_unexecuted_blocks=1 00:18:59.602 00:18:59.602 ' 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:59.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1597501 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1597501' 00:18:59.602 Process pid: 1597501 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1597501 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 1597501 ']' 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:59.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:59.602 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:59.861 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:59.861 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:18:59.861 01:29:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:19:01.234 01:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:01.234 01:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.234 01:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:01.234 01:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.234 01:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:19:01.234 01:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:01.234 01:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.234 01:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:01.234 malloc0 00:19:01.234 01:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.234 01:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:19:01.234 01:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.234 01:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:01.234 01:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.234 01:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:01.234 01:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.234 01:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:01.234 01:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.234 01:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:01.234 01:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.234 01:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:01.234 01:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.234 01:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
00:19:01.234 01:29:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:19:33.319 Fuzzing completed. Shutting down the fuzz application 00:19:33.319 00:19:33.319 Dumping successful admin opcodes: 00:19:33.319 8, 9, 10, 24, 00:19:33.319 Dumping successful io opcodes: 00:19:33.319 0, 00:19:33.319 NS: 0x20000081ef00 I/O qp, Total commands completed: 554550, total successful commands: 2132, random_seed: 2341073856 00:19:33.319 NS: 0x20000081ef00 admin qp, Total commands completed: 117058, total successful commands: 957, random_seed: 1984046848 00:19:33.319 01:30:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:19:33.319 01:30:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.319 01:30:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:33.319 01:30:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.319 01:30:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1597501 00:19:33.319 01:30:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 1597501 ']' 00:19:33.319 01:30:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 1597501 00:19:33.319 01:30:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:19:33.319 01:30:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:33.319 01:30:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1597501 00:19:33.319 01:30:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:33.319 01:30:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:33.319 01:30:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1597501' 00:19:33.319 killing process with pid 1597501 00:19:33.319 01:30:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 1597501 00:19:33.319 01:30:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 1597501 00:19:33.319 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:19:33.319 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:19:33.319 00:19:33.319 real 0m32.142s 00:19:33.319 user 0m31.445s 00:19:33.319 sys 0m28.164s 00:19:33.319 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:33.319 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:33.319 
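The nvmf_vfio_user_fuzz stage follows the same pattern as the compliance stage, but pins the target to a single core and drives the endpoint with nvme_fuzz for 30 seconds using the fixed seed 123456. A minimal sketch under the same assumptions as the compliance sketch (scripts/rpc.py in place of rpc_cmd, $SPDK_DIR as the checkout path; cleanup of the fuzz log files is omitted):

# Sketch of the vfio-user fuzz run traced above; flags, seed and duration
# match the logged invocation.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rm -rf /var/run/vfio-user && mkdir -p /var/run/vfio-user
"$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &       # single reactor on core 0
nvmfpid=$!
sleep 1
rpc="$SPDK_DIR/scripts/rpc.py"
"$rpc" nvmf_create_transport -t VFIOUSER
"$rpc" bdev_malloc_create 64 512 -b malloc0
"$rpc" nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
"$rpc" nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
"$rpc" nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
    -t VFIOUSER -a /var/run/vfio-user -s 0
trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'
"$SPDK_DIR/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -t 30 -S 123456 -F "$trid" -N -a
"$rpc" nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0
kill "$nvmfpid" && wait "$nvmfpid"
rm -rf /var/run/vfio-user

As logged above, the 30-second run completed 554,550 I/O commands (2,132 successful) and 117,058 admin commands (957 successful) against the vfio-user target without crashing it, after which the subsystem is deleted and the target shut down.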
************************************ 00:19:33.319 END TEST nvmf_vfio_user_fuzz 00:19:33.319 ************************************ 00:19:33.319 01:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:33.319 01:30:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:33.319 01:30:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:33.319 01:30:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:33.319 ************************************ 00:19:33.319 START TEST nvmf_auth_target 00:19:33.319 ************************************ 00:19:33.319 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:33.319 * Looking for test storage... 00:19:33.319 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:33.319 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:33.319 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:19:33.319 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:33.319 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:33.319 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:33.319 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:33.319 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:33.319 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:33.319 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:33.319 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:33.319 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:33.319 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:33.319 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:33.319 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:33.319 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:33.319 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:19:33.319 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:19:33.319 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:33.319 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:33.319 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:33.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.320 --rc genhtml_branch_coverage=1 00:19:33.320 --rc genhtml_function_coverage=1 00:19:33.320 --rc genhtml_legend=1 00:19:33.320 --rc geninfo_all_blocks=1 00:19:33.320 --rc geninfo_unexecuted_blocks=1 00:19:33.320 00:19:33.320 ' 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:33.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.320 --rc genhtml_branch_coverage=1 00:19:33.320 --rc genhtml_function_coverage=1 00:19:33.320 --rc genhtml_legend=1 00:19:33.320 --rc geninfo_all_blocks=1 00:19:33.320 --rc geninfo_unexecuted_blocks=1 00:19:33.320 00:19:33.320 ' 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:33.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.320 --rc genhtml_branch_coverage=1 00:19:33.320 --rc genhtml_function_coverage=1 00:19:33.320 --rc genhtml_legend=1 00:19:33.320 --rc geninfo_all_blocks=1 00:19:33.320 --rc geninfo_unexecuted_blocks=1 00:19:33.320 00:19:33.320 ' 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:33.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.320 --rc genhtml_branch_coverage=1 00:19:33.320 --rc genhtml_function_coverage=1 00:19:33.320 --rc genhtml_legend=1 00:19:33.320 --rc geninfo_all_blocks=1 00:19:33.320 --rc geninfo_unexecuted_blocks=1 00:19:33.320 00:19:33.320 ' 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:33.320 01:30:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:33.320 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:19:33.320 01:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.887 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:33.887 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:19:33.887 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:33.887 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:33.887 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:33.887 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:33.887 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:33.887 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:19:33.887 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:33.887 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:19:33.887 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:19:33.887 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:19:33.887 
01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:19:33.887 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:19:33.887 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:19:33.887 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:33.887 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:33.887 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:33.887 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:33.887 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:33.887 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:33.887 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:33.887 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:33.887 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:33.887 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:33.887 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:33.887 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:33.887 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:33.887 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:33.887 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:33.887 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:33.887 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:33.887 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:33.887 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:33.887 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:33.887 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:33.887 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:33.887 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:33.887 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:33.887 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:33.887 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:33.887 01:30:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:33.887 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:33.887 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:33.887 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:33.887 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:33.888 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:33.888 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:33.888 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:33.888 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:33.888 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:33.888 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:33.888 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:33.888 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:33.888 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:33.888 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:33.888 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:33.888 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:33.888 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:33.888 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:33.888 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:33.888 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:33.888 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:33.888 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:33.888 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:19:33.888 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:33.888 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:19:33.888 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:33.888 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:33.888 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:33.888 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:33.888 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:33.888 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:19:33.888 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # is_hw=yes 00:19:33.888 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:19:33.888 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:19:33.888 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:19:33.888 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:33.888 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:33.888 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:33.888 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:33.888 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:33.888 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:33.888 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:33.888 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:33.888 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:33.888 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:33.888 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:33.888 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:33.888 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:33.888 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:33.888 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:33.888 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:33.888 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:33.888 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:33.888 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:34.146 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:34.146 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:34.147 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:34.147 01:30:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:34.147 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:34.147 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:19:34.147 00:19:34.147 --- 10.0.0.2 ping statistics --- 00:19:34.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:34.147 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:19:34.147 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:34.147 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:34.147 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:19:34.147 00:19:34.147 --- 10.0.0.1 ping statistics --- 00:19:34.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:34.147 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:19:34.147 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:34.147 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # return 0 00:19:34.147 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:19:34.147 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:34.147 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:19:34.147 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:19:34.147 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:34.147 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:19:34.147 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:19:34.147 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:19:34.147 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:34.147 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:34.147 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.147 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=1603441 00:19:34.147 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 1603441 00:19:34.147 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1603441 ']' 00:19:34.147 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.147 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:34.147 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:34.147 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
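Up to this point the trace has built the TCP test bed: both ice ports have their IPv4 addresses flushed, cvl_0_0 is moved into a fresh cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, TCP port 4420 is opened in iptables, reachability is confirmed with a ping in each direction, and nvmf_tgt is then launched inside the namespace. A condensed sketch of the equivalent manual setup, using the interface names and addresses from this run (the nvmf_tgt path is shortened):

# Namespace-based target/initiator split, as performed by nvmftestinit above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                                  # host -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> host
# the authenticated target is then started inside the namespace:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &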
00:19:34.147 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:34.147 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.405 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:34.405 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:34.405 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:34.405 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:34.405 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.405 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:34.405 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1603581 00:19:34.405 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:34.405 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:34.405 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:19:34.405 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:19:34.405 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:34.405 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:19:34.405 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=null 00:19:34.405 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:19:34.405 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:34.405 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=fcaa379c5a4a62732921d5701ef05e3813660b9944f56b8c 00:19:34.405 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:19:34.405 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.qh6 00:19:34.405 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key fcaa379c5a4a62732921d5701ef05e3813660b9944f56b8c 0 00:19:34.405 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 fcaa379c5a4a62732921d5701ef05e3813660b9944f56b8c 0 00:19:34.405 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:19:34.405 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:19:34.405 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=fcaa379c5a4a62732921d5701ef05e3813660b9944f56b8c 00:19:34.406 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=0 00:19:34.406 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 
00:19:34.406 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.qh6 00:19:34.406 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.qh6 00:19:34.406 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.qh6 00:19:34.406 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:19:34.406 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:19:34.406 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:34.406 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:19:34.406 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:19:34.406 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:19:34.406 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:34.406 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=88477a30535cb971c8dd1fa346956ba7037420f95152bcb5873b8777bb0fa8dc 00:19:34.406 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:19:34.406 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.bwP 00:19:34.406 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 88477a30535cb971c8dd1fa346956ba7037420f95152bcb5873b8777bb0fa8dc 3 00:19:34.406 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 88477a30535cb971c8dd1fa346956ba7037420f95152bcb5873b8777bb0fa8dc 3 00:19:34.406 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:19:34.406 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:19:34.406 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=88477a30535cb971c8dd1fa346956ba7037420f95152bcb5873b8777bb0fa8dc 00:19:34.406 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:19:34.406 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:19:34.406 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.bwP 00:19:34.406 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.bwP 00:19:34.406 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.bwP 00:19:34.406 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:19:34.406 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:19:34.406 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:34.406 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:19:34.406 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 
00:19:34.406 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:19:34.406 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:34.406 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=2872a45f77a909ac51eeb79e4bbf8398 00:19:34.406 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:19:34.406 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.yL6 00:19:34.406 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 2872a45f77a909ac51eeb79e4bbf8398 1 00:19:34.406 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 2872a45f77a909ac51eeb79e4bbf8398 1 00:19:34.406 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:19:34.406 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:19:34.406 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=2872a45f77a909ac51eeb79e4bbf8398 00:19:34.406 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:19:34.406 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:19:34.406 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.yL6 00:19:34.406 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.yL6 00:19:34.664 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.yL6 00:19:34.664 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:19:34.664 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:19:34.664 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:34.664 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:19:34.664 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:19:34.664 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:19:34.664 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:34.664 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=4290550f9b671050b8e1a1581c54721a1315a37d1c82668a 00:19:34.664 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:19:34.664 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.IAk 00:19:34.664 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 4290550f9b671050b8e1a1581c54721a1315a37d1c82668a 2 00:19:34.664 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 4290550f9b671050b8e1a1581c54721a1315a37d1c82668a 2 00:19:34.664 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:19:34.664 01:30:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:19:34.664 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=4290550f9b671050b8e1a1581c54721a1315a37d1c82668a 00:19:34.664 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:19:34.664 01:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:19:34.664 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.IAk 00:19:34.664 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.IAk 00:19:34.664 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.IAk 00:19:34.664 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:19:34.664 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:19:34.664 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:34.664 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:19:34.664 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:19:34.664 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:19:34.664 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:34.664 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=0c52833401f34cb3080302cde7e2584858b9b8848c73d926 00:19:34.664 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:19:34.664 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.Uyc 00:19:34.664 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 0c52833401f34cb3080302cde7e2584858b9b8848c73d926 2 00:19:34.664 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 0c52833401f34cb3080302cde7e2584858b9b8848c73d926 2 00:19:34.664 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:19:34.664 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:19:34.664 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=0c52833401f34cb3080302cde7e2584858b9b8848c73d926 00:19:34.664 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:19:34.664 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:19:34.664 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.Uyc 00:19:34.665 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.Uyc 00:19:34.665 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.Uyc 00:19:34.665 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:19:34.665 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 
00:19:34.665 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:34.665 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:19:34.665 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:19:34.665 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:19:34.665 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:34.665 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=2fb097f1241ccdf8e26d3f6fc1ec5aa7 00:19:34.665 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:19:34.665 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.qJK 00:19:34.665 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 2fb097f1241ccdf8e26d3f6fc1ec5aa7 1 00:19:34.665 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 2fb097f1241ccdf8e26d3f6fc1ec5aa7 1 00:19:34.665 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:19:34.665 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:19:34.665 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=2fb097f1241ccdf8e26d3f6fc1ec5aa7 00:19:34.665 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:19:34.665 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:19:34.665 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.qJK 00:19:34.665 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.qJK 00:19:34.665 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.qJK 00:19:34.665 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:19:34.665 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:19:34.665 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:34.665 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:19:34.665 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:19:34.665 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:19:34.665 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:34.665 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=1365aa26b73c70e797a74d39fadfde4bd1f9f0618eaa9755be63320c9944a4f6 00:19:34.665 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:19:34.665 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.smF 00:19:34.665 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # 
format_dhchap_key 1365aa26b73c70e797a74d39fadfde4bd1f9f0618eaa9755be63320c9944a4f6 3 00:19:34.665 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 1365aa26b73c70e797a74d39fadfde4bd1f9f0618eaa9755be63320c9944a4f6 3 00:19:34.665 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:19:34.665 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:19:34.665 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=1365aa26b73c70e797a74d39fadfde4bd1f9f0618eaa9755be63320c9944a4f6 00:19:34.665 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:19:34.665 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:19:34.665 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.smF 00:19:34.665 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.smF 00:19:34.665 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.smF 00:19:34.665 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:19:34.665 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1603441 00:19:34.665 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1603441 ']' 00:19:34.665 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.665 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:34.665 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:34.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:34.665 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:34.665 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.923 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:34.923 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:34.923 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1603581 /var/tmp/host.sock 00:19:34.923 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1603581 ']' 00:19:34.923 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:19:34.923 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:34.923 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:34.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
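The six gen_dhchap_key calls above each draw 24 or 32 random bytes as a hex string with xxd, pick a digest index (null=0, sha256=1, sha384=2, sha512=3), and write the formatted secret to a mode-0600 /tmp/spdk.key-* file via a short inline python helper. Judging by the DHHC-1 strings that surface in the nvme connect step later in this trace, the helper base64-encodes the hex string with a 4-byte checksum appended; a hedged reconstruction follows (the CRC-32/little-endian trailer is an assumption, not shown in the trace):

# Hedged reconstruction of format_dhchap_key for key0 of this run.
key=fcaa379c5a4a62732921d5701ef05e3813660b9944f56b8c
python3 - "$key" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")   # assumed trailer: CRC-32 of the key, little-endian
print(f"DHHC-1:00:{base64.b64encode(key + crc).decode()}:")   # 00 = null digest index
EOF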
00:19:34.923 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:34.923 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.181 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:35.181 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:35.181 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:19:35.181 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.181 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.439 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.439 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:35.439 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.qh6 00:19:35.439 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.439 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.439 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.439 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.qh6 00:19:35.439 01:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.qh6 00:19:35.697 01:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.bwP ]] 00:19:35.697 01:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.bwP 00:19:35.697 01:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.697 01:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.697 01:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.697 01:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.bwP 00:19:35.697 01:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.bwP 00:19:35.954 01:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:35.954 01:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.yL6 00:19:35.954 01:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.954 01:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.954 01:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.954 01:30:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.yL6 00:19:35.954 01:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.yL6 00:19:36.211 01:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.IAk ]] 00:19:36.211 01:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IAk 00:19:36.211 01:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.211 01:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.211 01:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.211 01:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IAk 00:19:36.211 01:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IAk 00:19:36.484 01:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:36.484 01:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Uyc 00:19:36.484 01:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.484 01:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.484 01:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.484 01:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Uyc 00:19:36.484 01:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Uyc 00:19:36.741 01:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.qJK ]] 00:19:36.742 01:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.qJK 00:19:36.742 01:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.742 01:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.742 01:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.742 01:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.qJK 00:19:36.742 01:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.qJK 00:19:36.999 01:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:36.999 01:30:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.smF 00:19:36.999 01:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.999 01:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.999 01:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.999 01:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.smF 00:19:36.999 01:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.smF 00:19:37.257 01:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:19:37.257 01:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:37.257 01:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:37.257 01:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:37.257 01:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:37.257 01:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:37.514 01:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:19:37.515 01:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:37.515 01:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:37.515 01:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:37.515 01:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:37.515 01:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.515 01:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.515 01:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.515 01:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.515 01:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.515 01:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.515 01:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.515 
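With all four key files in place, each one is registered twice, once with the target over the default /var/tmp/spdk.sock and once with the host application over /var/tmp/host.sock, and the test then walks the digests × dhgroups × key-index matrix. The first iteration visible above (sha256 digest, null dhgroup, key0) boils down to the following RPC sequence, with the socket paths, NQNs and key files taken from this run:

# One connect_authenticate iteration (sha256 digest, null dhgroup, key index 0).
RPC=scripts/rpc.py
HOSTSOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

$RPC keyring_file_add_key key0 /tmp/spdk.key-null.qh6                  # target side
$RPC -s $HOSTSOCK keyring_file_add_key key0 /tmp/spdk.key-null.qh6     # host side
$RPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.bwP               # controller key, target
$RPC -s $HOSTSOCK keyring_file_add_key ckey0 /tmp/spdk.key-sha512.bwP  # controller key, host

$RPC -s $HOSTSOCK bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0
$RPC -s $HOSTSOCK bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q $HOSTNQN -n $SUBNQN -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
$RPC nvmf_subsystem_get_qpairs $SUBNQN      # auth.state is expected to read "completed"
$RPC -s $HOSTSOCK bdev_nvme_detach_controller nvme0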
01:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.772 00:19:37.772 01:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:37.772 01:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:37.772 01:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.030 01:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.030 01:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.030 01:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.030 01:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.294 01:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.294 01:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:38.294 { 00:19:38.294 "cntlid": 1, 00:19:38.294 "qid": 0, 00:19:38.294 "state": "enabled", 00:19:38.294 "thread": "nvmf_tgt_poll_group_000", 00:19:38.294 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:38.294 "listen_address": { 00:19:38.294 "trtype": "TCP", 00:19:38.294 "adrfam": "IPv4", 00:19:38.294 "traddr": "10.0.0.2", 00:19:38.294 "trsvcid": "4420" 00:19:38.294 }, 00:19:38.294 "peer_address": { 00:19:38.294 "trtype": "TCP", 00:19:38.294 "adrfam": "IPv4", 00:19:38.294 "traddr": "10.0.0.1", 00:19:38.294 "trsvcid": "40304" 00:19:38.295 }, 00:19:38.295 "auth": { 00:19:38.295 "state": "completed", 00:19:38.295 "digest": "sha256", 00:19:38.295 "dhgroup": "null" 00:19:38.295 } 00:19:38.295 } 00:19:38.295 ]' 00:19:38.295 01:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:38.295 01:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:38.295 01:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:38.295 01:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:38.295 01:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:38.295 01:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.295 01:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.295 01:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.586 01:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ZmNhYTM3OWM1YTRhNjI3MzI5MjFkNTcwMWVmMDVlMzgxMzY2MGI5OTQ0ZjU2YjhjmFXGyA==: --dhchap-ctrl-secret DHHC-1:03:ODg0NzdhMzA1MzVjYjk3MWM4ZGQxZmEzNDY5NTZiYTcwMzc0MjBmOTUxNTJiY2I1ODczYjg3NzdiYjBmYThkYzR24vo=: 00:19:38.586 01:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZmNhYTM3OWM1YTRhNjI3MzI5MjFkNTcwMWVmMDVlMzgxMzY2MGI5OTQ0ZjU2YjhjmFXGyA==: --dhchap-ctrl-secret DHHC-1:03:ODg0NzdhMzA1MzVjYjk3MWM4ZGQxZmEzNDY5NTZiYTcwMzc0MjBmOTUxNTJiY2I1ODczYjg3NzdiYjBmYThkYzR24vo=: 00:19:39.540 01:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.540 01:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:39.540 01:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.540 01:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.540 01:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.540 01:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:39.540 01:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:39.540 01:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:39.798 01:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:19:39.798 01:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:39.798 01:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:39.798 01:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:39.798 01:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:39.798 01:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.798 01:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.798 01:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.798 01:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.798 01:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.798 01:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.798 01:30:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.798 01:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.056 00:19:40.056 01:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:40.056 01:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:40.056 01:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.314 01:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.314 01:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.314 01:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.314 01:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.572 01:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.572 01:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:40.572 { 00:19:40.572 "cntlid": 3, 00:19:40.572 "qid": 0, 00:19:40.572 "state": "enabled", 00:19:40.572 "thread": "nvmf_tgt_poll_group_000", 00:19:40.572 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:40.572 "listen_address": { 00:19:40.572 "trtype": "TCP", 00:19:40.572 "adrfam": "IPv4", 00:19:40.572 "traddr": "10.0.0.2", 00:19:40.572 "trsvcid": "4420" 00:19:40.572 }, 00:19:40.572 "peer_address": { 00:19:40.572 "trtype": "TCP", 00:19:40.572 "adrfam": "IPv4", 00:19:40.572 "traddr": "10.0.0.1", 00:19:40.572 "trsvcid": "40336" 00:19:40.572 }, 00:19:40.572 "auth": { 00:19:40.572 "state": "completed", 00:19:40.572 "digest": "sha256", 00:19:40.572 "dhgroup": "null" 00:19:40.572 } 00:19:40.572 } 00:19:40.572 ]' 00:19:40.572 01:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:40.572 01:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:40.572 01:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:40.572 01:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:40.572 01:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:40.572 01:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.572 01:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.572 01:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.830 01:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg3MmE0NWY3N2E5MDlhYzUxZWViNzllNGJiZjgzOTgCs8lt: --dhchap-ctrl-secret DHHC-1:02:NDI5MDU1MGY5YjY3MTA1MGI4ZTFhMTU4MWM1NDcyMWExMzE1YTM3ZDFjODI2NjhhpGoTpQ==: 00:19:40.830 01:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Mjg3MmE0NWY3N2E5MDlhYzUxZWViNzllNGJiZjgzOTgCs8lt: --dhchap-ctrl-secret DHHC-1:02:NDI5MDU1MGY5YjY3MTA1MGI4ZTFhMTU4MWM1NDcyMWExMzE1YTM3ZDFjODI2NjhhpGoTpQ==: 00:19:41.764 01:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.764 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.764 01:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:41.764 01:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.764 01:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.764 01:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.764 01:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:41.764 01:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:41.764 01:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:42.022 01:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:19:42.022 01:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:42.022 01:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:42.022 01:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:42.022 01:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:42.022 01:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.022 01:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.022 01:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.022 01:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.022 01:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.022 01:30:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.022 01:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.022 01:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.280 00:19:42.280 01:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:42.280 01:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:42.280 01:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.538 01:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.538 01:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.538 01:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.538 01:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.538 01:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.538 01:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:42.538 { 00:19:42.538 "cntlid": 5, 00:19:42.538 "qid": 0, 00:19:42.538 "state": "enabled", 00:19:42.538 "thread": "nvmf_tgt_poll_group_000", 00:19:42.538 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:42.538 "listen_address": { 00:19:42.538 "trtype": "TCP", 00:19:42.538 "adrfam": "IPv4", 00:19:42.538 "traddr": "10.0.0.2", 00:19:42.538 "trsvcid": "4420" 00:19:42.538 }, 00:19:42.538 "peer_address": { 00:19:42.538 "trtype": "TCP", 00:19:42.538 "adrfam": "IPv4", 00:19:42.538 "traddr": "10.0.0.1", 00:19:42.538 "trsvcid": "40356" 00:19:42.538 }, 00:19:42.538 "auth": { 00:19:42.538 "state": "completed", 00:19:42.538 "digest": "sha256", 00:19:42.538 "dhgroup": "null" 00:19:42.538 } 00:19:42.538 } 00:19:42.538 ]' 00:19:42.538 01:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:42.795 01:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:42.795 01:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:42.795 01:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:42.795 01:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:42.795 01:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.795 01:30:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.795 01:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.053 01:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGM1MjgzMzQwMWYzNGNiMzA4MDMwMmNkZTdlMjU4NDg1OGI5Yjg4NDhjNzNkOTI2iWekFw==: --dhchap-ctrl-secret DHHC-1:01:MmZiMDk3ZjEyNDFjY2RmOGUyNmQzZjZmYzFlYzVhYTeWQKIn: 00:19:43.053 01:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGM1MjgzMzQwMWYzNGNiMzA4MDMwMmNkZTdlMjU4NDg1OGI5Yjg4NDhjNzNkOTI2iWekFw==: --dhchap-ctrl-secret DHHC-1:01:MmZiMDk3ZjEyNDFjY2RmOGUyNmQzZjZmYzFlYzVhYTeWQKIn: 00:19:43.985 01:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.985 01:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:43.985 01:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.985 01:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.985 01:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.985 01:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:43.985 01:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:43.985 01:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:44.243 01:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:19:44.243 01:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:44.243 01:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:44.243 01:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:44.243 01:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:44.243 01:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.243 01:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:44.243 01:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.243 01:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:44.243 01:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.243 01:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:44.243 01:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:44.243 01:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:44.809 00:19:44.809 01:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:44.809 01:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.809 01:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:45.067 01:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.067 01:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.067 01:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.067 01:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.067 01:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.067 01:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:45.067 { 00:19:45.067 "cntlid": 7, 00:19:45.067 "qid": 0, 00:19:45.067 "state": "enabled", 00:19:45.067 "thread": "nvmf_tgt_poll_group_000", 00:19:45.067 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:45.067 "listen_address": { 00:19:45.067 "trtype": "TCP", 00:19:45.067 "adrfam": "IPv4", 00:19:45.067 "traddr": "10.0.0.2", 00:19:45.067 "trsvcid": "4420" 00:19:45.067 }, 00:19:45.067 "peer_address": { 00:19:45.067 "trtype": "TCP", 00:19:45.067 "adrfam": "IPv4", 00:19:45.067 "traddr": "10.0.0.1", 00:19:45.067 "trsvcid": "57002" 00:19:45.067 }, 00:19:45.067 "auth": { 00:19:45.067 "state": "completed", 00:19:45.067 "digest": "sha256", 00:19:45.067 "dhgroup": "null" 00:19:45.067 } 00:19:45.067 } 00:19:45.067 ]' 00:19:45.067 01:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:45.067 01:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:45.067 01:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:45.068 01:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:45.068 01:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:45.068 01:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.068 01:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.068 01:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.326 01:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTM2NWFhMjZiNzNjNzBlNzk3YTc0ZDM5ZmFkZmRlNGJkMWY5ZjA2MThlYWE5NzU1YmU2MzMyMGM5OTQ0YTRmNk24rvE=: 00:19:45.326 01:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MTM2NWFhMjZiNzNjNzBlNzk3YTc0ZDM5ZmFkZmRlNGJkMWY5ZjA2MThlYWE5NzU1YmU2MzMyMGM5OTQ0YTRmNk24rvE=: 00:19:46.259 01:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.259 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.259 01:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:46.259 01:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.259 01:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.259 01:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.259 01:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:46.259 01:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:46.259 01:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:46.259 01:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:46.517 01:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:19:46.517 01:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:46.517 01:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:46.517 01:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:46.517 01:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:46.517 01:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.517 01:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.517 01:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.517 01:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.517 01:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.517 01:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.517 01:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.517 01:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:47.083 00:19:47.083 01:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:47.083 01:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:47.083 01:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.341 01:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.341 01:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.341 01:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.341 01:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.341 01:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.341 01:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:47.341 { 00:19:47.341 "cntlid": 9, 00:19:47.341 "qid": 0, 00:19:47.341 "state": "enabled", 00:19:47.341 "thread": "nvmf_tgt_poll_group_000", 00:19:47.341 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:47.341 "listen_address": { 00:19:47.341 "trtype": "TCP", 00:19:47.341 "adrfam": "IPv4", 00:19:47.341 "traddr": "10.0.0.2", 00:19:47.341 "trsvcid": "4420" 00:19:47.341 }, 00:19:47.341 "peer_address": { 00:19:47.341 "trtype": "TCP", 00:19:47.341 "adrfam": "IPv4", 00:19:47.341 "traddr": "10.0.0.1", 00:19:47.341 "trsvcid": "57026" 00:19:47.341 }, 00:19:47.341 "auth": { 00:19:47.341 "state": "completed", 00:19:47.341 "digest": "sha256", 00:19:47.341 "dhgroup": "ffdhe2048" 00:19:47.341 } 00:19:47.341 } 00:19:47.341 ]' 00:19:47.341 01:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:47.341 01:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:47.341 01:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:47.341 01:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:19:47.341 01:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:47.341 01:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.341 01:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.341 01:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.599 01:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmNhYTM3OWM1YTRhNjI3MzI5MjFkNTcwMWVmMDVlMzgxMzY2MGI5OTQ0ZjU2YjhjmFXGyA==: --dhchap-ctrl-secret DHHC-1:03:ODg0NzdhMzA1MzVjYjk3MWM4ZGQxZmEzNDY5NTZiYTcwMzc0MjBmOTUxNTJiY2I1ODczYjg3NzdiYjBmYThkYzR24vo=: 00:19:47.599 01:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZmNhYTM3OWM1YTRhNjI3MzI5MjFkNTcwMWVmMDVlMzgxMzY2MGI5OTQ0ZjU2YjhjmFXGyA==: --dhchap-ctrl-secret DHHC-1:03:ODg0NzdhMzA1MzVjYjk3MWM4ZGQxZmEzNDY5NTZiYTcwMzc0MjBmOTUxNTJiY2I1ODczYjg3NzdiYjBmYThkYzR24vo=: 00:19:48.533 01:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.533 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.533 01:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:48.533 01:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.533 01:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.533 01:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.533 01:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:48.533 01:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:48.533 01:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:48.791 01:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:19:48.791 01:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:48.791 01:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:48.791 01:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:48.791 01:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:48.791 01:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.791 01:30:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.791 01:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.791 01:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.791 01:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.791 01:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.791 01:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.791 01:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.049 00:19:49.307 01:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:49.307 01:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:49.307 01:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.565 01:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.565 01:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.565 01:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.565 01:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.565 01:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.565 01:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:49.565 { 00:19:49.565 "cntlid": 11, 00:19:49.565 "qid": 0, 00:19:49.565 "state": "enabled", 00:19:49.565 "thread": "nvmf_tgt_poll_group_000", 00:19:49.565 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:49.565 "listen_address": { 00:19:49.565 "trtype": "TCP", 00:19:49.565 "adrfam": "IPv4", 00:19:49.565 "traddr": "10.0.0.2", 00:19:49.565 "trsvcid": "4420" 00:19:49.565 }, 00:19:49.565 "peer_address": { 00:19:49.565 "trtype": "TCP", 00:19:49.565 "adrfam": "IPv4", 00:19:49.565 "traddr": "10.0.0.1", 00:19:49.565 "trsvcid": "57040" 00:19:49.565 }, 00:19:49.565 "auth": { 00:19:49.565 "state": "completed", 00:19:49.565 "digest": "sha256", 00:19:49.565 "dhgroup": "ffdhe2048" 00:19:49.565 } 00:19:49.565 } 00:19:49.565 ]' 00:19:49.565 01:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:49.565 01:30:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:49.565 01:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:49.565 01:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:49.565 01:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:49.565 01:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.565 01:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.565 01:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.823 01:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg3MmE0NWY3N2E5MDlhYzUxZWViNzllNGJiZjgzOTgCs8lt: --dhchap-ctrl-secret DHHC-1:02:NDI5MDU1MGY5YjY3MTA1MGI4ZTFhMTU4MWM1NDcyMWExMzE1YTM3ZDFjODI2NjhhpGoTpQ==: 00:19:49.823 01:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Mjg3MmE0NWY3N2E5MDlhYzUxZWViNzllNGJiZjgzOTgCs8lt: --dhchap-ctrl-secret DHHC-1:02:NDI5MDU1MGY5YjY3MTA1MGI4ZTFhMTU4MWM1NDcyMWExMzE1YTM3ZDFjODI2NjhhpGoTpQ==: 00:19:50.756 01:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.756 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.756 01:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:50.756 01:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.756 01:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.756 01:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.756 01:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:50.756 01:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:50.756 01:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:51.014 01:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:19:51.014 01:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:51.014 01:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:51.014 01:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:51.014 01:30:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:51.014 01:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.014 01:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.014 01:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.014 01:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.014 01:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.014 01:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.014 01:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.014 01:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.580 00:19:51.580 01:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:51.580 01:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:51.580 01:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.838 01:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.838 01:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.838 01:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.838 01:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.838 01:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.838 01:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:51.838 { 00:19:51.838 "cntlid": 13, 00:19:51.838 "qid": 0, 00:19:51.838 "state": "enabled", 00:19:51.838 "thread": "nvmf_tgt_poll_group_000", 00:19:51.838 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:51.838 "listen_address": { 00:19:51.838 "trtype": "TCP", 00:19:51.838 "adrfam": "IPv4", 00:19:51.838 "traddr": "10.0.0.2", 00:19:51.838 "trsvcid": "4420" 00:19:51.838 }, 00:19:51.838 "peer_address": { 00:19:51.838 "trtype": "TCP", 00:19:51.838 "adrfam": "IPv4", 00:19:51.838 "traddr": "10.0.0.1", 00:19:51.838 "trsvcid": "57076" 00:19:51.838 }, 00:19:51.838 "auth": { 00:19:51.838 "state": "completed", 00:19:51.838 "digest": 
"sha256", 00:19:51.838 "dhgroup": "ffdhe2048" 00:19:51.838 } 00:19:51.838 } 00:19:51.838 ]' 00:19:51.838 01:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:51.838 01:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:51.838 01:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:51.838 01:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:51.838 01:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:51.838 01:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.838 01:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.838 01:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.096 01:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGM1MjgzMzQwMWYzNGNiMzA4MDMwMmNkZTdlMjU4NDg1OGI5Yjg4NDhjNzNkOTI2iWekFw==: --dhchap-ctrl-secret DHHC-1:01:MmZiMDk3ZjEyNDFjY2RmOGUyNmQzZjZmYzFlYzVhYTeWQKIn: 00:19:52.096 01:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGM1MjgzMzQwMWYzNGNiMzA4MDMwMmNkZTdlMjU4NDg1OGI5Yjg4NDhjNzNkOTI2iWekFw==: --dhchap-ctrl-secret DHHC-1:01:MmZiMDk3ZjEyNDFjY2RmOGUyNmQzZjZmYzFlYzVhYTeWQKIn: 00:19:53.027 01:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.027 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.027 01:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:53.027 01:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.027 01:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.027 01:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.027 01:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:53.027 01:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:53.027 01:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:53.284 01:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:19:53.284 01:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:53.284 01:30:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:53.284 01:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:53.284 01:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:53.284 01:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.284 01:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:53.284 01:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.284 01:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.284 01:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.284 01:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:53.284 01:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:53.284 01:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:53.541 00:19:53.799 01:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:53.799 01:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:53.799 01:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.056 01:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.056 01:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.056 01:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.056 01:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.056 01:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.056 01:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:54.056 { 00:19:54.056 "cntlid": 15, 00:19:54.056 "qid": 0, 00:19:54.056 "state": "enabled", 00:19:54.056 "thread": "nvmf_tgt_poll_group_000", 00:19:54.056 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:54.056 "listen_address": { 00:19:54.056 "trtype": "TCP", 00:19:54.056 "adrfam": "IPv4", 00:19:54.056 "traddr": "10.0.0.2", 00:19:54.056 "trsvcid": "4420" 00:19:54.056 }, 00:19:54.056 "peer_address": { 00:19:54.056 "trtype": "TCP", 00:19:54.056 "adrfam": "IPv4", 00:19:54.056 "traddr": "10.0.0.1", 00:19:54.056 
"trsvcid": "41018" 00:19:54.056 }, 00:19:54.056 "auth": { 00:19:54.056 "state": "completed", 00:19:54.056 "digest": "sha256", 00:19:54.056 "dhgroup": "ffdhe2048" 00:19:54.056 } 00:19:54.056 } 00:19:54.056 ]' 00:19:54.056 01:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:54.056 01:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:54.056 01:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:54.056 01:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:54.056 01:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:54.056 01:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.056 01:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.056 01:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.314 01:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTM2NWFhMjZiNzNjNzBlNzk3YTc0ZDM5ZmFkZmRlNGJkMWY5ZjA2MThlYWE5NzU1YmU2MzMyMGM5OTQ0YTRmNk24rvE=: 00:19:54.314 01:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MTM2NWFhMjZiNzNjNzBlNzk3YTc0ZDM5ZmFkZmRlNGJkMWY5ZjA2MThlYWE5NzU1YmU2MzMyMGM5OTQ0YTRmNk24rvE=: 00:19:55.248 01:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.248 01:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:55.248 01:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.248 01:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.248 01:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.248 01:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:55.248 01:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:55.248 01:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:55.248 01:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:55.506 01:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:19:55.506 01:30:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:55.506 01:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:55.506 01:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:55.506 01:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:55.506 01:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.506 01:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.506 01:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.506 01:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.506 01:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.506 01:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.506 01:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.506 01:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.073 00:19:56.073 01:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:56.073 01:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:56.073 01:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.331 01:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.331 01:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.331 01:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.331 01:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.331 01:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.331 01:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:56.331 { 00:19:56.331 "cntlid": 17, 00:19:56.331 "qid": 0, 00:19:56.331 "state": "enabled", 00:19:56.331 "thread": "nvmf_tgt_poll_group_000", 00:19:56.331 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:56.331 "listen_address": { 00:19:56.331 "trtype": "TCP", 00:19:56.331 "adrfam": "IPv4", 
00:19:56.331 "traddr": "10.0.0.2", 00:19:56.331 "trsvcid": "4420" 00:19:56.331 }, 00:19:56.331 "peer_address": { 00:19:56.331 "trtype": "TCP", 00:19:56.331 "adrfam": "IPv4", 00:19:56.331 "traddr": "10.0.0.1", 00:19:56.331 "trsvcid": "41060" 00:19:56.331 }, 00:19:56.331 "auth": { 00:19:56.331 "state": "completed", 00:19:56.331 "digest": "sha256", 00:19:56.331 "dhgroup": "ffdhe3072" 00:19:56.331 } 00:19:56.331 } 00:19:56.331 ]' 00:19:56.331 01:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:56.331 01:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:56.331 01:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:56.331 01:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:56.331 01:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:56.331 01:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.331 01:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.331 01:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.589 01:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmNhYTM3OWM1YTRhNjI3MzI5MjFkNTcwMWVmMDVlMzgxMzY2MGI5OTQ0ZjU2YjhjmFXGyA==: --dhchap-ctrl-secret DHHC-1:03:ODg0NzdhMzA1MzVjYjk3MWM4ZGQxZmEzNDY5NTZiYTcwMzc0MjBmOTUxNTJiY2I1ODczYjg3NzdiYjBmYThkYzR24vo=: 00:19:56.589 01:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZmNhYTM3OWM1YTRhNjI3MzI5MjFkNTcwMWVmMDVlMzgxMzY2MGI5OTQ0ZjU2YjhjmFXGyA==: --dhchap-ctrl-secret DHHC-1:03:ODg0NzdhMzA1MzVjYjk3MWM4ZGQxZmEzNDY5NTZiYTcwMzc0MjBmOTUxNTJiY2I1ODczYjg3NzdiYjBmYThkYzR24vo=: 00:19:57.522 01:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.522 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.522 01:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:57.522 01:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.522 01:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.522 01:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.522 01:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:57.522 01:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:57.522 01:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:57.780 01:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:19:57.780 01:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:57.780 01:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:57.780 01:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:57.780 01:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:57.780 01:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.780 01:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.780 01:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.780 01:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.780 01:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.780 01:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.780 01:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.780 01:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.038 00:19:58.295 01:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:58.295 01:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:58.295 01:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.553 01:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.553 01:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.553 01:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.553 01:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.553 01:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.553 01:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:58.553 { 
00:19:58.553 "cntlid": 19, 00:19:58.553 "qid": 0, 00:19:58.553 "state": "enabled", 00:19:58.553 "thread": "nvmf_tgt_poll_group_000", 00:19:58.553 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:19:58.553 "listen_address": { 00:19:58.553 "trtype": "TCP", 00:19:58.553 "adrfam": "IPv4", 00:19:58.553 "traddr": "10.0.0.2", 00:19:58.553 "trsvcid": "4420" 00:19:58.553 }, 00:19:58.553 "peer_address": { 00:19:58.553 "trtype": "TCP", 00:19:58.553 "adrfam": "IPv4", 00:19:58.553 "traddr": "10.0.0.1", 00:19:58.553 "trsvcid": "41096" 00:19:58.553 }, 00:19:58.553 "auth": { 00:19:58.553 "state": "completed", 00:19:58.553 "digest": "sha256", 00:19:58.553 "dhgroup": "ffdhe3072" 00:19:58.553 } 00:19:58.553 } 00:19:58.554 ]' 00:19:58.554 01:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:58.554 01:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:58.554 01:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:58.554 01:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:58.554 01:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:58.554 01:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.554 01:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.554 01:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.811 01:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg3MmE0NWY3N2E5MDlhYzUxZWViNzllNGJiZjgzOTgCs8lt: --dhchap-ctrl-secret DHHC-1:02:NDI5MDU1MGY5YjY3MTA1MGI4ZTFhMTU4MWM1NDcyMWExMzE1YTM3ZDFjODI2NjhhpGoTpQ==: 00:19:58.811 01:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Mjg3MmE0NWY3N2E5MDlhYzUxZWViNzllNGJiZjgzOTgCs8lt: --dhchap-ctrl-secret DHHC-1:02:NDI5MDU1MGY5YjY3MTA1MGI4ZTFhMTU4MWM1NDcyMWExMzE1YTM3ZDFjODI2NjhhpGoTpQ==: 00:19:59.745 01:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.745 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.745 01:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:59.745 01:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.745 01:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.745 01:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.745 01:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:59.745 01:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:59.745 01:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:00.003 01:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:20:00.003 01:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:00.003 01:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:00.003 01:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:00.003 01:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:00.003 01:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.003 01:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.003 01:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.003 01:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.003 01:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.003 01:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.003 01:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.003 01:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.569 00:20:00.569 01:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:00.569 01:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:00.569 01:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.827 01:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.827 01:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.827 01:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.827 01:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.827 01:30:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.827 01:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:00.827 { 00:20:00.827 "cntlid": 21, 00:20:00.827 "qid": 0, 00:20:00.827 "state": "enabled", 00:20:00.827 "thread": "nvmf_tgt_poll_group_000", 00:20:00.827 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:00.827 "listen_address": { 00:20:00.827 "trtype": "TCP", 00:20:00.827 "adrfam": "IPv4", 00:20:00.827 "traddr": "10.0.0.2", 00:20:00.827 "trsvcid": "4420" 00:20:00.827 }, 00:20:00.827 "peer_address": { 00:20:00.827 "trtype": "TCP", 00:20:00.827 "adrfam": "IPv4", 00:20:00.827 "traddr": "10.0.0.1", 00:20:00.827 "trsvcid": "41124" 00:20:00.827 }, 00:20:00.827 "auth": { 00:20:00.827 "state": "completed", 00:20:00.827 "digest": "sha256", 00:20:00.827 "dhgroup": "ffdhe3072" 00:20:00.827 } 00:20:00.827 } 00:20:00.827 ]' 00:20:00.827 01:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:00.827 01:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:00.827 01:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:00.827 01:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:00.827 01:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:00.827 01:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.827 01:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.827 01:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.085 01:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGM1MjgzMzQwMWYzNGNiMzA4MDMwMmNkZTdlMjU4NDg1OGI5Yjg4NDhjNzNkOTI2iWekFw==: --dhchap-ctrl-secret DHHC-1:01:MmZiMDk3ZjEyNDFjY2RmOGUyNmQzZjZmYzFlYzVhYTeWQKIn: 00:20:01.085 01:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGM1MjgzMzQwMWYzNGNiMzA4MDMwMmNkZTdlMjU4NDg1OGI5Yjg4NDhjNzNkOTI2iWekFw==: --dhchap-ctrl-secret DHHC-1:01:MmZiMDk3ZjEyNDFjY2RmOGUyNmQzZjZmYzFlYzVhYTeWQKIn: 00:20:02.017 01:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.017 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.017 01:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:02.017 01:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.017 01:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.017 01:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:20:02.017 01:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:02.017 01:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:02.017 01:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:02.275 01:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:20:02.275 01:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:02.275 01:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:02.275 01:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:02.275 01:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:02.275 01:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.275 01:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:02.275 01:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.275 01:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.275 01:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.275 01:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:02.275 01:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:02.275 01:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:02.840 00:20:02.840 01:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:02.840 01:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:02.840 01:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.098 01:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.098 01:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.098 01:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.098 01:30:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.098 01:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.098 01:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:03.098 { 00:20:03.098 "cntlid": 23, 00:20:03.098 "qid": 0, 00:20:03.098 "state": "enabled", 00:20:03.098 "thread": "nvmf_tgt_poll_group_000", 00:20:03.098 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:03.098 "listen_address": { 00:20:03.098 "trtype": "TCP", 00:20:03.098 "adrfam": "IPv4", 00:20:03.098 "traddr": "10.0.0.2", 00:20:03.098 "trsvcid": "4420" 00:20:03.098 }, 00:20:03.098 "peer_address": { 00:20:03.098 "trtype": "TCP", 00:20:03.098 "adrfam": "IPv4", 00:20:03.098 "traddr": "10.0.0.1", 00:20:03.098 "trsvcid": "41152" 00:20:03.098 }, 00:20:03.098 "auth": { 00:20:03.098 "state": "completed", 00:20:03.098 "digest": "sha256", 00:20:03.098 "dhgroup": "ffdhe3072" 00:20:03.098 } 00:20:03.098 } 00:20:03.098 ]' 00:20:03.098 01:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:03.098 01:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:03.098 01:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:03.098 01:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:03.098 01:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:03.098 01:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.098 01:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.098 01:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.356 01:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTM2NWFhMjZiNzNjNzBlNzk3YTc0ZDM5ZmFkZmRlNGJkMWY5ZjA2MThlYWE5NzU1YmU2MzMyMGM5OTQ0YTRmNk24rvE=: 00:20:03.356 01:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MTM2NWFhMjZiNzNjNzBlNzk3YTc0ZDM5ZmFkZmRlNGJkMWY5ZjA2MThlYWE5NzU1YmU2MzMyMGM5OTQ0YTRmNk24rvE=: 00:20:04.289 01:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.289 01:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:04.289 01:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.289 01:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.290 01:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:20:04.290 01:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:04.290 01:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:04.290 01:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:04.290 01:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:04.548 01:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:20:04.548 01:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:04.548 01:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:04.548 01:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:04.548 01:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:04.548 01:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.548 01:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.548 01:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.548 01:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.548 01:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.548 01:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.548 01:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.548 01:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.113 00:20:05.113 01:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:05.113 01:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:05.113 01:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.371 01:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.371 01:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.371 01:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.371 01:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.371 01:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.371 01:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:05.371 { 00:20:05.371 "cntlid": 25, 00:20:05.371 "qid": 0, 00:20:05.371 "state": "enabled", 00:20:05.371 "thread": "nvmf_tgt_poll_group_000", 00:20:05.371 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:05.371 "listen_address": { 00:20:05.371 "trtype": "TCP", 00:20:05.371 "adrfam": "IPv4", 00:20:05.371 "traddr": "10.0.0.2", 00:20:05.371 "trsvcid": "4420" 00:20:05.371 }, 00:20:05.371 "peer_address": { 00:20:05.371 "trtype": "TCP", 00:20:05.371 "adrfam": "IPv4", 00:20:05.371 "traddr": "10.0.0.1", 00:20:05.371 "trsvcid": "39776" 00:20:05.371 }, 00:20:05.371 "auth": { 00:20:05.371 "state": "completed", 00:20:05.371 "digest": "sha256", 00:20:05.371 "dhgroup": "ffdhe4096" 00:20:05.371 } 00:20:05.371 } 00:20:05.371 ]' 00:20:05.371 01:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:05.371 01:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:05.371 01:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:05.371 01:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:05.371 01:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:05.372 01:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.372 01:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.372 01:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.630 01:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmNhYTM3OWM1YTRhNjI3MzI5MjFkNTcwMWVmMDVlMzgxMzY2MGI5OTQ0ZjU2YjhjmFXGyA==: --dhchap-ctrl-secret DHHC-1:03:ODg0NzdhMzA1MzVjYjk3MWM4ZGQxZmEzNDY5NTZiYTcwMzc0MjBmOTUxNTJiY2I1ODczYjg3NzdiYjBmYThkYzR24vo=: 00:20:05.630 01:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZmNhYTM3OWM1YTRhNjI3MzI5MjFkNTcwMWVmMDVlMzgxMzY2MGI5OTQ0ZjU2YjhjmFXGyA==: --dhchap-ctrl-secret DHHC-1:03:ODg0NzdhMzA1MzVjYjk3MWM4ZGQxZmEzNDY5NTZiYTcwMzc0MjBmOTUxNTJiY2I1ODczYjg3NzdiYjBmYThkYzR24vo=: 00:20:06.562 01:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.562 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.562 01:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:06.562 01:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.562 01:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.562 01:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.562 01:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:06.562 01:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:06.562 01:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:07.128 01:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:20:07.128 01:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:07.128 01:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:07.128 01:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:07.128 01:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:07.128 01:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.128 01:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.128 01:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.128 01:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.128 01:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.128 01:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.128 01:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.128 01:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.386 00:20:07.386 01:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:07.386 01:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:07.386 01:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.644 01:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.644 01:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.644 01:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.644 01:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.644 01:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.644 01:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:07.644 { 00:20:07.644 "cntlid": 27, 00:20:07.644 "qid": 0, 00:20:07.644 "state": "enabled", 00:20:07.644 "thread": "nvmf_tgt_poll_group_000", 00:20:07.644 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:07.644 "listen_address": { 00:20:07.644 "trtype": "TCP", 00:20:07.644 "adrfam": "IPv4", 00:20:07.644 "traddr": "10.0.0.2", 00:20:07.644 "trsvcid": "4420" 00:20:07.644 }, 00:20:07.644 "peer_address": { 00:20:07.644 "trtype": "TCP", 00:20:07.644 "adrfam": "IPv4", 00:20:07.644 "traddr": "10.0.0.1", 00:20:07.644 "trsvcid": "39800" 00:20:07.644 }, 00:20:07.644 "auth": { 00:20:07.644 "state": "completed", 00:20:07.644 "digest": "sha256", 00:20:07.644 "dhgroup": "ffdhe4096" 00:20:07.644 } 00:20:07.644 } 00:20:07.644 ]' 00:20:07.644 01:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:07.644 01:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:07.644 01:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:07.644 01:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:07.644 01:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:07.644 01:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.644 01:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.644 01:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.902 01:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg3MmE0NWY3N2E5MDlhYzUxZWViNzllNGJiZjgzOTgCs8lt: --dhchap-ctrl-secret DHHC-1:02:NDI5MDU1MGY5YjY3MTA1MGI4ZTFhMTU4MWM1NDcyMWExMzE1YTM3ZDFjODI2NjhhpGoTpQ==: 00:20:07.902 01:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Mjg3MmE0NWY3N2E5MDlhYzUxZWViNzllNGJiZjgzOTgCs8lt: --dhchap-ctrl-secret DHHC-1:02:NDI5MDU1MGY5YjY3MTA1MGI4ZTFhMTU4MWM1NDcyMWExMzE1YTM3ZDFjODI2NjhhpGoTpQ==: 00:20:08.924 01:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:08.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.924 01:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:08.924 01:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.924 01:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.924 01:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.924 01:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:08.924 01:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:08.924 01:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:09.211 01:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:20:09.211 01:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:09.211 01:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:09.211 01:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:09.211 01:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:09.211 01:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.211 01:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.211 01:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.211 01:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.211 01:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.211 01:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.211 01:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.211 01:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.777 00:20:09.777 01:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:09.777 
01:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:09.777 01:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.035 01:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.035 01:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.035 01:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.035 01:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.035 01:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.035 01:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:10.035 { 00:20:10.035 "cntlid": 29, 00:20:10.035 "qid": 0, 00:20:10.035 "state": "enabled", 00:20:10.035 "thread": "nvmf_tgt_poll_group_000", 00:20:10.035 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:10.035 "listen_address": { 00:20:10.035 "trtype": "TCP", 00:20:10.035 "adrfam": "IPv4", 00:20:10.035 "traddr": "10.0.0.2", 00:20:10.035 "trsvcid": "4420" 00:20:10.035 }, 00:20:10.035 "peer_address": { 00:20:10.035 "trtype": "TCP", 00:20:10.035 "adrfam": "IPv4", 00:20:10.035 "traddr": "10.0.0.1", 00:20:10.035 "trsvcid": "39826" 00:20:10.035 }, 00:20:10.035 "auth": { 00:20:10.035 "state": "completed", 00:20:10.035 "digest": "sha256", 00:20:10.035 "dhgroup": "ffdhe4096" 00:20:10.035 } 00:20:10.035 } 00:20:10.035 ]' 00:20:10.035 01:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:10.035 01:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:10.035 01:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:10.293 01:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:10.293 01:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:10.293 01:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.293 01:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.293 01:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.550 01:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGM1MjgzMzQwMWYzNGNiMzA4MDMwMmNkZTdlMjU4NDg1OGI5Yjg4NDhjNzNkOTI2iWekFw==: --dhchap-ctrl-secret DHHC-1:01:MmZiMDk3ZjEyNDFjY2RmOGUyNmQzZjZmYzFlYzVhYTeWQKIn: 00:20:10.550 01:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGM1MjgzMzQwMWYzNGNiMzA4MDMwMmNkZTdlMjU4NDg1OGI5Yjg4NDhjNzNkOTI2iWekFw==: 
--dhchap-ctrl-secret DHHC-1:01:MmZiMDk3ZjEyNDFjY2RmOGUyNmQzZjZmYzFlYzVhYTeWQKIn: 00:20:11.483 01:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.483 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.483 01:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:11.483 01:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.483 01:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.483 01:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.483 01:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:11.483 01:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:11.483 01:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:11.741 01:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:20:11.741 01:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:11.741 01:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:11.741 01:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:11.741 01:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:11.741 01:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.741 01:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:11.741 01:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.741 01:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.741 01:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.741 01:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:11.741 01:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:11.741 01:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:12.306 00:20:12.306 01:30:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:12.306 01:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:12.306 01:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.568 01:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.568 01:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.568 01:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.568 01:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.568 01:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.568 01:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:12.568 { 00:20:12.568 "cntlid": 31, 00:20:12.568 "qid": 0, 00:20:12.568 "state": "enabled", 00:20:12.568 "thread": "nvmf_tgt_poll_group_000", 00:20:12.568 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:12.568 "listen_address": { 00:20:12.568 "trtype": "TCP", 00:20:12.568 "adrfam": "IPv4", 00:20:12.568 "traddr": "10.0.0.2", 00:20:12.568 "trsvcid": "4420" 00:20:12.568 }, 00:20:12.568 "peer_address": { 00:20:12.568 "trtype": "TCP", 00:20:12.568 "adrfam": "IPv4", 00:20:12.568 "traddr": "10.0.0.1", 00:20:12.568 "trsvcid": "39840" 00:20:12.568 }, 00:20:12.568 "auth": { 00:20:12.568 "state": "completed", 00:20:12.568 "digest": "sha256", 00:20:12.568 "dhgroup": "ffdhe4096" 00:20:12.568 } 00:20:12.568 } 00:20:12.568 ]' 00:20:12.568 01:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:12.568 01:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:12.568 01:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:12.568 01:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:12.568 01:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:12.568 01:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.568 01:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.568 01:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.133 01:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTM2NWFhMjZiNzNjNzBlNzk3YTc0ZDM5ZmFkZmRlNGJkMWY5ZjA2MThlYWE5NzU1YmU2MzMyMGM5OTQ0YTRmNk24rvE=: 00:20:13.133 01:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret 
DHHC-1:03:MTM2NWFhMjZiNzNjNzBlNzk3YTc0ZDM5ZmFkZmRlNGJkMWY5ZjA2MThlYWE5NzU1YmU2MzMyMGM5OTQ0YTRmNk24rvE=: 00:20:14.066 01:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.066 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.066 01:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:14.066 01:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.066 01:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.066 01:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.066 01:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:14.066 01:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:14.066 01:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:14.066 01:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:14.324 01:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:20:14.324 01:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:14.324 01:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:14.324 01:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:14.324 01:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:14.324 01:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.324 01:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.324 01:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.324 01:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.324 01:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.324 01:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.324 01:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.324 01:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.890 00:20:14.890 01:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:14.890 01:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:14.890 01:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.148 01:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.148 01:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.148 01:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.148 01:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.148 01:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.148 01:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:15.148 { 00:20:15.148 "cntlid": 33, 00:20:15.148 "qid": 0, 00:20:15.148 "state": "enabled", 00:20:15.148 "thread": "nvmf_tgt_poll_group_000", 00:20:15.148 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:15.148 "listen_address": { 00:20:15.148 "trtype": "TCP", 00:20:15.148 "adrfam": "IPv4", 00:20:15.148 "traddr": "10.0.0.2", 00:20:15.148 "trsvcid": "4420" 00:20:15.148 }, 00:20:15.148 "peer_address": { 00:20:15.148 "trtype": "TCP", 00:20:15.148 "adrfam": "IPv4", 00:20:15.148 "traddr": "10.0.0.1", 00:20:15.148 "trsvcid": "39920" 00:20:15.148 }, 00:20:15.148 "auth": { 00:20:15.148 "state": "completed", 00:20:15.148 "digest": "sha256", 00:20:15.148 "dhgroup": "ffdhe6144" 00:20:15.148 } 00:20:15.148 } 00:20:15.148 ]' 00:20:15.148 01:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:15.148 01:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:15.148 01:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:15.148 01:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:15.148 01:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:15.148 01:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.149 01:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.149 01:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.407 01:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmNhYTM3OWM1YTRhNjI3MzI5MjFkNTcwMWVmMDVlMzgxMzY2MGI5OTQ0ZjU2YjhjmFXGyA==: --dhchap-ctrl-secret 
DHHC-1:03:ODg0NzdhMzA1MzVjYjk3MWM4ZGQxZmEzNDY5NTZiYTcwMzc0MjBmOTUxNTJiY2I1ODczYjg3NzdiYjBmYThkYzR24vo=: 00:20:15.407 01:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZmNhYTM3OWM1YTRhNjI3MzI5MjFkNTcwMWVmMDVlMzgxMzY2MGI5OTQ0ZjU2YjhjmFXGyA==: --dhchap-ctrl-secret DHHC-1:03:ODg0NzdhMzA1MzVjYjk3MWM4ZGQxZmEzNDY5NTZiYTcwMzc0MjBmOTUxNTJiY2I1ODczYjg3NzdiYjBmYThkYzR24vo=: 00:20:16.340 01:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.340 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.340 01:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:16.340 01:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.340 01:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.340 01:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.340 01:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:16.340 01:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:16.340 01:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:16.906 01:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:20:16.906 01:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:16.906 01:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:16.906 01:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:16.906 01:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:16.906 01:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.906 01:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.906 01:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.906 01:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.906 01:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.906 01:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.906 01:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.906 01:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.472 00:20:17.472 01:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:17.472 01:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:17.472 01:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.730 01:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.730 01:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.730 01:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.730 01:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.730 01:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.730 01:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:17.730 { 00:20:17.730 "cntlid": 35, 00:20:17.730 "qid": 0, 00:20:17.730 "state": "enabled", 00:20:17.730 "thread": "nvmf_tgt_poll_group_000", 00:20:17.730 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:17.730 "listen_address": { 00:20:17.730 "trtype": "TCP", 00:20:17.730 "adrfam": "IPv4", 00:20:17.730 "traddr": "10.0.0.2", 00:20:17.730 "trsvcid": "4420" 00:20:17.730 }, 00:20:17.730 "peer_address": { 00:20:17.730 "trtype": "TCP", 00:20:17.730 "adrfam": "IPv4", 00:20:17.730 "traddr": "10.0.0.1", 00:20:17.730 "trsvcid": "39946" 00:20:17.730 }, 00:20:17.730 "auth": { 00:20:17.730 "state": "completed", 00:20:17.730 "digest": "sha256", 00:20:17.730 "dhgroup": "ffdhe6144" 00:20:17.730 } 00:20:17.730 } 00:20:17.730 ]' 00:20:17.730 01:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:17.730 01:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:17.730 01:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:17.730 01:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:17.730 01:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:17.730 01:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.730 01:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.730 01:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.988 01:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg3MmE0NWY3N2E5MDlhYzUxZWViNzllNGJiZjgzOTgCs8lt: --dhchap-ctrl-secret DHHC-1:02:NDI5MDU1MGY5YjY3MTA1MGI4ZTFhMTU4MWM1NDcyMWExMzE1YTM3ZDFjODI2NjhhpGoTpQ==: 00:20:17.988 01:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Mjg3MmE0NWY3N2E5MDlhYzUxZWViNzllNGJiZjgzOTgCs8lt: --dhchap-ctrl-secret DHHC-1:02:NDI5MDU1MGY5YjY3MTA1MGI4ZTFhMTU4MWM1NDcyMWExMzE1YTM3ZDFjODI2NjhhpGoTpQ==: 00:20:18.922 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.922 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:18.922 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.922 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.922 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.922 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:18.922 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:18.922 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:19.180 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:20:19.180 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:19.180 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:19.180 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:19.180 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:19.180 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.180 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.180 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.180 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.180 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.180 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.180 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.180 01:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.746 00:20:19.746 01:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:19.746 01:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:19.746 01:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.004 01:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.004 01:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.004 01:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.004 01:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.004 01:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.004 01:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:20.004 { 00:20:20.004 "cntlid": 37, 00:20:20.004 "qid": 0, 00:20:20.004 "state": "enabled", 00:20:20.004 "thread": "nvmf_tgt_poll_group_000", 00:20:20.004 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:20.004 "listen_address": { 00:20:20.004 "trtype": "TCP", 00:20:20.004 "adrfam": "IPv4", 00:20:20.004 "traddr": "10.0.0.2", 00:20:20.004 "trsvcid": "4420" 00:20:20.004 }, 00:20:20.004 "peer_address": { 00:20:20.004 "trtype": "TCP", 00:20:20.004 "adrfam": "IPv4", 00:20:20.004 "traddr": "10.0.0.1", 00:20:20.004 "trsvcid": "39966" 00:20:20.004 }, 00:20:20.004 "auth": { 00:20:20.004 "state": "completed", 00:20:20.004 "digest": "sha256", 00:20:20.004 "dhgroup": "ffdhe6144" 00:20:20.004 } 00:20:20.004 } 00:20:20.004 ]' 00:20:20.004 01:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:20.262 01:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:20.262 01:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:20.262 01:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:20.262 01:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:20.262 01:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.262 01:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:20:20.262 01:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.520 01:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGM1MjgzMzQwMWYzNGNiMzA4MDMwMmNkZTdlMjU4NDg1OGI5Yjg4NDhjNzNkOTI2iWekFw==: --dhchap-ctrl-secret DHHC-1:01:MmZiMDk3ZjEyNDFjY2RmOGUyNmQzZjZmYzFlYzVhYTeWQKIn: 00:20:20.520 01:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGM1MjgzMzQwMWYzNGNiMzA4MDMwMmNkZTdlMjU4NDg1OGI5Yjg4NDhjNzNkOTI2iWekFw==: --dhchap-ctrl-secret DHHC-1:01:MmZiMDk3ZjEyNDFjY2RmOGUyNmQzZjZmYzFlYzVhYTeWQKIn: 00:20:21.453 01:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.453 01:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:21.453 01:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.453 01:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.453 01:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.453 01:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:21.453 01:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:21.453 01:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:21.711 01:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:20:21.711 01:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:21.711 01:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:21.711 01:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:21.711 01:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:21.711 01:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.711 01:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:21.711 01:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.711 01:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.711 01:31:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.711 01:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:21.711 01:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:21.711 01:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:22.277 00:20:22.277 01:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:22.277 01:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.277 01:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:22.533 01:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.533 01:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.533 01:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.533 01:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.533 01:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.533 01:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:22.533 { 00:20:22.533 "cntlid": 39, 00:20:22.533 "qid": 0, 00:20:22.533 "state": "enabled", 00:20:22.533 "thread": "nvmf_tgt_poll_group_000", 00:20:22.533 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:22.533 "listen_address": { 00:20:22.533 "trtype": "TCP", 00:20:22.533 "adrfam": "IPv4", 00:20:22.533 "traddr": "10.0.0.2", 00:20:22.533 "trsvcid": "4420" 00:20:22.533 }, 00:20:22.533 "peer_address": { 00:20:22.533 "trtype": "TCP", 00:20:22.533 "adrfam": "IPv4", 00:20:22.533 "traddr": "10.0.0.1", 00:20:22.533 "trsvcid": "39994" 00:20:22.533 }, 00:20:22.533 "auth": { 00:20:22.533 "state": "completed", 00:20:22.533 "digest": "sha256", 00:20:22.533 "dhgroup": "ffdhe6144" 00:20:22.533 } 00:20:22.533 } 00:20:22.533 ]' 00:20:22.533 01:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:22.533 01:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:22.533 01:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:22.790 01:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:22.790 01:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:22.790 01:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:20:22.790 01:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.790 01:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.047 01:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTM2NWFhMjZiNzNjNzBlNzk3YTc0ZDM5ZmFkZmRlNGJkMWY5ZjA2MThlYWE5NzU1YmU2MzMyMGM5OTQ0YTRmNk24rvE=: 00:20:23.047 01:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MTM2NWFhMjZiNzNjNzBlNzk3YTc0ZDM5ZmFkZmRlNGJkMWY5ZjA2MThlYWE5NzU1YmU2MzMyMGM5OTQ0YTRmNk24rvE=: 00:20:23.980 01:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.980 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.980 01:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:23.980 01:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.980 01:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.980 01:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.980 01:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:23.980 01:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:23.980 01:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:23.980 01:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:24.238 01:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:20:24.238 01:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:24.238 01:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:24.238 01:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:24.238 01:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:24.238 01:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.238 01:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.238 01:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
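The log above and below repeats one verification cycle per key and DH group. Condensed into a single pass, the cycle being exercised looks roughly like the sketch below (rpc.py stands for spdk/scripts/rpc.py as invoked in the log; $hostnqn stands for the host NQN nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 used throughout; it assumes the DH-CHAP keys key0/ckey0 were loaded earlier in the run):

    # Restrict the host-side bdev layer to the digest/dhgroup pair under test
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
    # Register the host on the target subsystem with its DH-CHAP key pair
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # Attach a controller from the SPDK host, authenticating in-band
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # Check that the target-side qpair reports the negotiated digest/dhgroup and a completed auth state
    rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expect: completed
    # Tear the connection down before the next key/dhgroup combination
    rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"

The target-side calls (nvmf_subsystem_add_host, nvmf_subsystem_get_qpairs, nvmf_subsystem_remove_host) go to the default RPC socket, while the host-side bdev_nvme_* calls go to /var/tmp/host.sock, matching the two SPDK instances driven by this test.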
00:20:24.238 01:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.238 01:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.238 01:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.238 01:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.238 01:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.171 00:20:25.171 01:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:25.171 01:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:25.171 01:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.429 01:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.429 01:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.429 01:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.429 01:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.429 01:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.429 01:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:25.429 { 00:20:25.429 "cntlid": 41, 00:20:25.429 "qid": 0, 00:20:25.429 "state": "enabled", 00:20:25.429 "thread": "nvmf_tgt_poll_group_000", 00:20:25.429 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:25.429 "listen_address": { 00:20:25.429 "trtype": "TCP", 00:20:25.429 "adrfam": "IPv4", 00:20:25.429 "traddr": "10.0.0.2", 00:20:25.429 "trsvcid": "4420" 00:20:25.429 }, 00:20:25.429 "peer_address": { 00:20:25.429 "trtype": "TCP", 00:20:25.429 "adrfam": "IPv4", 00:20:25.429 "traddr": "10.0.0.1", 00:20:25.429 "trsvcid": "41494" 00:20:25.429 }, 00:20:25.429 "auth": { 00:20:25.429 "state": "completed", 00:20:25.429 "digest": "sha256", 00:20:25.429 "dhgroup": "ffdhe8192" 00:20:25.429 } 00:20:25.429 } 00:20:25.429 ]' 00:20:25.429 01:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:25.686 01:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:25.686 01:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:25.686 01:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:25.686 01:31:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:25.686 01:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.686 01:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.686 01:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.943 01:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmNhYTM3OWM1YTRhNjI3MzI5MjFkNTcwMWVmMDVlMzgxMzY2MGI5OTQ0ZjU2YjhjmFXGyA==: --dhchap-ctrl-secret DHHC-1:03:ODg0NzdhMzA1MzVjYjk3MWM4ZGQxZmEzNDY5NTZiYTcwMzc0MjBmOTUxNTJiY2I1ODczYjg3NzdiYjBmYThkYzR24vo=: 00:20:25.944 01:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZmNhYTM3OWM1YTRhNjI3MzI5MjFkNTcwMWVmMDVlMzgxMzY2MGI5OTQ0ZjU2YjhjmFXGyA==: --dhchap-ctrl-secret DHHC-1:03:ODg0NzdhMzA1MzVjYjk3MWM4ZGQxZmEzNDY5NTZiYTcwMzc0MjBmOTUxNTJiY2I1ODczYjg3NzdiYjBmYThkYzR24vo=: 00:20:26.877 01:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.877 01:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:26.877 01:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.877 01:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.877 01:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.877 01:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:26.877 01:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:26.877 01:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:27.134 01:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:20:27.134 01:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:27.134 01:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:27.134 01:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:27.134 01:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:27.134 01:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.134 01:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.134 01:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.134 01:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.134 01:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.134 01:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.134 01:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.134 01:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.067 00:20:28.067 01:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:28.067 01:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:28.067 01:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.325 01:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.325 01:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.325 01:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.325 01:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.325 01:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.325 01:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:28.325 { 00:20:28.325 "cntlid": 43, 00:20:28.325 "qid": 0, 00:20:28.325 "state": "enabled", 00:20:28.325 "thread": "nvmf_tgt_poll_group_000", 00:20:28.325 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:28.325 "listen_address": { 00:20:28.325 "trtype": "TCP", 00:20:28.325 "adrfam": "IPv4", 00:20:28.325 "traddr": "10.0.0.2", 00:20:28.325 "trsvcid": "4420" 00:20:28.325 }, 00:20:28.325 "peer_address": { 00:20:28.325 "trtype": "TCP", 00:20:28.325 "adrfam": "IPv4", 00:20:28.325 "traddr": "10.0.0.1", 00:20:28.325 "trsvcid": "41534" 00:20:28.325 }, 00:20:28.325 "auth": { 00:20:28.325 "state": "completed", 00:20:28.325 "digest": "sha256", 00:20:28.325 "dhgroup": "ffdhe8192" 00:20:28.325 } 00:20:28.325 } 00:20:28.325 ]' 00:20:28.325 01:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:28.325 01:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:20:28.325 01:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:28.325 01:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:28.325 01:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:28.582 01:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.582 01:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.582 01:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.840 01:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg3MmE0NWY3N2E5MDlhYzUxZWViNzllNGJiZjgzOTgCs8lt: --dhchap-ctrl-secret DHHC-1:02:NDI5MDU1MGY5YjY3MTA1MGI4ZTFhMTU4MWM1NDcyMWExMzE1YTM3ZDFjODI2NjhhpGoTpQ==: 00:20:28.840 01:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Mjg3MmE0NWY3N2E5MDlhYzUxZWViNzllNGJiZjgzOTgCs8lt: --dhchap-ctrl-secret DHHC-1:02:NDI5MDU1MGY5YjY3MTA1MGI4ZTFhMTU4MWM1NDcyMWExMzE1YTM3ZDFjODI2NjhhpGoTpQ==: 00:20:29.770 01:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.770 01:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:29.770 01:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.770 01:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.770 01:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.770 01:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:29.770 01:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:29.770 01:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:30.028 01:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:20:30.028 01:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:30.028 01:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:30.028 01:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:30.028 01:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:30.028 01:31:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.028 01:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.028 01:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.028 01:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.028 01:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.028 01:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.028 01:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.028 01:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.960 00:20:30.960 01:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.960 01:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:30.960 01:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.218 01:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.218 01:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.218 01:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.218 01:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.218 01:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.218 01:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:31.218 { 00:20:31.218 "cntlid": 45, 00:20:31.218 "qid": 0, 00:20:31.218 "state": "enabled", 00:20:31.218 "thread": "nvmf_tgt_poll_group_000", 00:20:31.218 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:31.218 "listen_address": { 00:20:31.218 "trtype": "TCP", 00:20:31.218 "adrfam": "IPv4", 00:20:31.218 "traddr": "10.0.0.2", 00:20:31.218 "trsvcid": "4420" 00:20:31.218 }, 00:20:31.218 "peer_address": { 00:20:31.218 "trtype": "TCP", 00:20:31.218 "adrfam": "IPv4", 00:20:31.218 "traddr": "10.0.0.1", 00:20:31.218 "trsvcid": "41542" 00:20:31.218 }, 00:20:31.218 "auth": { 00:20:31.218 "state": "completed", 00:20:31.218 "digest": "sha256", 00:20:31.218 "dhgroup": "ffdhe8192" 00:20:31.218 } 00:20:31.218 } 00:20:31.218 ]' 00:20:31.218 
01:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:31.218 01:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:31.218 01:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:31.218 01:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:31.218 01:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:31.218 01:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.218 01:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.218 01:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.783 01:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGM1MjgzMzQwMWYzNGNiMzA4MDMwMmNkZTdlMjU4NDg1OGI5Yjg4NDhjNzNkOTI2iWekFw==: --dhchap-ctrl-secret DHHC-1:01:MmZiMDk3ZjEyNDFjY2RmOGUyNmQzZjZmYzFlYzVhYTeWQKIn: 00:20:31.783 01:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGM1MjgzMzQwMWYzNGNiMzA4MDMwMmNkZTdlMjU4NDg1OGI5Yjg4NDhjNzNkOTI2iWekFw==: --dhchap-ctrl-secret DHHC-1:01:MmZiMDk3ZjEyNDFjY2RmOGUyNmQzZjZmYzFlYzVhYTeWQKIn: 00:20:32.715 01:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.715 01:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:32.715 01:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.715 01:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.715 01:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.715 01:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:32.715 01:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:32.715 01:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:32.972 01:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:20:32.972 01:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:32.972 01:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:32.972 01:31:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:32.972 01:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:32.972 01:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.972 01:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:32.972 01:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.972 01:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.972 01:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.972 01:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:32.972 01:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:32.972 01:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:33.904 00:20:33.904 01:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:33.904 01:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:33.904 01:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.162 01:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.162 01:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.162 01:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.162 01:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.162 01:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.162 01:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:34.162 { 00:20:34.162 "cntlid": 47, 00:20:34.162 "qid": 0, 00:20:34.162 "state": "enabled", 00:20:34.162 "thread": "nvmf_tgt_poll_group_000", 00:20:34.162 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:34.162 "listen_address": { 00:20:34.162 "trtype": "TCP", 00:20:34.162 "adrfam": "IPv4", 00:20:34.162 "traddr": "10.0.0.2", 00:20:34.162 "trsvcid": "4420" 00:20:34.162 }, 00:20:34.162 "peer_address": { 00:20:34.162 "trtype": "TCP", 00:20:34.162 "adrfam": "IPv4", 00:20:34.162 "traddr": "10.0.0.1", 00:20:34.162 "trsvcid": "41568" 00:20:34.162 }, 00:20:34.162 "auth": { 00:20:34.162 "state": "completed", 00:20:34.162 
"digest": "sha256", 00:20:34.162 "dhgroup": "ffdhe8192" 00:20:34.162 } 00:20:34.162 } 00:20:34.162 ]' 00:20:34.162 01:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:34.162 01:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:34.162 01:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:34.162 01:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:34.162 01:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:34.162 01:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.162 01:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.162 01:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.420 01:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTM2NWFhMjZiNzNjNzBlNzk3YTc0ZDM5ZmFkZmRlNGJkMWY5ZjA2MThlYWE5NzU1YmU2MzMyMGM5OTQ0YTRmNk24rvE=: 00:20:34.420 01:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MTM2NWFhMjZiNzNjNzBlNzk3YTc0ZDM5ZmFkZmRlNGJkMWY5ZjA2MThlYWE5NzU1YmU2MzMyMGM5OTQ0YTRmNk24rvE=: 00:20:35.353 01:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.353 01:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:35.353 01:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.353 01:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.353 01:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.353 01:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:35.353 01:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:35.353 01:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:35.353 01:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:35.353 01:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:35.612 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:20:35.612 01:31:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:35.612 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:35.612 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:35.612 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:35.612 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.612 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.612 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.612 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.612 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.612 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.612 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.612 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.177 00:20:36.177 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:36.177 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:36.177 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.435 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.435 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.435 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.435 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.435 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.435 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:36.435 { 00:20:36.435 "cntlid": 49, 00:20:36.435 "qid": 0, 00:20:36.435 "state": "enabled", 00:20:36.435 "thread": "nvmf_tgt_poll_group_000", 00:20:36.435 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:36.435 "listen_address": { 00:20:36.435 "trtype": "TCP", 00:20:36.435 "adrfam": "IPv4", 
00:20:36.435 "traddr": "10.0.0.2", 00:20:36.435 "trsvcid": "4420" 00:20:36.435 }, 00:20:36.435 "peer_address": { 00:20:36.435 "trtype": "TCP", 00:20:36.435 "adrfam": "IPv4", 00:20:36.435 "traddr": "10.0.0.1", 00:20:36.435 "trsvcid": "39132" 00:20:36.435 }, 00:20:36.435 "auth": { 00:20:36.435 "state": "completed", 00:20:36.435 "digest": "sha384", 00:20:36.435 "dhgroup": "null" 00:20:36.435 } 00:20:36.435 } 00:20:36.435 ]' 00:20:36.435 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:36.435 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:36.435 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:36.435 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:36.435 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:36.435 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.435 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.435 01:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.693 01:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmNhYTM3OWM1YTRhNjI3MzI5MjFkNTcwMWVmMDVlMzgxMzY2MGI5OTQ0ZjU2YjhjmFXGyA==: --dhchap-ctrl-secret DHHC-1:03:ODg0NzdhMzA1MzVjYjk3MWM4ZGQxZmEzNDY5NTZiYTcwMzc0MjBmOTUxNTJiY2I1ODczYjg3NzdiYjBmYThkYzR24vo=: 00:20:36.693 01:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZmNhYTM3OWM1YTRhNjI3MzI5MjFkNTcwMWVmMDVlMzgxMzY2MGI5OTQ0ZjU2YjhjmFXGyA==: --dhchap-ctrl-secret DHHC-1:03:ODg0NzdhMzA1MzVjYjk3MWM4ZGQxZmEzNDY5NTZiYTcwMzc0MjBmOTUxNTJiY2I1ODczYjg3NzdiYjBmYThkYzR24vo=: 00:20:37.627 01:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.627 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.627 01:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:37.627 01:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.627 01:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.627 01:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.627 01:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:37.627 01:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:37.627 01:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:38.193 01:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:20:38.193 01:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:38.193 01:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:38.193 01:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:38.193 01:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:38.193 01:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.193 01:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.193 01:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.193 01:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.193 01:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.193 01:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.193 01:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.193 01:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.451 00:20:38.451 01:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:38.451 01:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:38.451 01:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.728 01:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.728 01:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.728 01:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.728 01:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.728 01:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.728 01:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:38.728 { 00:20:38.728 "cntlid": 51, 00:20:38.728 "qid": 0, 00:20:38.728 "state": "enabled", 
00:20:38.728 "thread": "nvmf_tgt_poll_group_000", 00:20:38.728 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:38.728 "listen_address": { 00:20:38.728 "trtype": "TCP", 00:20:38.728 "adrfam": "IPv4", 00:20:38.728 "traddr": "10.0.0.2", 00:20:38.728 "trsvcid": "4420" 00:20:38.728 }, 00:20:38.728 "peer_address": { 00:20:38.728 "trtype": "TCP", 00:20:38.728 "adrfam": "IPv4", 00:20:38.728 "traddr": "10.0.0.1", 00:20:38.728 "trsvcid": "39164" 00:20:38.728 }, 00:20:38.728 "auth": { 00:20:38.728 "state": "completed", 00:20:38.728 "digest": "sha384", 00:20:38.728 "dhgroup": "null" 00:20:38.728 } 00:20:38.728 } 00:20:38.728 ]' 00:20:38.728 01:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:38.728 01:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:38.728 01:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:38.728 01:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:38.728 01:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.022 01:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.022 01:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.022 01:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.022 01:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg3MmE0NWY3N2E5MDlhYzUxZWViNzllNGJiZjgzOTgCs8lt: --dhchap-ctrl-secret DHHC-1:02:NDI5MDU1MGY5YjY3MTA1MGI4ZTFhMTU4MWM1NDcyMWExMzE1YTM3ZDFjODI2NjhhpGoTpQ==: 00:20:39.022 01:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Mjg3MmE0NWY3N2E5MDlhYzUxZWViNzllNGJiZjgzOTgCs8lt: --dhchap-ctrl-secret DHHC-1:02:NDI5MDU1MGY5YjY3MTA1MGI4ZTFhMTU4MWM1NDcyMWExMzE1YTM3ZDFjODI2NjhhpGoTpQ==: 00:20:39.957 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.216 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.216 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:40.216 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.216 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.216 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.216 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:40.216 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:20:40.216 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:40.474 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:20:40.474 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:40.474 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:40.474 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:40.474 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:40.474 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.474 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.474 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.474 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.474 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.474 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.474 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.474 01:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.732 00:20:40.732 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:40.732 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.732 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:40.990 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.990 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.990 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.990 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.990 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.990 01:31:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:40.990 { 00:20:40.990 "cntlid": 53, 00:20:40.990 "qid": 0, 00:20:40.990 "state": "enabled", 00:20:40.990 "thread": "nvmf_tgt_poll_group_000", 00:20:40.990 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:40.990 "listen_address": { 00:20:40.990 "trtype": "TCP", 00:20:40.990 "adrfam": "IPv4", 00:20:40.990 "traddr": "10.0.0.2", 00:20:40.990 "trsvcid": "4420" 00:20:40.990 }, 00:20:40.990 "peer_address": { 00:20:40.990 "trtype": "TCP", 00:20:40.990 "adrfam": "IPv4", 00:20:40.990 "traddr": "10.0.0.1", 00:20:40.990 "trsvcid": "39184" 00:20:40.990 }, 00:20:40.990 "auth": { 00:20:40.990 "state": "completed", 00:20:40.990 "digest": "sha384", 00:20:40.990 "dhgroup": "null" 00:20:40.990 } 00:20:40.990 } 00:20:40.990 ]' 00:20:40.990 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:40.990 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:40.990 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:41.248 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:41.248 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:41.248 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.248 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.248 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.505 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGM1MjgzMzQwMWYzNGNiMzA4MDMwMmNkZTdlMjU4NDg1OGI5Yjg4NDhjNzNkOTI2iWekFw==: --dhchap-ctrl-secret DHHC-1:01:MmZiMDk3ZjEyNDFjY2RmOGUyNmQzZjZmYzFlYzVhYTeWQKIn: 00:20:41.505 01:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGM1MjgzMzQwMWYzNGNiMzA4MDMwMmNkZTdlMjU4NDg1OGI5Yjg4NDhjNzNkOTI2iWekFw==: --dhchap-ctrl-secret DHHC-1:01:MmZiMDk3ZjEyNDFjY2RmOGUyNmQzZjZmYzFlYzVhYTeWQKIn: 00:20:42.439 01:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.439 01:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:42.439 01:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.439 01:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.439 01:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.439 01:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:20:42.439 01:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:42.439 01:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:42.696 01:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:42.696 01:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:42.696 01:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:42.696 01:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:42.696 01:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:42.696 01:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.696 01:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:42.696 01:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.696 01:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.696 01:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.696 01:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:42.696 01:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:42.697 01:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:42.955 00:20:42.955 01:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:42.955 01:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.955 01:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:43.213 01:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.213 01:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.213 01:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.213 01:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.213 01:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.213 01:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:43.213 { 00:20:43.213 "cntlid": 55, 00:20:43.213 "qid": 0, 00:20:43.213 "state": "enabled", 00:20:43.213 "thread": "nvmf_tgt_poll_group_000", 00:20:43.213 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:43.213 "listen_address": { 00:20:43.213 "trtype": "TCP", 00:20:43.213 "adrfam": "IPv4", 00:20:43.213 "traddr": "10.0.0.2", 00:20:43.213 "trsvcid": "4420" 00:20:43.213 }, 00:20:43.213 "peer_address": { 00:20:43.213 "trtype": "TCP", 00:20:43.213 "adrfam": "IPv4", 00:20:43.213 "traddr": "10.0.0.1", 00:20:43.213 "trsvcid": "39202" 00:20:43.213 }, 00:20:43.213 "auth": { 00:20:43.213 "state": "completed", 00:20:43.213 "digest": "sha384", 00:20:43.213 "dhgroup": "null" 00:20:43.213 } 00:20:43.213 } 00:20:43.213 ]' 00:20:43.213 01:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:43.471 01:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:43.471 01:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:43.471 01:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:43.471 01:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:43.471 01:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.471 01:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.471 01:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.729 01:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTM2NWFhMjZiNzNjNzBlNzk3YTc0ZDM5ZmFkZmRlNGJkMWY5ZjA2MThlYWE5NzU1YmU2MzMyMGM5OTQ0YTRmNk24rvE=: 00:20:43.729 01:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MTM2NWFhMjZiNzNjNzBlNzk3YTc0ZDM5ZmFkZmRlNGJkMWY5ZjA2MThlYWE5NzU1YmU2MzMyMGM5OTQ0YTRmNk24rvE=: 00:20:44.662 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.662 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.662 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:44.662 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.662 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.662 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.662 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:44.662 01:31:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:44.662 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:44.662 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:44.921 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:44.921 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:44.921 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:44.921 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:44.921 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:44.921 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.921 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.921 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.921 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.921 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.921 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.921 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.921 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.487 00:20:45.487 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:45.487 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:45.487 01:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.745 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.745 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.745 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:45.745 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.745 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.745 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:45.745 { 00:20:45.745 "cntlid": 57, 00:20:45.745 "qid": 0, 00:20:45.745 "state": "enabled", 00:20:45.745 "thread": "nvmf_tgt_poll_group_000", 00:20:45.745 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:45.745 "listen_address": { 00:20:45.745 "trtype": "TCP", 00:20:45.745 "adrfam": "IPv4", 00:20:45.745 "traddr": "10.0.0.2", 00:20:45.745 "trsvcid": "4420" 00:20:45.745 }, 00:20:45.745 "peer_address": { 00:20:45.745 "trtype": "TCP", 00:20:45.745 "adrfam": "IPv4", 00:20:45.745 "traddr": "10.0.0.1", 00:20:45.745 "trsvcid": "46320" 00:20:45.745 }, 00:20:45.745 "auth": { 00:20:45.745 "state": "completed", 00:20:45.745 "digest": "sha384", 00:20:45.745 "dhgroup": "ffdhe2048" 00:20:45.745 } 00:20:45.745 } 00:20:45.745 ]' 00:20:45.745 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:45.745 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:45.745 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:45.745 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:45.745 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:45.745 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.745 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.745 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.004 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmNhYTM3OWM1YTRhNjI3MzI5MjFkNTcwMWVmMDVlMzgxMzY2MGI5OTQ0ZjU2YjhjmFXGyA==: --dhchap-ctrl-secret DHHC-1:03:ODg0NzdhMzA1MzVjYjk3MWM4ZGQxZmEzNDY5NTZiYTcwMzc0MjBmOTUxNTJiY2I1ODczYjg3NzdiYjBmYThkYzR24vo=: 00:20:46.004 01:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZmNhYTM3OWM1YTRhNjI3MzI5MjFkNTcwMWVmMDVlMzgxMzY2MGI5OTQ0ZjU2YjhjmFXGyA==: --dhchap-ctrl-secret DHHC-1:03:ODg0NzdhMzA1MzVjYjk3MWM4ZGQxZmEzNDY5NTZiYTcwMzc0MjBmOTUxNTJiY2I1ODczYjg3NzdiYjBmYThkYzR24vo=: 00:20:46.937 01:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.195 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.195 01:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:47.195 01:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.195 01:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.195 01:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.195 01:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:47.195 01:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:47.195 01:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:47.453 01:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:47.453 01:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:47.453 01:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:47.453 01:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:47.453 01:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:47.453 01:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.453 01:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.453 01:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.453 01:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.453 01:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.453 01:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.453 01:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.453 01:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.711 00:20:47.711 01:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:47.711 01:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:47.711 01:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.969 01:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.969 01:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.969 01:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.969 01:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.969 01:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.969 01:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:47.969 { 00:20:47.969 "cntlid": 59, 00:20:47.969 "qid": 0, 00:20:47.969 "state": "enabled", 00:20:47.969 "thread": "nvmf_tgt_poll_group_000", 00:20:47.969 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:47.969 "listen_address": { 00:20:47.969 "trtype": "TCP", 00:20:47.969 "adrfam": "IPv4", 00:20:47.969 "traddr": "10.0.0.2", 00:20:47.969 "trsvcid": "4420" 00:20:47.969 }, 00:20:47.969 "peer_address": { 00:20:47.969 "trtype": "TCP", 00:20:47.969 "adrfam": "IPv4", 00:20:47.969 "traddr": "10.0.0.1", 00:20:47.969 "trsvcid": "46350" 00:20:47.969 }, 00:20:47.969 "auth": { 00:20:47.969 "state": "completed", 00:20:47.969 "digest": "sha384", 00:20:47.969 "dhgroup": "ffdhe2048" 00:20:47.969 } 00:20:47.969 } 00:20:47.969 ]' 00:20:47.969 01:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:47.969 01:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:47.969 01:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:47.969 01:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:47.969 01:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:47.969 01:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.969 01:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.969 01:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.534 01:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg3MmE0NWY3N2E5MDlhYzUxZWViNzllNGJiZjgzOTgCs8lt: --dhchap-ctrl-secret DHHC-1:02:NDI5MDU1MGY5YjY3MTA1MGI4ZTFhMTU4MWM1NDcyMWExMzE1YTM3ZDFjODI2NjhhpGoTpQ==: 00:20:48.534 01:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Mjg3MmE0NWY3N2E5MDlhYzUxZWViNzllNGJiZjgzOTgCs8lt: --dhchap-ctrl-secret DHHC-1:02:NDI5MDU1MGY5YjY3MTA1MGI4ZTFhMTU4MWM1NDcyMWExMzE1YTM3ZDFjODI2NjhhpGoTpQ==: 00:20:49.466 01:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.466 01:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:49.467 01:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.467 01:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.467 01:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.467 01:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:49.467 01:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:49.467 01:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:49.725 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:49.725 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:49.725 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:49.725 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:49.725 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:49.725 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.725 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.725 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.725 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.725 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.725 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.725 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.725 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.983 00:20:49.983 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:49.983 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:49.983 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:50.241 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.241 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.241 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.241 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.241 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.241 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:50.241 { 00:20:50.241 "cntlid": 61, 00:20:50.241 "qid": 0, 00:20:50.241 "state": "enabled", 00:20:50.241 "thread": "nvmf_tgt_poll_group_000", 00:20:50.241 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:50.241 "listen_address": { 00:20:50.241 "trtype": "TCP", 00:20:50.241 "adrfam": "IPv4", 00:20:50.241 "traddr": "10.0.0.2", 00:20:50.241 "trsvcid": "4420" 00:20:50.241 }, 00:20:50.241 "peer_address": { 00:20:50.241 "trtype": "TCP", 00:20:50.241 "adrfam": "IPv4", 00:20:50.241 "traddr": "10.0.0.1", 00:20:50.241 "trsvcid": "46396" 00:20:50.241 }, 00:20:50.241 "auth": { 00:20:50.241 "state": "completed", 00:20:50.241 "digest": "sha384", 00:20:50.241 "dhgroup": "ffdhe2048" 00:20:50.241 } 00:20:50.241 } 00:20:50.241 ]' 00:20:50.241 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:50.241 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:50.241 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:50.241 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:50.241 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:50.498 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.498 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.499 01:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.757 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGM1MjgzMzQwMWYzNGNiMzA4MDMwMmNkZTdlMjU4NDg1OGI5Yjg4NDhjNzNkOTI2iWekFw==: --dhchap-ctrl-secret DHHC-1:01:MmZiMDk3ZjEyNDFjY2RmOGUyNmQzZjZmYzFlYzVhYTeWQKIn: 00:20:50.757 01:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGM1MjgzMzQwMWYzNGNiMzA4MDMwMmNkZTdlMjU4NDg1OGI5Yjg4NDhjNzNkOTI2iWekFw==: --dhchap-ctrl-secret DHHC-1:01:MmZiMDk3ZjEyNDFjY2RmOGUyNmQzZjZmYzFlYzVhYTeWQKIn: 00:20:51.690 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.690 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.690 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:51.690 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.690 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.690 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.690 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:51.690 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:51.690 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:51.948 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:51.948 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:51.948 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:51.948 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:51.948 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:51.948 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.948 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:51.948 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.948 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.948 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.948 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:51.948 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:51.948 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:52.206 00:20:52.206 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:52.206 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:20:52.206 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.464 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.464 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.464 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.464 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.464 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.464 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:52.464 { 00:20:52.464 "cntlid": 63, 00:20:52.464 "qid": 0, 00:20:52.464 "state": "enabled", 00:20:52.464 "thread": "nvmf_tgt_poll_group_000", 00:20:52.464 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:52.464 "listen_address": { 00:20:52.464 "trtype": "TCP", 00:20:52.464 "adrfam": "IPv4", 00:20:52.464 "traddr": "10.0.0.2", 00:20:52.464 "trsvcid": "4420" 00:20:52.464 }, 00:20:52.464 "peer_address": { 00:20:52.464 "trtype": "TCP", 00:20:52.464 "adrfam": "IPv4", 00:20:52.464 "traddr": "10.0.0.1", 00:20:52.464 "trsvcid": "46432" 00:20:52.464 }, 00:20:52.464 "auth": { 00:20:52.464 "state": "completed", 00:20:52.464 "digest": "sha384", 00:20:52.464 "dhgroup": "ffdhe2048" 00:20:52.464 } 00:20:52.464 } 00:20:52.464 ]' 00:20:52.464 01:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:52.464 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:52.464 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:52.464 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:52.721 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:52.721 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.721 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.721 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.978 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTM2NWFhMjZiNzNjNzBlNzk3YTc0ZDM5ZmFkZmRlNGJkMWY5ZjA2MThlYWE5NzU1YmU2MzMyMGM5OTQ0YTRmNk24rvE=: 00:20:52.978 01:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MTM2NWFhMjZiNzNjNzBlNzk3YTc0ZDM5ZmFkZmRlNGJkMWY5ZjA2MThlYWE5NzU1YmU2MzMyMGM5OTQ0YTRmNk24rvE=: 00:20:53.911 01:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:53.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.911 01:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:53.911 01:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.911 01:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.911 01:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.911 01:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:53.911 01:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:53.911 01:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:53.911 01:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:54.169 01:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:54.169 01:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:54.169 01:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:54.169 01:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:54.169 01:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:54.169 01:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.169 01:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.169 01:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.169 01:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.169 01:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.169 01:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.169 01:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.169 01:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.735 
00:20:54.735 01:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:54.735 01:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.735 01:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:54.993 01:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.993 01:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.993 01:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.993 01:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.993 01:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.993 01:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:54.993 { 00:20:54.993 "cntlid": 65, 00:20:54.993 "qid": 0, 00:20:54.993 "state": "enabled", 00:20:54.993 "thread": "nvmf_tgt_poll_group_000", 00:20:54.993 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:54.993 "listen_address": { 00:20:54.993 "trtype": "TCP", 00:20:54.993 "adrfam": "IPv4", 00:20:54.993 "traddr": "10.0.0.2", 00:20:54.993 "trsvcid": "4420" 00:20:54.993 }, 00:20:54.993 "peer_address": { 00:20:54.993 "trtype": "TCP", 00:20:54.993 "adrfam": "IPv4", 00:20:54.993 "traddr": "10.0.0.1", 00:20:54.993 "trsvcid": "46150" 00:20:54.993 }, 00:20:54.993 "auth": { 00:20:54.993 "state": "completed", 00:20:54.993 "digest": "sha384", 00:20:54.993 "dhgroup": "ffdhe3072" 00:20:54.993 } 00:20:54.993 } 00:20:54.993 ]' 00:20:54.993 01:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:54.993 01:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:54.993 01:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:54.993 01:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:54.993 01:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:54.993 01:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.993 01:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.993 01:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.251 01:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmNhYTM3OWM1YTRhNjI3MzI5MjFkNTcwMWVmMDVlMzgxMzY2MGI5OTQ0ZjU2YjhjmFXGyA==: --dhchap-ctrl-secret DHHC-1:03:ODg0NzdhMzA1MzVjYjk3MWM4ZGQxZmEzNDY5NTZiYTcwMzc0MjBmOTUxNTJiY2I1ODczYjg3NzdiYjBmYThkYzR24vo=: 00:20:55.251 01:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZmNhYTM3OWM1YTRhNjI3MzI5MjFkNTcwMWVmMDVlMzgxMzY2MGI5OTQ0ZjU2YjhjmFXGyA==: --dhchap-ctrl-secret DHHC-1:03:ODg0NzdhMzA1MzVjYjk3MWM4ZGQxZmEzNDY5NTZiYTcwMzc0MjBmOTUxNTJiY2I1ODczYjg3NzdiYjBmYThkYzR24vo=: 00:20:56.185 01:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.443 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.443 01:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:56.443 01:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.443 01:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.443 01:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.443 01:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:56.443 01:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:56.443 01:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:56.701 01:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:20:56.701 01:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:56.701 01:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:56.701 01:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:56.701 01:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:56.701 01:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.701 01:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.701 01:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.701 01:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.701 01:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.701 01:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.701 01:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.701 01:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.959 00:20:56.959 01:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:56.959 01:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:56.959 01:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.217 01:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.217 01:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.217 01:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.217 01:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.217 01:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.217 01:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:57.217 { 00:20:57.217 "cntlid": 67, 00:20:57.217 "qid": 0, 00:20:57.217 "state": "enabled", 00:20:57.217 "thread": "nvmf_tgt_poll_group_000", 00:20:57.217 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:57.217 "listen_address": { 00:20:57.217 "trtype": "TCP", 00:20:57.217 "adrfam": "IPv4", 00:20:57.217 "traddr": "10.0.0.2", 00:20:57.217 "trsvcid": "4420" 00:20:57.217 }, 00:20:57.217 "peer_address": { 00:20:57.217 "trtype": "TCP", 00:20:57.217 "adrfam": "IPv4", 00:20:57.217 "traddr": "10.0.0.1", 00:20:57.217 "trsvcid": "46176" 00:20:57.217 }, 00:20:57.217 "auth": { 00:20:57.217 "state": "completed", 00:20:57.217 "digest": "sha384", 00:20:57.217 "dhgroup": "ffdhe3072" 00:20:57.217 } 00:20:57.217 } 00:20:57.217 ]' 00:20:57.217 01:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:57.475 01:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:57.475 01:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:57.475 01:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:57.475 01:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:57.475 01:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.475 01:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.475 01:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.733 01:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg3MmE0NWY3N2E5MDlhYzUxZWViNzllNGJiZjgzOTgCs8lt: --dhchap-ctrl-secret 
DHHC-1:02:NDI5MDU1MGY5YjY3MTA1MGI4ZTFhMTU4MWM1NDcyMWExMzE1YTM3ZDFjODI2NjhhpGoTpQ==: 00:20:57.733 01:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Mjg3MmE0NWY3N2E5MDlhYzUxZWViNzllNGJiZjgzOTgCs8lt: --dhchap-ctrl-secret DHHC-1:02:NDI5MDU1MGY5YjY3MTA1MGI4ZTFhMTU4MWM1NDcyMWExMzE1YTM3ZDFjODI2NjhhpGoTpQ==: 00:20:58.666 01:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.666 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.666 01:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:58.666 01:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.666 01:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.666 01:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.666 01:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:58.666 01:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:58.666 01:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:58.924 01:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:20:58.924 01:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:58.924 01:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:58.924 01:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:58.924 01:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:58.924 01:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.924 01:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.924 01:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.924 01:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.924 01:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.924 01:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.924 01:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.924 01:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.489 00:20:59.489 01:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:59.489 01:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.489 01:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:59.746 01:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.747 01:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.747 01:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.747 01:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.747 01:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.747 01:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:59.747 { 00:20:59.747 "cntlid": 69, 00:20:59.747 "qid": 0, 00:20:59.747 "state": "enabled", 00:20:59.747 "thread": "nvmf_tgt_poll_group_000", 00:20:59.747 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:59.747 "listen_address": { 00:20:59.747 "trtype": "TCP", 00:20:59.747 "adrfam": "IPv4", 00:20:59.747 "traddr": "10.0.0.2", 00:20:59.747 "trsvcid": "4420" 00:20:59.747 }, 00:20:59.747 "peer_address": { 00:20:59.747 "trtype": "TCP", 00:20:59.747 "adrfam": "IPv4", 00:20:59.747 "traddr": "10.0.0.1", 00:20:59.747 "trsvcid": "46212" 00:20:59.747 }, 00:20:59.747 "auth": { 00:20:59.747 "state": "completed", 00:20:59.747 "digest": "sha384", 00:20:59.747 "dhgroup": "ffdhe3072" 00:20:59.747 } 00:20:59.747 } 00:20:59.747 ]' 00:20:59.747 01:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:59.747 01:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:59.747 01:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:59.747 01:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:59.747 01:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:59.747 01:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.747 01:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.747 01:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:21:00.004 01:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGM1MjgzMzQwMWYzNGNiMzA4MDMwMmNkZTdlMjU4NDg1OGI5Yjg4NDhjNzNkOTI2iWekFw==: --dhchap-ctrl-secret DHHC-1:01:MmZiMDk3ZjEyNDFjY2RmOGUyNmQzZjZmYzFlYzVhYTeWQKIn: 00:21:00.004 01:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGM1MjgzMzQwMWYzNGNiMzA4MDMwMmNkZTdlMjU4NDg1OGI5Yjg4NDhjNzNkOTI2iWekFw==: --dhchap-ctrl-secret DHHC-1:01:MmZiMDk3ZjEyNDFjY2RmOGUyNmQzZjZmYzFlYzVhYTeWQKIn: 00:21:00.937 01:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.937 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.937 01:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:00.937 01:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.937 01:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.937 01:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.937 01:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:00.937 01:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:00.937 01:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:01.195 01:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:21:01.195 01:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:01.195 01:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:01.195 01:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:01.195 01:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:01.195 01:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.195 01:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:01.195 01:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.195 01:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.195 01:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.195 01:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
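The trace above is one pass of the connect_authenticate helper: restrict the host's DH-CHAP digest/dhgroup, allow the host NQN on the subsystem with the key under test, attach a bdev controller over the authenticated queue, check the qpair's negotiated auth parameters, then tear everything down again. The following is a condensed sketch of that sequence, assuming the rpc.py path and host socket shown in the trace, the default SPDK target socket for the subsystem-side calls, and DH-CHAP keys key0..key3 (plus ckey0..ckey2) already registered earlier in the run; the variable names are illustrative, not the script's own.

#!/usr/bin/env bash
# Sketch of one connect_authenticate iteration as exercised above.
# Host-side RPCs go to /var/tmp/host.sock; subsystem-side RPCs use the
# default SPDK target socket.
set -euo pipefail

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
digest=sha384
dhgroup=ffdhe3072
keyid=3

# Limit the host initiator to the digest/dhgroup combination under test.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Allow the host on the subsystem. key3 has no paired controller key in this
# run, so no --dhchap-ctrlr-key is passed; for key0..key2 the script also adds
# --dhchap-ctrlr-key ckey$keyid to get bidirectional authentication.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid"

# Attach a controller over the authenticated qpair.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
    --dhchap-key "key$keyid"

# Verify the controller exists and the target negotiated what was requested.
[[ "$("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | \
    jq -e --arg d "$digest" --arg g "$dhgroup" \
       '.[0].auth | .state == "completed" and .digest == $d and .dhgroup == $g'

# Tear down before the next digest/dhgroup/key combination.
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"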
00:21:01.195 01:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:01.195 01:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:01.761 00:21:01.761 01:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:01.761 01:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:01.761 01:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.019 01:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.019 01:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.019 01:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.019 01:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.019 01:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.019 01:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:02.019 { 00:21:02.019 "cntlid": 71, 00:21:02.019 "qid": 0, 00:21:02.019 "state": "enabled", 00:21:02.019 "thread": "nvmf_tgt_poll_group_000", 00:21:02.019 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:02.019 "listen_address": { 00:21:02.019 "trtype": "TCP", 00:21:02.019 "adrfam": "IPv4", 00:21:02.019 "traddr": "10.0.0.2", 00:21:02.019 "trsvcid": "4420" 00:21:02.019 }, 00:21:02.019 "peer_address": { 00:21:02.019 "trtype": "TCP", 00:21:02.019 "adrfam": "IPv4", 00:21:02.019 "traddr": "10.0.0.1", 00:21:02.019 "trsvcid": "46242" 00:21:02.019 }, 00:21:02.019 "auth": { 00:21:02.019 "state": "completed", 00:21:02.019 "digest": "sha384", 00:21:02.019 "dhgroup": "ffdhe3072" 00:21:02.019 } 00:21:02.019 } 00:21:02.019 ]' 00:21:02.019 01:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:02.019 01:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:02.019 01:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:02.019 01:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:02.019 01:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:02.019 01:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.019 01:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.019 01:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.277 01:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTM2NWFhMjZiNzNjNzBlNzk3YTc0ZDM5ZmFkZmRlNGJkMWY5ZjA2MThlYWE5NzU1YmU2MzMyMGM5OTQ0YTRmNk24rvE=: 00:21:02.277 01:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MTM2NWFhMjZiNzNjNzBlNzk3YTc0ZDM5ZmFkZmRlNGJkMWY5ZjA2MThlYWE5NzU1YmU2MzMyMGM5OTQ0YTRmNk24rvE=: 00:21:03.210 01:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.210 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.210 01:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:03.210 01:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.210 01:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.468 01:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.468 01:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:03.468 01:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:03.468 01:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:03.468 01:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:03.726 01:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:21:03.726 01:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:03.726 01:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:03.726 01:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:03.726 01:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:03.726 01:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.726 01:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.726 01:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.726 01:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.726 01:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
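The nvme connect / nvme disconnect pairs in the trace are the kernel-initiator leg of the same check: nvme-cli is handed the serialized DH-CHAP secrets directly. Below is a minimal sketch of that invocation, assuming the fabric address and NQNs shown above and an nvme-cli that accepts the --dhchap-secret and --dhchap-ctrl-secret options (as the successful connects in this log indicate); the <...> placeholders stand in for the base64 secrets and are not real values. In the DHHC-1:NN:<base64>: form, the NN field records how the secret was produced (00 when the key is used as-is; 01, 02, 03 when it was derived with SHA-256, SHA-384, or SHA-512).

# Kernel host leg, as run by the nvme_connect helper above. The secret strings
# are placeholders; the real values are the DHHC-1 keys shown in the log.
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
hostid=5b23e107-7094-e311-b1cb-001e67a97d55

nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
    -q "$hostnqn" --hostid "$hostid" -l 0 \
    --dhchap-secret 'DHHC-1:00:<host key, base64>:' \
    --dhchap-ctrl-secret 'DHHC-1:03:<controller key, base64>:'

# --dhchap-ctrl-secret is only passed when the subsystem was configured with a
# controller key (ckeyN); the connect just above, for key3, passes only
# --dhchap-secret because key3 has no paired controller key.
nvme disconnect -n "$subnqn"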
00:21:03.726 01:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.726 01:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.726 01:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.984 00:21:03.984 01:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:03.984 01:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:03.984 01:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.242 01:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.242 01:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.242 01:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.242 01:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.242 01:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.242 01:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:04.242 { 00:21:04.242 "cntlid": 73, 00:21:04.242 "qid": 0, 00:21:04.242 "state": "enabled", 00:21:04.242 "thread": "nvmf_tgt_poll_group_000", 00:21:04.242 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:04.242 "listen_address": { 00:21:04.242 "trtype": "TCP", 00:21:04.242 "adrfam": "IPv4", 00:21:04.242 "traddr": "10.0.0.2", 00:21:04.242 "trsvcid": "4420" 00:21:04.242 }, 00:21:04.242 "peer_address": { 00:21:04.242 "trtype": "TCP", 00:21:04.242 "adrfam": "IPv4", 00:21:04.242 "traddr": "10.0.0.1", 00:21:04.242 "trsvcid": "47806" 00:21:04.242 }, 00:21:04.242 "auth": { 00:21:04.242 "state": "completed", 00:21:04.242 "digest": "sha384", 00:21:04.242 "dhgroup": "ffdhe4096" 00:21:04.242 } 00:21:04.242 } 00:21:04.242 ]' 00:21:04.242 01:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:04.501 01:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:04.501 01:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:04.501 01:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:04.501 01:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:04.501 01:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.501 
01:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.501 01:31:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.759 01:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmNhYTM3OWM1YTRhNjI3MzI5MjFkNTcwMWVmMDVlMzgxMzY2MGI5OTQ0ZjU2YjhjmFXGyA==: --dhchap-ctrl-secret DHHC-1:03:ODg0NzdhMzA1MzVjYjk3MWM4ZGQxZmEzNDY5NTZiYTcwMzc0MjBmOTUxNTJiY2I1ODczYjg3NzdiYjBmYThkYzR24vo=: 00:21:04.759 01:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZmNhYTM3OWM1YTRhNjI3MzI5MjFkNTcwMWVmMDVlMzgxMzY2MGI5OTQ0ZjU2YjhjmFXGyA==: --dhchap-ctrl-secret DHHC-1:03:ODg0NzdhMzA1MzVjYjk3MWM4ZGQxZmEzNDY5NTZiYTcwMzc0MjBmOTUxNTJiY2I1ODczYjg3NzdiYjBmYThkYzR24vo=: 00:21:05.692 01:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.692 01:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:05.692 01:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.692 01:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.692 01:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.692 01:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:05.692 01:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:05.692 01:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:05.950 01:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:21:05.950 01:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:05.950 01:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:05.950 01:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:05.950 01:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:05.950 01:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.208 01:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.208 01:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.208 01:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.208 01:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.208 01:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.208 01:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.208 01:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.466 00:21:06.466 01:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:06.466 01:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:06.466 01:31:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.724 01:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.724 01:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.724 01:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.724 01:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.724 01:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.724 01:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:06.724 { 00:21:06.724 "cntlid": 75, 00:21:06.724 "qid": 0, 00:21:06.724 "state": "enabled", 00:21:06.724 "thread": "nvmf_tgt_poll_group_000", 00:21:06.724 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:06.724 "listen_address": { 00:21:06.724 "trtype": "TCP", 00:21:06.724 "adrfam": "IPv4", 00:21:06.724 "traddr": "10.0.0.2", 00:21:06.724 "trsvcid": "4420" 00:21:06.724 }, 00:21:06.724 "peer_address": { 00:21:06.724 "trtype": "TCP", 00:21:06.724 "adrfam": "IPv4", 00:21:06.724 "traddr": "10.0.0.1", 00:21:06.724 "trsvcid": "47836" 00:21:06.724 }, 00:21:06.724 "auth": { 00:21:06.724 "state": "completed", 00:21:06.724 "digest": "sha384", 00:21:06.724 "dhgroup": "ffdhe4096" 00:21:06.724 } 00:21:06.724 } 00:21:06.724 ]' 00:21:06.724 01:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:06.724 01:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:06.724 01:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:06.981 01:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:21:06.981 01:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:06.981 01:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.981 01:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.981 01:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.239 01:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg3MmE0NWY3N2E5MDlhYzUxZWViNzllNGJiZjgzOTgCs8lt: --dhchap-ctrl-secret DHHC-1:02:NDI5MDU1MGY5YjY3MTA1MGI4ZTFhMTU4MWM1NDcyMWExMzE1YTM3ZDFjODI2NjhhpGoTpQ==: 00:21:07.239 01:31:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Mjg3MmE0NWY3N2E5MDlhYzUxZWViNzllNGJiZjgzOTgCs8lt: --dhchap-ctrl-secret DHHC-1:02:NDI5MDU1MGY5YjY3MTA1MGI4ZTFhMTU4MWM1NDcyMWExMzE1YTM3ZDFjODI2NjhhpGoTpQ==: 00:21:08.172 01:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.172 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.172 01:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:08.172 01:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.172 01:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.172 01:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.172 01:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:08.172 01:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:08.172 01:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:08.460 01:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:21:08.460 01:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:08.460 01:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:08.460 01:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:08.460 01:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:08.460 01:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.460 01:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.460 01:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.460 01:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.460 01:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.460 01:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.460 01:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.460 01:31:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.742 00:21:08.742 01:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:08.742 01:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:08.743 01:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.000 01:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.000 01:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.000 01:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.000 01:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.000 01:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.000 01:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:09.000 { 00:21:09.000 "cntlid": 77, 00:21:09.000 "qid": 0, 00:21:09.000 "state": "enabled", 00:21:09.000 "thread": "nvmf_tgt_poll_group_000", 00:21:09.000 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:09.000 "listen_address": { 00:21:09.000 "trtype": "TCP", 00:21:09.000 "adrfam": "IPv4", 00:21:09.000 "traddr": "10.0.0.2", 00:21:09.000 "trsvcid": "4420" 00:21:09.000 }, 00:21:09.000 "peer_address": { 00:21:09.000 "trtype": "TCP", 00:21:09.000 "adrfam": "IPv4", 00:21:09.000 "traddr": "10.0.0.1", 00:21:09.000 "trsvcid": "47858" 00:21:09.000 }, 00:21:09.000 "auth": { 00:21:09.000 "state": "completed", 00:21:09.000 "digest": "sha384", 00:21:09.000 "dhgroup": "ffdhe4096" 00:21:09.000 } 00:21:09.000 } 00:21:09.000 ]' 00:21:09.000 01:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:09.258 01:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:09.258 01:31:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:09.258 01:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:09.258 01:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:09.258 01:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.258 01:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.258 01:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.516 01:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGM1MjgzMzQwMWYzNGNiMzA4MDMwMmNkZTdlMjU4NDg1OGI5Yjg4NDhjNzNkOTI2iWekFw==: --dhchap-ctrl-secret DHHC-1:01:MmZiMDk3ZjEyNDFjY2RmOGUyNmQzZjZmYzFlYzVhYTeWQKIn: 00:21:09.516 01:31:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGM1MjgzMzQwMWYzNGNiMzA4MDMwMmNkZTdlMjU4NDg1OGI5Yjg4NDhjNzNkOTI2iWekFw==: --dhchap-ctrl-secret DHHC-1:01:MmZiMDk3ZjEyNDFjY2RmOGUyNmQzZjZmYzFlYzVhYTeWQKIn: 00:21:10.449 01:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.449 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.449 01:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:10.449 01:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.449 01:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.449 01:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.449 01:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:10.449 01:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:10.449 01:31:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:10.707 01:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:21:10.707 01:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:10.707 01:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:10.707 01:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:10.707 01:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:10.707 01:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.707 01:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:10.707 01:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.707 01:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.707 01:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.707 01:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:10.707 01:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:10.707 01:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:11.273 00:21:11.273 01:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:11.273 01:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:11.273 01:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.531 01:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.531 01:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.531 01:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.531 01:31:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.531 01:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.531 01:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:11.531 { 00:21:11.531 "cntlid": 79, 00:21:11.531 "qid": 0, 00:21:11.531 "state": "enabled", 00:21:11.531 "thread": "nvmf_tgt_poll_group_000", 00:21:11.531 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:11.531 "listen_address": { 00:21:11.531 "trtype": "TCP", 00:21:11.531 "adrfam": "IPv4", 00:21:11.531 "traddr": "10.0.0.2", 00:21:11.531 "trsvcid": "4420" 00:21:11.531 }, 00:21:11.531 "peer_address": { 00:21:11.531 "trtype": "TCP", 00:21:11.531 "adrfam": "IPv4", 00:21:11.531 "traddr": "10.0.0.1", 00:21:11.531 "trsvcid": "47890" 00:21:11.531 }, 00:21:11.531 "auth": { 00:21:11.531 "state": "completed", 00:21:11.531 "digest": "sha384", 00:21:11.531 "dhgroup": "ffdhe4096" 00:21:11.531 } 00:21:11.531 } 00:21:11.531 ]' 00:21:11.531 01:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:11.531 01:31:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:11.531 01:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:11.531 01:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:11.531 01:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:11.789 01:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.789 01:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.789 01:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.047 01:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTM2NWFhMjZiNzNjNzBlNzk3YTc0ZDM5ZmFkZmRlNGJkMWY5ZjA2MThlYWE5NzU1YmU2MzMyMGM5OTQ0YTRmNk24rvE=: 00:21:12.047 01:31:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MTM2NWFhMjZiNzNjNzBlNzk3YTc0ZDM5ZmFkZmRlNGJkMWY5ZjA2MThlYWE5NzU1YmU2MzMyMGM5OTQ0YTRmNk24rvE=: 00:21:12.978 01:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.978 01:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:12.978 01:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.978 01:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.978 01:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.978 01:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:12.978 01:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:12.978 01:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:12.978 01:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:13.236 01:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:21:13.236 01:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:13.236 01:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:13.236 01:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:13.236 01:31:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:13.236 01:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.236 01:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.236 01:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.236 01:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.236 01:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.236 01:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.236 01:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.236 01:31:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.801 00:21:13.801 01:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:13.801 01:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:13.801 01:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.059 01:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.059 01:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.059 01:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.059 01:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.059 01:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.059 01:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:14.059 { 00:21:14.059 "cntlid": 81, 00:21:14.059 "qid": 0, 00:21:14.059 "state": "enabled", 00:21:14.059 "thread": "nvmf_tgt_poll_group_000", 00:21:14.059 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:14.059 "listen_address": { 00:21:14.059 "trtype": "TCP", 00:21:14.059 "adrfam": "IPv4", 00:21:14.059 "traddr": "10.0.0.2", 00:21:14.059 "trsvcid": "4420" 00:21:14.059 }, 00:21:14.059 "peer_address": { 00:21:14.059 "trtype": "TCP", 00:21:14.059 "adrfam": "IPv4", 00:21:14.059 "traddr": "10.0.0.1", 00:21:14.059 "trsvcid": "60372" 00:21:14.059 }, 00:21:14.059 "auth": { 00:21:14.059 "state": "completed", 00:21:14.059 "digest": 
"sha384", 00:21:14.059 "dhgroup": "ffdhe6144" 00:21:14.059 } 00:21:14.059 } 00:21:14.059 ]' 00:21:14.059 01:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:14.318 01:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:14.318 01:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:14.318 01:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:14.318 01:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:14.318 01:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.318 01:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.318 01:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.576 01:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmNhYTM3OWM1YTRhNjI3MzI5MjFkNTcwMWVmMDVlMzgxMzY2MGI5OTQ0ZjU2YjhjmFXGyA==: --dhchap-ctrl-secret DHHC-1:03:ODg0NzdhMzA1MzVjYjk3MWM4ZGQxZmEzNDY5NTZiYTcwMzc0MjBmOTUxNTJiY2I1ODczYjg3NzdiYjBmYThkYzR24vo=: 00:21:14.576 01:31:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZmNhYTM3OWM1YTRhNjI3MzI5MjFkNTcwMWVmMDVlMzgxMzY2MGI5OTQ0ZjU2YjhjmFXGyA==: --dhchap-ctrl-secret DHHC-1:03:ODg0NzdhMzA1MzVjYjk3MWM4ZGQxZmEzNDY5NTZiYTcwMzc0MjBmOTUxNTJiY2I1ODczYjg3NzdiYjBmYThkYzR24vo=: 00:21:15.508 01:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.508 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.508 01:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:15.508 01:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.508 01:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.508 01:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.508 01:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:15.508 01:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:15.508 01:32:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:15.766 01:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:21:15.767 01:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:15.767 01:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:15.767 01:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:15.767 01:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:15.767 01:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.767 01:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.767 01:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.767 01:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.767 01:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.767 01:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.767 01:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.767 01:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:16.331 00:21:16.331 01:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:16.331 01:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:16.332 01:32:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.589 01:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.589 01:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.589 01:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.589 01:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.589 01:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.589 01:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:16.589 { 00:21:16.589 "cntlid": 83, 00:21:16.589 "qid": 0, 00:21:16.589 "state": "enabled", 00:21:16.589 "thread": "nvmf_tgt_poll_group_000", 00:21:16.589 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:16.589 "listen_address": { 00:21:16.589 "trtype": "TCP", 00:21:16.589 "adrfam": "IPv4", 00:21:16.589 "traddr": "10.0.0.2", 00:21:16.589 
"trsvcid": "4420" 00:21:16.589 }, 00:21:16.589 "peer_address": { 00:21:16.589 "trtype": "TCP", 00:21:16.589 "adrfam": "IPv4", 00:21:16.589 "traddr": "10.0.0.1", 00:21:16.589 "trsvcid": "60394" 00:21:16.589 }, 00:21:16.589 "auth": { 00:21:16.589 "state": "completed", 00:21:16.589 "digest": "sha384", 00:21:16.589 "dhgroup": "ffdhe6144" 00:21:16.589 } 00:21:16.589 } 00:21:16.589 ]' 00:21:16.589 01:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:16.846 01:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:16.846 01:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:16.846 01:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:16.846 01:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:16.846 01:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.846 01:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.846 01:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.104 01:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg3MmE0NWY3N2E5MDlhYzUxZWViNzllNGJiZjgzOTgCs8lt: --dhchap-ctrl-secret DHHC-1:02:NDI5MDU1MGY5YjY3MTA1MGI4ZTFhMTU4MWM1NDcyMWExMzE1YTM3ZDFjODI2NjhhpGoTpQ==: 00:21:17.104 01:32:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Mjg3MmE0NWY3N2E5MDlhYzUxZWViNzllNGJiZjgzOTgCs8lt: --dhchap-ctrl-secret DHHC-1:02:NDI5MDU1MGY5YjY3MTA1MGI4ZTFhMTU4MWM1NDcyMWExMzE1YTM3ZDFjODI2NjhhpGoTpQ==: 00:21:18.038 01:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.038 01:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:18.038 01:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.038 01:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.038 01:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.038 01:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:18.038 01:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:18.038 01:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:18.296 
01:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:21:18.296 01:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:18.296 01:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:18.296 01:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:18.296 01:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:18.296 01:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.296 01:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.296 01:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.296 01:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.554 01:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.554 01:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.554 01:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.554 01:32:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.120 00:21:19.120 01:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:19.120 01:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:19.120 01:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.378 01:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.378 01:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.378 01:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.378 01:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.378 01:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.378 01:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:19.378 { 00:21:19.378 "cntlid": 85, 00:21:19.378 "qid": 0, 00:21:19.378 "state": "enabled", 00:21:19.378 "thread": "nvmf_tgt_poll_group_000", 00:21:19.378 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:19.378 "listen_address": { 00:21:19.378 "trtype": "TCP", 00:21:19.378 "adrfam": "IPv4", 00:21:19.378 "traddr": "10.0.0.2", 00:21:19.378 "trsvcid": "4420" 00:21:19.378 }, 00:21:19.378 "peer_address": { 00:21:19.378 "trtype": "TCP", 00:21:19.378 "adrfam": "IPv4", 00:21:19.378 "traddr": "10.0.0.1", 00:21:19.378 "trsvcid": "60420" 00:21:19.378 }, 00:21:19.378 "auth": { 00:21:19.378 "state": "completed", 00:21:19.378 "digest": "sha384", 00:21:19.378 "dhgroup": "ffdhe6144" 00:21:19.378 } 00:21:19.378 } 00:21:19.378 ]' 00:21:19.378 01:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:19.378 01:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:19.378 01:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:19.378 01:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:19.378 01:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:19.378 01:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.378 01:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.378 01:32:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.636 01:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGM1MjgzMzQwMWYzNGNiMzA4MDMwMmNkZTdlMjU4NDg1OGI5Yjg4NDhjNzNkOTI2iWekFw==: --dhchap-ctrl-secret DHHC-1:01:MmZiMDk3ZjEyNDFjY2RmOGUyNmQzZjZmYzFlYzVhYTeWQKIn: 00:21:19.636 01:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGM1MjgzMzQwMWYzNGNiMzA4MDMwMmNkZTdlMjU4NDg1OGI5Yjg4NDhjNzNkOTI2iWekFw==: --dhchap-ctrl-secret DHHC-1:01:MmZiMDk3ZjEyNDFjY2RmOGUyNmQzZjZmYzFlYzVhYTeWQKIn: 00:21:20.570 01:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.828 01:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:20.828 01:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.828 01:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.828 01:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.828 01:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:20.828 01:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:20.828 01:32:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:21.086 01:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:21:21.086 01:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:21.086 01:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:21.086 01:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:21.086 01:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:21.086 01:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.086 01:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:21.086 01:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.086 01:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.086 01:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.086 01:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:21.086 01:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:21.086 01:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:21.651 00:21:21.651 01:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:21.651 01:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:21.651 01:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.909 01:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.909 01:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.909 01:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.909 01:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.909 01:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.909 01:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:21.909 { 00:21:21.909 "cntlid": 87, 
00:21:21.909 "qid": 0, 00:21:21.909 "state": "enabled", 00:21:21.909 "thread": "nvmf_tgt_poll_group_000", 00:21:21.909 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:21.909 "listen_address": { 00:21:21.909 "trtype": "TCP", 00:21:21.909 "adrfam": "IPv4", 00:21:21.909 "traddr": "10.0.0.2", 00:21:21.909 "trsvcid": "4420" 00:21:21.909 }, 00:21:21.909 "peer_address": { 00:21:21.909 "trtype": "TCP", 00:21:21.909 "adrfam": "IPv4", 00:21:21.909 "traddr": "10.0.0.1", 00:21:21.909 "trsvcid": "60448" 00:21:21.909 }, 00:21:21.909 "auth": { 00:21:21.909 "state": "completed", 00:21:21.909 "digest": "sha384", 00:21:21.909 "dhgroup": "ffdhe6144" 00:21:21.909 } 00:21:21.909 } 00:21:21.909 ]' 00:21:21.909 01:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:21.909 01:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:21.909 01:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:21.909 01:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:21.909 01:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:21.909 01:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.909 01:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.909 01:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.167 01:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTM2NWFhMjZiNzNjNzBlNzk3YTc0ZDM5ZmFkZmRlNGJkMWY5ZjA2MThlYWE5NzU1YmU2MzMyMGM5OTQ0YTRmNk24rvE=: 00:21:22.167 01:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MTM2NWFhMjZiNzNjNzBlNzk3YTc0ZDM5ZmFkZmRlNGJkMWY5ZjA2MThlYWE5NzU1YmU2MzMyMGM5OTQ0YTRmNk24rvE=: 00:21:23.100 01:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.100 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.100 01:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:23.100 01:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.100 01:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.100 01:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.100 01:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:23.100 01:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:23.100 01:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:23.100 01:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:23.665 01:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:21:23.665 01:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:23.665 01:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:23.665 01:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:23.665 01:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:23.665 01:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.665 01:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.665 01:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.665 01:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.665 01:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.665 01:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.665 01:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.666 01:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.231 00:21:24.489 01:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:24.489 01:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:24.489 01:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.746 01:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.746 01:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.746 01:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.746 01:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.746 01:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.746 01:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:24.746 { 00:21:24.746 "cntlid": 89, 00:21:24.746 "qid": 0, 00:21:24.746 "state": "enabled", 00:21:24.746 "thread": "nvmf_tgt_poll_group_000", 00:21:24.746 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:24.746 "listen_address": { 00:21:24.746 "trtype": "TCP", 00:21:24.746 "adrfam": "IPv4", 00:21:24.746 "traddr": "10.0.0.2", 00:21:24.746 "trsvcid": "4420" 00:21:24.746 }, 00:21:24.746 "peer_address": { 00:21:24.746 "trtype": "TCP", 00:21:24.746 "adrfam": "IPv4", 00:21:24.746 "traddr": "10.0.0.1", 00:21:24.746 "trsvcid": "38898" 00:21:24.746 }, 00:21:24.746 "auth": { 00:21:24.746 "state": "completed", 00:21:24.746 "digest": "sha384", 00:21:24.746 "dhgroup": "ffdhe8192" 00:21:24.746 } 00:21:24.746 } 00:21:24.746 ]' 00:21:24.746 01:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:24.746 01:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:24.746 01:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:24.746 01:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:24.746 01:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:24.746 01:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.746 01:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.746 01:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.025 01:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmNhYTM3OWM1YTRhNjI3MzI5MjFkNTcwMWVmMDVlMzgxMzY2MGI5OTQ0ZjU2YjhjmFXGyA==: --dhchap-ctrl-secret DHHC-1:03:ODg0NzdhMzA1MzVjYjk3MWM4ZGQxZmEzNDY5NTZiYTcwMzc0MjBmOTUxNTJiY2I1ODczYjg3NzdiYjBmYThkYzR24vo=: 00:21:25.025 01:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZmNhYTM3OWM1YTRhNjI3MzI5MjFkNTcwMWVmMDVlMzgxMzY2MGI5OTQ0ZjU2YjhjmFXGyA==: --dhchap-ctrl-secret DHHC-1:03:ODg0NzdhMzA1MzVjYjk3MWM4ZGQxZmEzNDY5NTZiYTcwMzc0MjBmOTUxNTJiY2I1ODczYjg3NzdiYjBmYThkYzR24vo=: 00:21:25.958 01:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.958 01:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:25.958 01:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.958 01:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.958 01:32:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.958 01:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:25.958 01:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:25.958 01:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:26.216 01:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:21:26.216 01:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:26.216 01:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:26.216 01:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:26.216 01:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:26.216 01:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.216 01:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.216 01:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.216 01:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.474 01:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.474 01:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.474 01:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.474 01:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:27.406 00:21:27.406 01:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:27.406 01:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:27.406 01:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.406 01:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.406 01:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:27.406 01:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.406 01:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.664 01:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.664 01:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:27.664 { 00:21:27.664 "cntlid": 91, 00:21:27.664 "qid": 0, 00:21:27.664 "state": "enabled", 00:21:27.664 "thread": "nvmf_tgt_poll_group_000", 00:21:27.664 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:27.664 "listen_address": { 00:21:27.664 "trtype": "TCP", 00:21:27.664 "adrfam": "IPv4", 00:21:27.664 "traddr": "10.0.0.2", 00:21:27.664 "trsvcid": "4420" 00:21:27.664 }, 00:21:27.664 "peer_address": { 00:21:27.664 "trtype": "TCP", 00:21:27.664 "adrfam": "IPv4", 00:21:27.664 "traddr": "10.0.0.1", 00:21:27.664 "trsvcid": "38938" 00:21:27.664 }, 00:21:27.664 "auth": { 00:21:27.664 "state": "completed", 00:21:27.664 "digest": "sha384", 00:21:27.664 "dhgroup": "ffdhe8192" 00:21:27.664 } 00:21:27.664 } 00:21:27.664 ]' 00:21:27.664 01:32:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:27.664 01:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:27.664 01:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:27.664 01:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:27.664 01:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:27.664 01:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.664 01:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.664 01:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.922 01:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg3MmE0NWY3N2E5MDlhYzUxZWViNzllNGJiZjgzOTgCs8lt: --dhchap-ctrl-secret DHHC-1:02:NDI5MDU1MGY5YjY3MTA1MGI4ZTFhMTU4MWM1NDcyMWExMzE1YTM3ZDFjODI2NjhhpGoTpQ==: 00:21:27.922 01:32:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Mjg3MmE0NWY3N2E5MDlhYzUxZWViNzllNGJiZjgzOTgCs8lt: --dhchap-ctrl-secret DHHC-1:02:NDI5MDU1MGY5YjY3MTA1MGI4ZTFhMTU4MWM1NDcyMWExMzE1YTM3ZDFjODI2NjhhpGoTpQ==: 00:21:28.854 01:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.854 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.854 01:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:28.854 01:32:14 
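(Each pass then checks the negotiated parameters against what was configured, using the qpair JSON printed above. A minimal version of those jq checks, with the sha384 / ffdhe8192 values from this pass; the herestring form is a sketch, the script itself stores the JSON in the qpairs variable and runs the same three jq filters at target/auth.sh@75-77.)

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]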
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.854 01:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.854 01:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.854 01:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:28.854 01:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:28.854 01:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:29.420 01:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:21:29.420 01:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:29.420 01:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:29.420 01:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:29.420 01:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:29.420 01:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:29.420 01:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:29.420 01:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.420 01:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.420 01:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.420 01:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:29.420 01:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:29.420 01:32:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:30.352 00:21:30.352 01:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:30.352 01:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:30.352 01:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:30.352 01:32:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.352 01:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:30.352 01:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.352 01:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.352 01:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.352 01:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:30.352 { 00:21:30.352 "cntlid": 93, 00:21:30.352 "qid": 0, 00:21:30.352 "state": "enabled", 00:21:30.352 "thread": "nvmf_tgt_poll_group_000", 00:21:30.352 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:30.352 "listen_address": { 00:21:30.352 "trtype": "TCP", 00:21:30.353 "adrfam": "IPv4", 00:21:30.353 "traddr": "10.0.0.2", 00:21:30.353 "trsvcid": "4420" 00:21:30.353 }, 00:21:30.353 "peer_address": { 00:21:30.353 "trtype": "TCP", 00:21:30.353 "adrfam": "IPv4", 00:21:30.353 "traddr": "10.0.0.1", 00:21:30.353 "trsvcid": "38954" 00:21:30.353 }, 00:21:30.353 "auth": { 00:21:30.353 "state": "completed", 00:21:30.353 "digest": "sha384", 00:21:30.353 "dhgroup": "ffdhe8192" 00:21:30.353 } 00:21:30.353 } 00:21:30.353 ]' 00:21:30.353 01:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:30.353 01:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:30.353 01:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:30.610 01:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:30.610 01:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:30.610 01:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.610 01:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.610 01:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.868 01:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGM1MjgzMzQwMWYzNGNiMzA4MDMwMmNkZTdlMjU4NDg1OGI5Yjg4NDhjNzNkOTI2iWekFw==: --dhchap-ctrl-secret DHHC-1:01:MmZiMDk3ZjEyNDFjY2RmOGUyNmQzZjZmYzFlYzVhYTeWQKIn: 00:21:30.868 01:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGM1MjgzMzQwMWYzNGNiMzA4MDMwMmNkZTdlMjU4NDg1OGI5Yjg4NDhjNzNkOTI2iWekFw==: --dhchap-ctrl-secret DHHC-1:01:MmZiMDk3ZjEyNDFjY2RmOGUyNmQzZjZmYzFlYzVhYTeWQKIn: 00:21:31.801 01:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.801 01:32:17 
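(After the SPDK-to-SPDK attach is verified and detached, the same key pair is exercised from the kernel host with nvme-cli, passing the DHHC-1 secrets directly; that is the nvme connect / nvme disconnect pair just above. Condensed, with the key2/ckey2 test secrets from this log; the nvmf_subsystem_remove_host cleanup that the next entries show then detaches the host from the subsystem again.)

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$hostnqn" \
      --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 \
      --dhchap-secret 'DHHC-1:02:MGM1MjgzMzQwMWYzNGNiMzA4MDMwMmNkZTdlMjU4NDg1OGI5Yjg4NDhjNzNkOTI2iWekFw==:' \
      --dhchap-ctrl-secret 'DHHC-1:01:MmZiMDk3ZjEyNDFjY2RmOGUyNmQzZjZmYzFlYzVhYTeWQKIn:'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0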
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:31.801 01:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.801 01:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.801 01:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.801 01:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:31.801 01:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:31.801 01:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:32.058 01:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:21:32.058 01:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:32.058 01:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:32.058 01:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:32.058 01:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:32.058 01:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.058 01:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:32.058 01:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.058 01:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.058 01:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.058 01:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:32.058 01:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:32.058 01:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:32.990 00:21:32.990 01:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:32.990 01:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:32.990 01:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.248 01:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.248 01:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.248 01:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.248 01:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.248 01:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.248 01:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:33.248 { 00:21:33.248 "cntlid": 95, 00:21:33.248 "qid": 0, 00:21:33.248 "state": "enabled", 00:21:33.248 "thread": "nvmf_tgt_poll_group_000", 00:21:33.248 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:33.248 "listen_address": { 00:21:33.248 "trtype": "TCP", 00:21:33.248 "adrfam": "IPv4", 00:21:33.248 "traddr": "10.0.0.2", 00:21:33.248 "trsvcid": "4420" 00:21:33.248 }, 00:21:33.248 "peer_address": { 00:21:33.248 "trtype": "TCP", 00:21:33.248 "adrfam": "IPv4", 00:21:33.248 "traddr": "10.0.0.1", 00:21:33.248 "trsvcid": "38980" 00:21:33.248 }, 00:21:33.248 "auth": { 00:21:33.248 "state": "completed", 00:21:33.248 "digest": "sha384", 00:21:33.248 "dhgroup": "ffdhe8192" 00:21:33.248 } 00:21:33.248 } 00:21:33.248 ]' 00:21:33.248 01:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:33.248 01:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:33.248 01:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:33.248 01:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:33.248 01:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:33.506 01:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.506 01:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.506 01:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.763 01:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTM2NWFhMjZiNzNjNzBlNzk3YTc0ZDM5ZmFkZmRlNGJkMWY5ZjA2MThlYWE5NzU1YmU2MzMyMGM5OTQ0YTRmNk24rvE=: 00:21:33.763 01:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MTM2NWFhMjZiNzNjNzBlNzk3YTc0ZDM5ZmFkZmRlNGJkMWY5ZjA2MThlYWE5NzU1YmU2MzMyMGM5OTQ0YTRmNk24rvE=: 00:21:34.697 01:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.697 01:32:20 
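(A detail worth noting from the key3 passes, including the one that just completed: ckeys[3] is empty, so the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion at target/auth.sh@68 produces no extra arguments and key3 is exercised without a controller key, which is why the add_host and attach_controller calls above carry only --dhchap-key key3. A small stand-alone illustration of that expansion, with placeholder array contents.)

  ckeys=( [0]=ctrl0 [1]=ctrl1 [2]=ctrl2 )    # placeholder values; index 3 left unset, as in this run
  keyid=3
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  echo "extra args for key$keyid: ${ckey[*]:-<none>}"    # prints <none> for key3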
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:34.697 01:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.697 01:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.697 01:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.697 01:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:34.697 01:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:34.697 01:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:34.697 01:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:34.697 01:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:34.955 01:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:21:34.955 01:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:34.955 01:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:34.955 01:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:34.955 01:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:34.955 01:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.955 01:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.955 01:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.955 01:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.955 01:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.955 01:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.955 01:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.955 01:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.213 00:21:35.213 
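(At this point the trace has moved on to sha512 with the null DH group, i.e. the outer iteration has advanced a level. As the target/auth.sh@118-123 markers above indicate, the driving structure is a nested loop over digests, DH groups and key indexes; a sketch of that shape is below. Only sha384/sha512 and null/ffdhe6144/ffdhe8192 are actually visible in this log, so the full array contents are an assumption, and hostrpc is the script's wrapper for rpc.py -s /var/tmp/host.sock.)

  for digest in "${digests[@]}"; do        # sha384 and sha512 appear in this run
    for dhgroup in "${dhgroups[@]}"; do    # null, ffdhe6144 and ffdhe8192 appear in this run
      for keyid in "${!keys[@]}"; do       # keys 0..3
        # restrict the host-side initiator to a single digest/dhgroup proposal ...
        hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # ... then run one add_host / attach / verify / nvme-connect pass
        connect_authenticate "$digest" "$dhgroup" "$keyid"
      done
    done
  done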
01:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:35.213 01:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:35.213 01:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.470 01:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.470 01:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.470 01:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.470 01:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.470 01:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.470 01:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:35.470 { 00:21:35.470 "cntlid": 97, 00:21:35.470 "qid": 0, 00:21:35.470 "state": "enabled", 00:21:35.470 "thread": "nvmf_tgt_poll_group_000", 00:21:35.470 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:35.470 "listen_address": { 00:21:35.470 "trtype": "TCP", 00:21:35.470 "adrfam": "IPv4", 00:21:35.470 "traddr": "10.0.0.2", 00:21:35.470 "trsvcid": "4420" 00:21:35.470 }, 00:21:35.470 "peer_address": { 00:21:35.470 "trtype": "TCP", 00:21:35.470 "adrfam": "IPv4", 00:21:35.470 "traddr": "10.0.0.1", 00:21:35.470 "trsvcid": "39528" 00:21:35.470 }, 00:21:35.470 "auth": { 00:21:35.470 "state": "completed", 00:21:35.470 "digest": "sha512", 00:21:35.470 "dhgroup": "null" 00:21:35.470 } 00:21:35.470 } 00:21:35.470 ]' 00:21:35.728 01:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:35.728 01:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:35.728 01:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:35.728 01:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:35.728 01:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:35.728 01:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.728 01:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.728 01:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.986 01:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmNhYTM3OWM1YTRhNjI3MzI5MjFkNTcwMWVmMDVlMzgxMzY2MGI5OTQ0ZjU2YjhjmFXGyA==: --dhchap-ctrl-secret DHHC-1:03:ODg0NzdhMzA1MzVjYjk3MWM4ZGQxZmEzNDY5NTZiYTcwMzc0MjBmOTUxNTJiY2I1ODczYjg3NzdiYjBmYThkYzR24vo=: 00:21:35.986 01:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZmNhYTM3OWM1YTRhNjI3MzI5MjFkNTcwMWVmMDVlMzgxMzY2MGI5OTQ0ZjU2YjhjmFXGyA==: --dhchap-ctrl-secret DHHC-1:03:ODg0NzdhMzA1MzVjYjk3MWM4ZGQxZmEzNDY5NTZiYTcwMzc0MjBmOTUxNTJiY2I1ODczYjg3NzdiYjBmYThkYzR24vo=: 00:21:36.919 01:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.919 01:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:36.919 01:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.919 01:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.919 01:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.919 01:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:36.919 01:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:36.919 01:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:37.177 01:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:21:37.177 01:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:37.177 01:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:37.177 01:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:37.177 01:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:37.177 01:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.177 01:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.177 01:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.177 01:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.177 01:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.177 01:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.177 01:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.177 01:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.440 00:21:37.440 01:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:37.440 01:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:37.440 01:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.747 01:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.747 01:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.747 01:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.747 01:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.747 01:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.747 01:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:37.747 { 00:21:37.747 "cntlid": 99, 00:21:37.747 "qid": 0, 00:21:37.747 "state": "enabled", 00:21:37.747 "thread": "nvmf_tgt_poll_group_000", 00:21:37.747 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:37.747 "listen_address": { 00:21:37.747 "trtype": "TCP", 00:21:37.747 "adrfam": "IPv4", 00:21:37.747 "traddr": "10.0.0.2", 00:21:37.747 "trsvcid": "4420" 00:21:37.747 }, 00:21:37.747 "peer_address": { 00:21:37.747 "trtype": "TCP", 00:21:37.747 "adrfam": "IPv4", 00:21:37.747 "traddr": "10.0.0.1", 00:21:37.747 "trsvcid": "39554" 00:21:37.747 }, 00:21:37.747 "auth": { 00:21:37.747 "state": "completed", 00:21:37.747 "digest": "sha512", 00:21:37.747 "dhgroup": "null" 00:21:37.747 } 00:21:37.747 } 00:21:37.747 ]' 00:21:37.747 01:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:38.023 01:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:38.023 01:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:38.023 01:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:38.023 01:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:38.023 01:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.023 01:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.023 01:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.286 01:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg3MmE0NWY3N2E5MDlhYzUxZWViNzllNGJiZjgzOTgCs8lt: --dhchap-ctrl-secret DHHC-1:02:NDI5MDU1MGY5YjY3MTA1MGI4ZTFhMTU4MWM1NDcyMWExMzE1YTM3ZDFjODI2NjhhpGoTpQ==: 00:21:38.286 01:32:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Mjg3MmE0NWY3N2E5MDlhYzUxZWViNzllNGJiZjgzOTgCs8lt: --dhchap-ctrl-secret DHHC-1:02:NDI5MDU1MGY5YjY3MTA1MGI4ZTFhMTU4MWM1NDcyMWExMzE1YTM3ZDFjODI2NjhhpGoTpQ==: 00:21:39.226 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.226 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:39.226 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.226 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.226 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.226 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:39.226 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:39.226 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:39.484 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:21:39.484 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:39.484 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:39.484 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:39.484 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:39.484 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.484 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:39.484 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.484 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.484 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.484 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:39.484 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:21:39.484 01:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:39.741 00:21:39.741 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:39.741 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:39.741 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.999 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.999 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.999 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.999 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.257 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.257 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:40.257 { 00:21:40.257 "cntlid": 101, 00:21:40.257 "qid": 0, 00:21:40.257 "state": "enabled", 00:21:40.257 "thread": "nvmf_tgt_poll_group_000", 00:21:40.257 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:40.257 "listen_address": { 00:21:40.257 "trtype": "TCP", 00:21:40.257 "adrfam": "IPv4", 00:21:40.257 "traddr": "10.0.0.2", 00:21:40.257 "trsvcid": "4420" 00:21:40.257 }, 00:21:40.257 "peer_address": { 00:21:40.257 "trtype": "TCP", 00:21:40.257 "adrfam": "IPv4", 00:21:40.257 "traddr": "10.0.0.1", 00:21:40.257 "trsvcid": "39588" 00:21:40.257 }, 00:21:40.257 "auth": { 00:21:40.257 "state": "completed", 00:21:40.257 "digest": "sha512", 00:21:40.257 "dhgroup": "null" 00:21:40.257 } 00:21:40.257 } 00:21:40.257 ]' 00:21:40.257 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:40.257 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:40.257 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:40.257 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:40.257 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:40.257 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.257 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.257 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.514 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:MGM1MjgzMzQwMWYzNGNiMzA4MDMwMmNkZTdlMjU4NDg1OGI5Yjg4NDhjNzNkOTI2iWekFw==: --dhchap-ctrl-secret DHHC-1:01:MmZiMDk3ZjEyNDFjY2RmOGUyNmQzZjZmYzFlYzVhYTeWQKIn: 00:21:40.515 01:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGM1MjgzMzQwMWYzNGNiMzA4MDMwMmNkZTdlMjU4NDg1OGI5Yjg4NDhjNzNkOTI2iWekFw==: --dhchap-ctrl-secret DHHC-1:01:MmZiMDk3ZjEyNDFjY2RmOGUyNmQzZjZmYzFlYzVhYTeWQKIn: 00:21:41.448 01:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.448 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.448 01:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:41.448 01:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.448 01:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.448 01:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.448 01:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:41.448 01:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:41.448 01:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:41.706 01:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:41.706 01:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:41.706 01:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:41.706 01:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:41.706 01:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:41.706 01:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.706 01:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:41.706 01:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.706 01:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.706 01:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.706 01:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:41.706 01:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:41.706 01:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:41.964 00:21:42.221 01:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:42.221 01:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:42.221 01:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.478 01:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.478 01:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.478 01:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.478 01:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.478 01:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.479 01:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:42.479 { 00:21:42.479 "cntlid": 103, 00:21:42.479 "qid": 0, 00:21:42.479 "state": "enabled", 00:21:42.479 "thread": "nvmf_tgt_poll_group_000", 00:21:42.479 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:42.479 "listen_address": { 00:21:42.479 "trtype": "TCP", 00:21:42.479 "adrfam": "IPv4", 00:21:42.479 "traddr": "10.0.0.2", 00:21:42.479 "trsvcid": "4420" 00:21:42.479 }, 00:21:42.479 "peer_address": { 00:21:42.479 "trtype": "TCP", 00:21:42.479 "adrfam": "IPv4", 00:21:42.479 "traddr": "10.0.0.1", 00:21:42.479 "trsvcid": "39622" 00:21:42.479 }, 00:21:42.479 "auth": { 00:21:42.479 "state": "completed", 00:21:42.479 "digest": "sha512", 00:21:42.479 "dhgroup": "null" 00:21:42.479 } 00:21:42.479 } 00:21:42.479 ]' 00:21:42.479 01:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:42.479 01:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:42.479 01:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:42.479 01:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:42.479 01:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:42.479 01:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.479 01:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.479 01:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.735 01:32:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTM2NWFhMjZiNzNjNzBlNzk3YTc0ZDM5ZmFkZmRlNGJkMWY5ZjA2MThlYWE5NzU1YmU2MzMyMGM5OTQ0YTRmNk24rvE=: 00:21:42.735 01:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MTM2NWFhMjZiNzNjNzBlNzk3YTc0ZDM5ZmFkZmRlNGJkMWY5ZjA2MThlYWE5NzU1YmU2MzMyMGM5OTQ0YTRmNk24rvE=: 00:21:43.666 01:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.666 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.666 01:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:43.666 01:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.666 01:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.666 01:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.666 01:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:43.666 01:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:43.666 01:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:43.666 01:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:43.924 01:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:43.924 01:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:43.924 01:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:43.924 01:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:43.924 01:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:43.924 01:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.924 01:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:43.924 01:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.924 01:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.924 01:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.924 01:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
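[editor sketch] On the target side the same pattern repeats: register the host NQN with the key(s) for this iteration, then, once the initiator has attached, read the qpair back and check the negotiated auth parameters. A sketch of that check, assuming rpc_cmd in the log talks to the target's default RPC socket (/var/tmp/spdk.sock):

#!/usr/bin/env bash
# Target-side half of the cycle: allow the host, then verify the qpair auth fields.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# After the host attaches, the qpair should report the expected digest,
# dhgroup and a completed authentication state (as in the JSON dumps above).
qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# Remove the host again when the iteration is done.
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"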
00:21:43.924 01:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:43.924 01:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:44.490 00:21:44.490 01:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:44.490 01:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:44.490 01:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.748 01:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.748 01:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.748 01:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.748 01:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.748 01:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.748 01:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:44.748 { 00:21:44.748 "cntlid": 105, 00:21:44.748 "qid": 0, 00:21:44.748 "state": "enabled", 00:21:44.748 "thread": "nvmf_tgt_poll_group_000", 00:21:44.748 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:44.748 "listen_address": { 00:21:44.748 "trtype": "TCP", 00:21:44.748 "adrfam": "IPv4", 00:21:44.748 "traddr": "10.0.0.2", 00:21:44.748 "trsvcid": "4420" 00:21:44.748 }, 00:21:44.748 "peer_address": { 00:21:44.748 "trtype": "TCP", 00:21:44.748 "adrfam": "IPv4", 00:21:44.748 "traddr": "10.0.0.1", 00:21:44.748 "trsvcid": "41918" 00:21:44.748 }, 00:21:44.748 "auth": { 00:21:44.748 "state": "completed", 00:21:44.748 "digest": "sha512", 00:21:44.748 "dhgroup": "ffdhe2048" 00:21:44.748 } 00:21:44.748 } 00:21:44.748 ]' 00:21:44.748 01:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:44.748 01:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:44.748 01:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:44.748 01:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:44.748 01:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:44.748 01:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.748 01:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.748 01:32:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.005 01:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmNhYTM3OWM1YTRhNjI3MzI5MjFkNTcwMWVmMDVlMzgxMzY2MGI5OTQ0ZjU2YjhjmFXGyA==: --dhchap-ctrl-secret DHHC-1:03:ODg0NzdhMzA1MzVjYjk3MWM4ZGQxZmEzNDY5NTZiYTcwMzc0MjBmOTUxNTJiY2I1ODczYjg3NzdiYjBmYThkYzR24vo=: 00:21:45.005 01:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZmNhYTM3OWM1YTRhNjI3MzI5MjFkNTcwMWVmMDVlMzgxMzY2MGI5OTQ0ZjU2YjhjmFXGyA==: --dhchap-ctrl-secret DHHC-1:03:ODg0NzdhMzA1MzVjYjk3MWM4ZGQxZmEzNDY5NTZiYTcwMzc0MjBmOTUxNTJiY2I1ODczYjg3NzdiYjBmYThkYzR24vo=: 00:21:46.377 01:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.377 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.377 01:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:46.377 01:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.377 01:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.377 01:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.377 01:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:46.377 01:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:46.377 01:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:46.377 01:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:46.377 01:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:46.377 01:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:46.377 01:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:46.377 01:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:46.377 01:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.377 01:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.377 01:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.377 01:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:46.377 01:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.377 01:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.377 01:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.377 01:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.635 00:21:46.635 01:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:46.635 01:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:46.635 01:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.893 01:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.893 01:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.893 01:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.893 01:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.893 01:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.893 01:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:46.893 { 00:21:46.893 "cntlid": 107, 00:21:46.893 "qid": 0, 00:21:46.893 "state": "enabled", 00:21:46.893 "thread": "nvmf_tgt_poll_group_000", 00:21:46.893 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:46.893 "listen_address": { 00:21:46.893 "trtype": "TCP", 00:21:46.893 "adrfam": "IPv4", 00:21:46.893 "traddr": "10.0.0.2", 00:21:46.893 "trsvcid": "4420" 00:21:46.893 }, 00:21:46.893 "peer_address": { 00:21:46.893 "trtype": "TCP", 00:21:46.893 "adrfam": "IPv4", 00:21:46.893 "traddr": "10.0.0.1", 00:21:46.893 "trsvcid": "41934" 00:21:46.893 }, 00:21:46.893 "auth": { 00:21:46.893 "state": "completed", 00:21:46.893 "digest": "sha512", 00:21:46.893 "dhgroup": "ffdhe2048" 00:21:46.893 } 00:21:46.893 } 00:21:46.893 ]' 00:21:46.893 01:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:47.150 01:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:47.151 01:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:47.151 01:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:47.151 01:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:21:47.151 01:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.151 01:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.151 01:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.408 01:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg3MmE0NWY3N2E5MDlhYzUxZWViNzllNGJiZjgzOTgCs8lt: --dhchap-ctrl-secret DHHC-1:02:NDI5MDU1MGY5YjY3MTA1MGI4ZTFhMTU4MWM1NDcyMWExMzE1YTM3ZDFjODI2NjhhpGoTpQ==: 00:21:47.408 01:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Mjg3MmE0NWY3N2E5MDlhYzUxZWViNzllNGJiZjgzOTgCs8lt: --dhchap-ctrl-secret DHHC-1:02:NDI5MDU1MGY5YjY3MTA1MGI4ZTFhMTU4MWM1NDcyMWExMzE1YTM3ZDFjODI2NjhhpGoTpQ==: 00:21:48.342 01:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.342 01:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:48.342 01:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.342 01:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.342 01:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.342 01:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:48.342 01:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:48.342 01:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:48.600 01:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:48.600 01:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:48.600 01:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:48.600 01:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:48.600 01:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:48.600 01:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.600 01:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
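[editor sketch] The nvme_connect/disconnect pairs above run the same exchange through the kernel initiator with nvme-cli, passing the DHHC-1 secrets directly rather than keyring names. Condensed, with the secrets left as placeholders since the literal values are specific to this run:

#!/usr/bin/env bash
# Kernel-initiator variant of the cycle, via nvme-cli.
hostid=5b23e107-7094-e311-b1cb-001e67a97d55
hostnqn=nqn.2014-08.org.nvmexpress:uuid:$hostid

# $key / $ckey stand for the DHHC-1:xx:...: secrets generated for this run.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid "$hostid" -l 0 \
    --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"

# Drop the connection before the next digest/dhgroup/key combination.
nvme disconnect -n nqn.2024-03.io.spdk:cnode0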
00:21:48.600 01:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.600 01:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.600 01:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.600 01:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:48.600 01:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:48.600 01:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:49.165 00:21:49.165 01:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:49.165 01:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:49.165 01:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.423 01:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.423 01:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.423 01:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.423 01:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.423 01:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.423 01:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:49.423 { 00:21:49.423 "cntlid": 109, 00:21:49.423 "qid": 0, 00:21:49.423 "state": "enabled", 00:21:49.423 "thread": "nvmf_tgt_poll_group_000", 00:21:49.423 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:49.423 "listen_address": { 00:21:49.423 "trtype": "TCP", 00:21:49.423 "adrfam": "IPv4", 00:21:49.423 "traddr": "10.0.0.2", 00:21:49.423 "trsvcid": "4420" 00:21:49.423 }, 00:21:49.423 "peer_address": { 00:21:49.423 "trtype": "TCP", 00:21:49.423 "adrfam": "IPv4", 00:21:49.423 "traddr": "10.0.0.1", 00:21:49.423 "trsvcid": "41960" 00:21:49.423 }, 00:21:49.423 "auth": { 00:21:49.423 "state": "completed", 00:21:49.423 "digest": "sha512", 00:21:49.423 "dhgroup": "ffdhe2048" 00:21:49.423 } 00:21:49.423 } 00:21:49.423 ]' 00:21:49.423 01:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:49.423 01:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:49.423 01:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:49.423 01:32:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:49.423 01:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:49.423 01:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.423 01:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.423 01:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.681 01:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGM1MjgzMzQwMWYzNGNiMzA4MDMwMmNkZTdlMjU4NDg1OGI5Yjg4NDhjNzNkOTI2iWekFw==: --dhchap-ctrl-secret DHHC-1:01:MmZiMDk3ZjEyNDFjY2RmOGUyNmQzZjZmYzFlYzVhYTeWQKIn: 00:21:49.681 01:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGM1MjgzMzQwMWYzNGNiMzA4MDMwMmNkZTdlMjU4NDg1OGI5Yjg4NDhjNzNkOTI2iWekFw==: --dhchap-ctrl-secret DHHC-1:01:MmZiMDk3ZjEyNDFjY2RmOGUyNmQzZjZmYzFlYzVhYTeWQKIn: 00:21:51.053 01:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.053 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.053 01:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:51.053 01:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.053 01:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.053 01:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.053 01:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:51.053 01:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:51.053 01:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:51.053 01:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:51.053 01:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:51.053 01:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:51.053 01:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:51.053 01:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:51.053 01:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.053 01:32:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:51.053 01:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.053 01:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.053 01:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.053 01:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:51.053 01:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:51.053 01:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:51.619 00:21:51.619 01:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:51.619 01:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:51.619 01:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.877 01:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.877 01:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:51.877 01:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.877 01:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.877 01:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.877 01:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:51.877 { 00:21:51.877 "cntlid": 111, 00:21:51.877 "qid": 0, 00:21:51.877 "state": "enabled", 00:21:51.877 "thread": "nvmf_tgt_poll_group_000", 00:21:51.877 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:51.877 "listen_address": { 00:21:51.877 "trtype": "TCP", 00:21:51.877 "adrfam": "IPv4", 00:21:51.877 "traddr": "10.0.0.2", 00:21:51.877 "trsvcid": "4420" 00:21:51.877 }, 00:21:51.877 "peer_address": { 00:21:51.877 "trtype": "TCP", 00:21:51.877 "adrfam": "IPv4", 00:21:51.877 "traddr": "10.0.0.1", 00:21:51.877 "trsvcid": "41974" 00:21:51.877 }, 00:21:51.877 "auth": { 00:21:51.877 "state": "completed", 00:21:51.877 "digest": "sha512", 00:21:51.877 "dhgroup": "ffdhe2048" 00:21:51.877 } 00:21:51.877 } 00:21:51.877 ]' 00:21:51.877 01:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:51.877 01:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:51.877 
01:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:51.877 01:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:51.877 01:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:51.877 01:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.877 01:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.877 01:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.134 01:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTM2NWFhMjZiNzNjNzBlNzk3YTc0ZDM5ZmFkZmRlNGJkMWY5ZjA2MThlYWE5NzU1YmU2MzMyMGM5OTQ0YTRmNk24rvE=: 00:21:52.134 01:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MTM2NWFhMjZiNzNjNzBlNzk3YTc0ZDM5ZmFkZmRlNGJkMWY5ZjA2MThlYWE5NzU1YmU2MzMyMGM5OTQ0YTRmNk24rvE=: 00:21:53.067 01:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.067 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.067 01:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:53.067 01:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.067 01:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.067 01:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.067 01:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:53.067 01:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:53.067 01:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:53.067 01:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:53.325 01:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:53.325 01:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:53.325 01:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:53.325 01:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:53.325 01:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:53.325 01:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.325 01:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:53.325 01:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.325 01:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.325 01:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.325 01:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:53.325 01:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:53.325 01:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:53.890 00:21:53.890 01:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:53.890 01:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:53.890 01:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.148 01:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.148 01:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.148 01:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.148 01:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.148 01:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.148 01:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:54.148 { 00:21:54.148 "cntlid": 113, 00:21:54.148 "qid": 0, 00:21:54.148 "state": "enabled", 00:21:54.148 "thread": "nvmf_tgt_poll_group_000", 00:21:54.148 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:54.148 "listen_address": { 00:21:54.148 "trtype": "TCP", 00:21:54.148 "adrfam": "IPv4", 00:21:54.148 "traddr": "10.0.0.2", 00:21:54.148 "trsvcid": "4420" 00:21:54.148 }, 00:21:54.148 "peer_address": { 00:21:54.148 "trtype": "TCP", 00:21:54.148 "adrfam": "IPv4", 00:21:54.148 "traddr": "10.0.0.1", 00:21:54.148 "trsvcid": "52704" 00:21:54.148 }, 00:21:54.148 "auth": { 00:21:54.148 "state": "completed", 00:21:54.148 "digest": "sha512", 00:21:54.148 "dhgroup": "ffdhe3072" 00:21:54.148 } 00:21:54.148 } 00:21:54.148 ]' 00:21:54.148 01:32:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:54.148 01:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:54.148 01:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:54.148 01:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:54.148 01:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:54.148 01:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.148 01:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.148 01:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.407 01:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmNhYTM3OWM1YTRhNjI3MzI5MjFkNTcwMWVmMDVlMzgxMzY2MGI5OTQ0ZjU2YjhjmFXGyA==: --dhchap-ctrl-secret DHHC-1:03:ODg0NzdhMzA1MzVjYjk3MWM4ZGQxZmEzNDY5NTZiYTcwMzc0MjBmOTUxNTJiY2I1ODczYjg3NzdiYjBmYThkYzR24vo=: 00:21:54.407 01:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZmNhYTM3OWM1YTRhNjI3MzI5MjFkNTcwMWVmMDVlMzgxMzY2MGI5OTQ0ZjU2YjhjmFXGyA==: --dhchap-ctrl-secret DHHC-1:03:ODg0NzdhMzA1MzVjYjk3MWM4ZGQxZmEzNDY5NTZiYTcwMzc0MjBmOTUxNTJiY2I1ODczYjg3NzdiYjBmYThkYzR24vo=: 00:21:55.340 01:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.340 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.340 01:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:55.340 01:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.340 01:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.340 01:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.340 01:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:55.341 01:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:55.341 01:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:55.906 01:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:55.906 01:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:55.906 01:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:21:55.906 01:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:55.906 01:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:55.906 01:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.906 01:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:55.906 01:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.906 01:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.906 01:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.906 01:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:55.906 01:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:55.906 01:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.164 00:21:56.164 01:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:56.164 01:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:56.164 01:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.423 01:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.423 01:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.423 01:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.423 01:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.423 01:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.423 01:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:56.423 { 00:21:56.423 "cntlid": 115, 00:21:56.423 "qid": 0, 00:21:56.423 "state": "enabled", 00:21:56.423 "thread": "nvmf_tgt_poll_group_000", 00:21:56.423 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:56.423 "listen_address": { 00:21:56.423 "trtype": "TCP", 00:21:56.423 "adrfam": "IPv4", 00:21:56.423 "traddr": "10.0.0.2", 00:21:56.423 "trsvcid": "4420" 00:21:56.423 }, 00:21:56.423 "peer_address": { 00:21:56.423 "trtype": "TCP", 00:21:56.423 "adrfam": "IPv4", 
00:21:56.423 "traddr": "10.0.0.1", 00:21:56.423 "trsvcid": "52742" 00:21:56.423 }, 00:21:56.423 "auth": { 00:21:56.423 "state": "completed", 00:21:56.423 "digest": "sha512", 00:21:56.423 "dhgroup": "ffdhe3072" 00:21:56.423 } 00:21:56.423 } 00:21:56.423 ]' 00:21:56.423 01:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:56.423 01:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:56.423 01:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:56.423 01:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:56.423 01:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:56.423 01:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.423 01:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.423 01:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.681 01:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg3MmE0NWY3N2E5MDlhYzUxZWViNzllNGJiZjgzOTgCs8lt: --dhchap-ctrl-secret DHHC-1:02:NDI5MDU1MGY5YjY3MTA1MGI4ZTFhMTU4MWM1NDcyMWExMzE1YTM3ZDFjODI2NjhhpGoTpQ==: 00:21:56.681 01:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Mjg3MmE0NWY3N2E5MDlhYzUxZWViNzllNGJiZjgzOTgCs8lt: --dhchap-ctrl-secret DHHC-1:02:NDI5MDU1MGY5YjY3MTA1MGI4ZTFhMTU4MWM1NDcyMWExMzE1YTM3ZDFjODI2NjhhpGoTpQ==: 00:21:58.054 01:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.054 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.054 01:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:58.054 01:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.054 01:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.054 01:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.054 01:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:58.054 01:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:58.054 01:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:58.054 01:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:21:58.054 01:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:58.054 01:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:58.054 01:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:58.054 01:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:58.054 01:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.054 01:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.055 01:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.055 01:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.055 01:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.055 01:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.055 01:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.055 01:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.620 00:21:58.620 01:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:58.620 01:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:58.620 01:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.878 01:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.878 01:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.878 01:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.878 01:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.878 01:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.878 01:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:58.878 { 00:21:58.878 "cntlid": 117, 00:21:58.878 "qid": 0, 00:21:58.878 "state": "enabled", 00:21:58.878 "thread": "nvmf_tgt_poll_group_000", 00:21:58.878 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:58.878 "listen_address": { 00:21:58.878 "trtype": "TCP", 
00:21:58.878 "adrfam": "IPv4", 00:21:58.878 "traddr": "10.0.0.2", 00:21:58.878 "trsvcid": "4420" 00:21:58.878 }, 00:21:58.878 "peer_address": { 00:21:58.878 "trtype": "TCP", 00:21:58.878 "adrfam": "IPv4", 00:21:58.878 "traddr": "10.0.0.1", 00:21:58.878 "trsvcid": "52778" 00:21:58.878 }, 00:21:58.878 "auth": { 00:21:58.878 "state": "completed", 00:21:58.878 "digest": "sha512", 00:21:58.878 "dhgroup": "ffdhe3072" 00:21:58.878 } 00:21:58.878 } 00:21:58.878 ]' 00:21:58.878 01:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:58.878 01:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:58.878 01:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:58.878 01:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:58.878 01:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:58.878 01:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.878 01:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.878 01:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.135 01:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGM1MjgzMzQwMWYzNGNiMzA4MDMwMmNkZTdlMjU4NDg1OGI5Yjg4NDhjNzNkOTI2iWekFw==: --dhchap-ctrl-secret DHHC-1:01:MmZiMDk3ZjEyNDFjY2RmOGUyNmQzZjZmYzFlYzVhYTeWQKIn: 00:21:59.135 01:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGM1MjgzMzQwMWYzNGNiMzA4MDMwMmNkZTdlMjU4NDg1OGI5Yjg4NDhjNzNkOTI2iWekFw==: --dhchap-ctrl-secret DHHC-1:01:MmZiMDk3ZjEyNDFjY2RmOGUyNmQzZjZmYzFlYzVhYTeWQKIn: 00:22:00.069 01:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:00.327 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:00.327 01:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:00.327 01:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.327 01:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.327 01:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.327 01:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:00.327 01:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:00.327 01:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:00.585 01:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:22:00.585 01:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:00.585 01:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:00.585 01:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:00.585 01:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:00.585 01:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:00.585 01:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:00.585 01:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.585 01:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.585 01:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.585 01:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:00.585 01:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:00.585 01:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:00.842 00:22:00.842 01:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:00.842 01:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:00.842 01:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.100 01:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.100 01:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:01.100 01:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.100 01:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.100 01:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.100 01:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:01.100 { 00:22:01.100 "cntlid": 119, 00:22:01.100 "qid": 0, 00:22:01.100 "state": "enabled", 00:22:01.100 "thread": "nvmf_tgt_poll_group_000", 00:22:01.100 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:01.100 "listen_address": { 00:22:01.100 "trtype": "TCP", 00:22:01.100 "adrfam": "IPv4", 00:22:01.100 "traddr": "10.0.0.2", 00:22:01.100 "trsvcid": "4420" 00:22:01.100 }, 00:22:01.100 "peer_address": { 00:22:01.100 "trtype": "TCP", 00:22:01.100 "adrfam": "IPv4", 00:22:01.100 "traddr": "10.0.0.1", 00:22:01.100 "trsvcid": "52810" 00:22:01.101 }, 00:22:01.101 "auth": { 00:22:01.101 "state": "completed", 00:22:01.101 "digest": "sha512", 00:22:01.101 "dhgroup": "ffdhe3072" 00:22:01.101 } 00:22:01.101 } 00:22:01.101 ]' 00:22:01.101 01:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:01.359 01:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:01.359 01:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:01.359 01:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:01.359 01:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:01.359 01:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:01.359 01:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:01.359 01:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.616 01:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTM2NWFhMjZiNzNjNzBlNzk3YTc0ZDM5ZmFkZmRlNGJkMWY5ZjA2MThlYWE5NzU1YmU2MzMyMGM5OTQ0YTRmNk24rvE=: 00:22:01.616 01:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MTM2NWFhMjZiNzNjNzBlNzk3YTc0ZDM5ZmFkZmRlNGJkMWY5ZjA2MThlYWE5NzU1YmU2MzMyMGM5OTQ0YTRmNk24rvE=: 00:22:02.555 01:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.555 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.555 01:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:02.555 01:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.555 01:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.555 01:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.555 01:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:02.555 01:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:02.555 01:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:02.555 01:32:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:02.820 01:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:22:02.820 01:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:02.820 01:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:02.820 01:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:02.820 01:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:02.820 01:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.820 01:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.820 01:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.820 01:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.820 01:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.820 01:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.820 01:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.820 01:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:03.385 00:22:03.385 01:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:03.385 01:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:03.385 01:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.644 01:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.644 01:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.644 01:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.644 01:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.644 01:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.644 01:32:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:03.644 { 00:22:03.644 "cntlid": 121, 00:22:03.644 "qid": 0, 00:22:03.644 "state": "enabled", 00:22:03.644 "thread": "nvmf_tgt_poll_group_000", 00:22:03.644 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:03.644 "listen_address": { 00:22:03.644 "trtype": "TCP", 00:22:03.644 "adrfam": "IPv4", 00:22:03.644 "traddr": "10.0.0.2", 00:22:03.644 "trsvcid": "4420" 00:22:03.644 }, 00:22:03.644 "peer_address": { 00:22:03.644 "trtype": "TCP", 00:22:03.644 "adrfam": "IPv4", 00:22:03.644 "traddr": "10.0.0.1", 00:22:03.644 "trsvcid": "52828" 00:22:03.644 }, 00:22:03.644 "auth": { 00:22:03.644 "state": "completed", 00:22:03.644 "digest": "sha512", 00:22:03.644 "dhgroup": "ffdhe4096" 00:22:03.644 } 00:22:03.644 } 00:22:03.644 ]' 00:22:03.644 01:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:03.644 01:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:03.644 01:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:03.644 01:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:03.644 01:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:03.644 01:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.645 01:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.645 01:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.903 01:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmNhYTM3OWM1YTRhNjI3MzI5MjFkNTcwMWVmMDVlMzgxMzY2MGI5OTQ0ZjU2YjhjmFXGyA==: --dhchap-ctrl-secret DHHC-1:03:ODg0NzdhMzA1MzVjYjk3MWM4ZGQxZmEzNDY5NTZiYTcwMzc0MjBmOTUxNTJiY2I1ODczYjg3NzdiYjBmYThkYzR24vo=: 00:22:03.903 01:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZmNhYTM3OWM1YTRhNjI3MzI5MjFkNTcwMWVmMDVlMzgxMzY2MGI5OTQ0ZjU2YjhjmFXGyA==: --dhchap-ctrl-secret DHHC-1:03:ODg0NzdhMzA1MzVjYjk3MWM4ZGQxZmEzNDY5NTZiYTcwMzc0MjBmOTUxNTJiY2I1ODczYjg3NzdiYjBmYThkYzR24vo=: 00:22:05.277 01:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.277 01:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:05.277 01:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.277 01:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.277 01:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
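[editor's note] Each cycle in this section repeats the same host-side sequence for one digest/DH-group/key combination: restrict the host to a single DH-HMAC-CHAP digest and group, allow the host NQN on the subsystem with the matching key, attach a controller through the host RPC socket, verify the qpair as shown above, then tear everything down. A condensed sketch of one such cycle, built only from RPCs visible in the trace; key0/ckey0 name keyring entries created earlier in auth.sh (not shown here), and the rpc.py path plus the host UUID NQN are placeholders to adjust:

# One authentication round: sha512 digest, ffdhe4096 DH group, key pair 0.
RPC=./scripts/rpc.py                                    # assumed path to SPDK's rpc.py
HOSTSOCK=/var/tmp/host.sock                             # host-side RPC socket used by hostrpc above
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:<host-uuid>"   # placeholder for the host UUID NQN
"$RPC" -s "$HOSTSOCK" bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
"$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
"$RPC" -s "$HOSTSOCK" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
# ... check the qpair auth state here, then tear down:
"$RPC" -s "$HOSTSOCK" bdev_nvme_detach_controller nvme0
"$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"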
00:22:05.277 01:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:05.277 01:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:05.277 01:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:05.277 01:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:22:05.277 01:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:05.277 01:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:05.277 01:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:05.277 01:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:05.277 01:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:05.277 01:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.277 01:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.277 01:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.277 01:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.277 01:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.277 01:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.277 01:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.843 00:22:05.843 01:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:05.843 01:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:05.843 01:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:05.843 01:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.843 01:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.100 01:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.100 01:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.100 01:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.100 01:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:06.100 { 00:22:06.100 "cntlid": 123, 00:22:06.100 "qid": 0, 00:22:06.100 "state": "enabled", 00:22:06.100 "thread": "nvmf_tgt_poll_group_000", 00:22:06.100 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:06.100 "listen_address": { 00:22:06.100 "trtype": "TCP", 00:22:06.100 "adrfam": "IPv4", 00:22:06.100 "traddr": "10.0.0.2", 00:22:06.100 "trsvcid": "4420" 00:22:06.100 }, 00:22:06.100 "peer_address": { 00:22:06.100 "trtype": "TCP", 00:22:06.100 "adrfam": "IPv4", 00:22:06.100 "traddr": "10.0.0.1", 00:22:06.100 "trsvcid": "56330" 00:22:06.100 }, 00:22:06.100 "auth": { 00:22:06.100 "state": "completed", 00:22:06.100 "digest": "sha512", 00:22:06.100 "dhgroup": "ffdhe4096" 00:22:06.100 } 00:22:06.100 } 00:22:06.100 ]' 00:22:06.100 01:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:06.100 01:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:06.100 01:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:06.100 01:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:06.100 01:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:06.100 01:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.100 01:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.100 01:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.358 01:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg3MmE0NWY3N2E5MDlhYzUxZWViNzllNGJiZjgzOTgCs8lt: --dhchap-ctrl-secret DHHC-1:02:NDI5MDU1MGY5YjY3MTA1MGI4ZTFhMTU4MWM1NDcyMWExMzE1YTM3ZDFjODI2NjhhpGoTpQ==: 00:22:06.358 01:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Mjg3MmE0NWY3N2E5MDlhYzUxZWViNzllNGJiZjgzOTgCs8lt: --dhchap-ctrl-secret DHHC-1:02:NDI5MDU1MGY5YjY3MTA1MGI4ZTFhMTU4MWM1NDcyMWExMzE1YTM3ZDFjODI2NjhhpGoTpQ==: 00:22:07.291 01:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.291 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.291 01:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:07.291 01:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.291 01:32:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.291 01:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.291 01:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:07.291 01:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:07.291 01:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:07.859 01:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:22:07.859 01:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:07.859 01:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:07.859 01:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:07.859 01:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:07.859 01:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.859 01:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:07.859 01:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.859 01:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.859 01:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.859 01:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:07.859 01:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:07.859 01:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:08.154 00:22:08.154 01:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:08.154 01:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:08.154 01:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.441 01:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.441 01:32:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.441 01:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.441 01:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.441 01:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.441 01:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:08.441 { 00:22:08.441 "cntlid": 125, 00:22:08.441 "qid": 0, 00:22:08.441 "state": "enabled", 00:22:08.441 "thread": "nvmf_tgt_poll_group_000", 00:22:08.441 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:08.441 "listen_address": { 00:22:08.441 "trtype": "TCP", 00:22:08.441 "adrfam": "IPv4", 00:22:08.441 "traddr": "10.0.0.2", 00:22:08.441 "trsvcid": "4420" 00:22:08.441 }, 00:22:08.441 "peer_address": { 00:22:08.441 "trtype": "TCP", 00:22:08.441 "adrfam": "IPv4", 00:22:08.441 "traddr": "10.0.0.1", 00:22:08.441 "trsvcid": "56352" 00:22:08.441 }, 00:22:08.441 "auth": { 00:22:08.441 "state": "completed", 00:22:08.441 "digest": "sha512", 00:22:08.441 "dhgroup": "ffdhe4096" 00:22:08.441 } 00:22:08.441 } 00:22:08.441 ]' 00:22:08.441 01:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:08.441 01:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:08.441 01:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:08.441 01:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:08.441 01:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:08.441 01:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:08.441 01:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.442 01:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.701 01:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGM1MjgzMzQwMWYzNGNiMzA4MDMwMmNkZTdlMjU4NDg1OGI5Yjg4NDhjNzNkOTI2iWekFw==: --dhchap-ctrl-secret DHHC-1:01:MmZiMDk3ZjEyNDFjY2RmOGUyNmQzZjZmYzFlYzVhYTeWQKIn: 00:22:08.701 01:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGM1MjgzMzQwMWYzNGNiMzA4MDMwMmNkZTdlMjU4NDg1OGI5Yjg4NDhjNzNkOTI2iWekFw==: --dhchap-ctrl-secret DHHC-1:01:MmZiMDk3ZjEyNDFjY2RmOGUyNmQzZjZmYzFlYzVhYTeWQKIn: 00:22:10.075 01:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:10.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:10.075 01:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:10.075 01:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.075 01:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.075 01:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.075 01:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:10.075 01:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:10.075 01:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:10.075 01:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:22:10.075 01:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:10.075 01:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:10.075 01:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:10.075 01:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:10.075 01:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:10.075 01:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:10.075 01:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.075 01:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.075 01:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.075 01:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:10.075 01:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:10.075 01:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:10.642 00:22:10.642 01:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:10.642 01:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.642 01:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:10.900 01:32:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.900 01:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:10.900 01:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.900 01:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.900 01:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.900 01:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:10.900 { 00:22:10.900 "cntlid": 127, 00:22:10.900 "qid": 0, 00:22:10.900 "state": "enabled", 00:22:10.900 "thread": "nvmf_tgt_poll_group_000", 00:22:10.900 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:10.900 "listen_address": { 00:22:10.900 "trtype": "TCP", 00:22:10.900 "adrfam": "IPv4", 00:22:10.900 "traddr": "10.0.0.2", 00:22:10.900 "trsvcid": "4420" 00:22:10.900 }, 00:22:10.900 "peer_address": { 00:22:10.900 "trtype": "TCP", 00:22:10.900 "adrfam": "IPv4", 00:22:10.900 "traddr": "10.0.0.1", 00:22:10.900 "trsvcid": "56374" 00:22:10.900 }, 00:22:10.900 "auth": { 00:22:10.900 "state": "completed", 00:22:10.900 "digest": "sha512", 00:22:10.900 "dhgroup": "ffdhe4096" 00:22:10.900 } 00:22:10.900 } 00:22:10.900 ]' 00:22:10.900 01:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:10.900 01:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:10.900 01:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:10.900 01:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:10.900 01:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:10.900 01:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:10.900 01:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.919 01:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.177 01:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTM2NWFhMjZiNzNjNzBlNzk3YTc0ZDM5ZmFkZmRlNGJkMWY5ZjA2MThlYWE5NzU1YmU2MzMyMGM5OTQ0YTRmNk24rvE=: 00:22:11.177 01:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MTM2NWFhMjZiNzNjNzBlNzk3YTc0ZDM5ZmFkZmRlNGJkMWY5ZjA2MThlYWE5NzU1YmU2MzMyMGM5OTQ0YTRmNk24rvE=: 00:22:12.111 01:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:12.111 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:12.111 01:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:12.111 01:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.111 01:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.111 01:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.111 01:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:12.111 01:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:12.111 01:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:12.111 01:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:12.676 01:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:22:12.676 01:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:12.676 01:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:12.676 01:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:12.676 01:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:12.677 01:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:12.677 01:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:12.677 01:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.677 01:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.677 01:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.677 01:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:12.677 01:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:12.677 01:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:13.242 00:22:13.242 01:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:13.242 01:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:13.242 
01:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.242 01:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.242 01:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:13.242 01:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.242 01:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.500 01:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.500 01:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:13.500 { 00:22:13.500 "cntlid": 129, 00:22:13.500 "qid": 0, 00:22:13.501 "state": "enabled", 00:22:13.501 "thread": "nvmf_tgt_poll_group_000", 00:22:13.501 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:13.501 "listen_address": { 00:22:13.501 "trtype": "TCP", 00:22:13.501 "adrfam": "IPv4", 00:22:13.501 "traddr": "10.0.0.2", 00:22:13.501 "trsvcid": "4420" 00:22:13.501 }, 00:22:13.501 "peer_address": { 00:22:13.501 "trtype": "TCP", 00:22:13.501 "adrfam": "IPv4", 00:22:13.501 "traddr": "10.0.0.1", 00:22:13.501 "trsvcid": "56408" 00:22:13.501 }, 00:22:13.501 "auth": { 00:22:13.501 "state": "completed", 00:22:13.501 "digest": "sha512", 00:22:13.501 "dhgroup": "ffdhe6144" 00:22:13.501 } 00:22:13.501 } 00:22:13.501 ]' 00:22:13.501 01:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:13.501 01:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:13.501 01:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:13.501 01:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:13.501 01:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:13.501 01:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:13.501 01:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.501 01:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.759 01:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmNhYTM3OWM1YTRhNjI3MzI5MjFkNTcwMWVmMDVlMzgxMzY2MGI5OTQ0ZjU2YjhjmFXGyA==: --dhchap-ctrl-secret DHHC-1:03:ODg0NzdhMzA1MzVjYjk3MWM4ZGQxZmEzNDY5NTZiYTcwMzc0MjBmOTUxNTJiY2I1ODczYjg3NzdiYjBmYThkYzR24vo=: 00:22:13.759 01:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZmNhYTM3OWM1YTRhNjI3MzI5MjFkNTcwMWVmMDVlMzgxMzY2MGI5OTQ0ZjU2YjhjmFXGyA==: --dhchap-ctrl-secret 
DHHC-1:03:ODg0NzdhMzA1MzVjYjk3MWM4ZGQxZmEzNDY5NTZiYTcwMzc0MjBmOTUxNTJiY2I1ODczYjg3NzdiYjBmYThkYzR24vo=: 00:22:14.693 01:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.693 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.693 01:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:14.693 01:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.693 01:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.693 01:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.693 01:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:14.693 01:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:14.693 01:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:15.259 01:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:22:15.259 01:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:15.259 01:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:15.259 01:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:15.259 01:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:15.259 01:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:15.259 01:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:15.259 01:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.259 01:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.259 01:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.259 01:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:15.259 01:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:15.259 01:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:15.825 00:22:15.825 01:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:15.825 01:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:15.825 01:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.084 01:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.084 01:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:16.084 01:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.084 01:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.084 01:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.084 01:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:16.084 { 00:22:16.084 "cntlid": 131, 00:22:16.084 "qid": 0, 00:22:16.084 "state": "enabled", 00:22:16.084 "thread": "nvmf_tgt_poll_group_000", 00:22:16.084 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:16.084 "listen_address": { 00:22:16.084 "trtype": "TCP", 00:22:16.084 "adrfam": "IPv4", 00:22:16.084 "traddr": "10.0.0.2", 00:22:16.084 "trsvcid": "4420" 00:22:16.084 }, 00:22:16.084 "peer_address": { 00:22:16.084 "trtype": "TCP", 00:22:16.084 "adrfam": "IPv4", 00:22:16.084 "traddr": "10.0.0.1", 00:22:16.084 "trsvcid": "50030" 00:22:16.084 }, 00:22:16.084 "auth": { 00:22:16.084 "state": "completed", 00:22:16.084 "digest": "sha512", 00:22:16.084 "dhgroup": "ffdhe6144" 00:22:16.084 } 00:22:16.084 } 00:22:16.084 ]' 00:22:16.084 01:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:16.084 01:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:16.084 01:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:16.084 01:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:16.084 01:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:16.084 01:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:16.084 01:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:16.084 01:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:16.342 01:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg3MmE0NWY3N2E5MDlhYzUxZWViNzllNGJiZjgzOTgCs8lt: --dhchap-ctrl-secret DHHC-1:02:NDI5MDU1MGY5YjY3MTA1MGI4ZTFhMTU4MWM1NDcyMWExMzE1YTM3ZDFjODI2NjhhpGoTpQ==: 00:22:16.342 01:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Mjg3MmE0NWY3N2E5MDlhYzUxZWViNzllNGJiZjgzOTgCs8lt: --dhchap-ctrl-secret DHHC-1:02:NDI5MDU1MGY5YjY3MTA1MGI4ZTFhMTU4MWM1NDcyMWExMzE1YTM3ZDFjODI2NjhhpGoTpQ==: 00:22:17.277 01:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:17.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:17.277 01:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:17.277 01:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.277 01:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.277 01:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.277 01:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:17.277 01:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:17.277 01:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:17.535 01:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:22:17.535 01:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:17.535 01:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:17.535 01:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:17.535 01:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:17.535 01:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:17.535 01:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.535 01:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.535 01:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.535 01:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.535 01:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.535 01:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.535 01:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:18.470 00:22:18.470 01:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:18.470 01:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.470 01:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:18.470 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.470 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:18.470 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:18.470 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.470 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:18.470 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:18.470 { 00:22:18.470 "cntlid": 133, 00:22:18.470 "qid": 0, 00:22:18.470 "state": "enabled", 00:22:18.470 "thread": "nvmf_tgt_poll_group_000", 00:22:18.470 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:18.470 "listen_address": { 00:22:18.470 "trtype": "TCP", 00:22:18.470 "adrfam": "IPv4", 00:22:18.470 "traddr": "10.0.0.2", 00:22:18.470 "trsvcid": "4420" 00:22:18.470 }, 00:22:18.470 "peer_address": { 00:22:18.470 "trtype": "TCP", 00:22:18.470 "adrfam": "IPv4", 00:22:18.470 "traddr": "10.0.0.1", 00:22:18.470 "trsvcid": "50058" 00:22:18.470 }, 00:22:18.470 "auth": { 00:22:18.470 "state": "completed", 00:22:18.470 "digest": "sha512", 00:22:18.470 "dhgroup": "ffdhe6144" 00:22:18.470 } 00:22:18.470 } 00:22:18.470 ]' 00:22:18.470 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:18.728 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:18.728 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:18.728 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:18.728 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:18.728 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:18.728 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:18.728 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:18.987 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGM1MjgzMzQwMWYzNGNiMzA4MDMwMmNkZTdlMjU4NDg1OGI5Yjg4NDhjNzNkOTI2iWekFw==: --dhchap-ctrl-secret 
DHHC-1:01:MmZiMDk3ZjEyNDFjY2RmOGUyNmQzZjZmYzFlYzVhYTeWQKIn: 00:22:18.987 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGM1MjgzMzQwMWYzNGNiMzA4MDMwMmNkZTdlMjU4NDg1OGI5Yjg4NDhjNzNkOTI2iWekFw==: --dhchap-ctrl-secret DHHC-1:01:MmZiMDk3ZjEyNDFjY2RmOGUyNmQzZjZmYzFlYzVhYTeWQKIn: 00:22:19.921 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:19.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:19.921 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:19.921 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.921 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.921 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.921 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:19.921 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:19.921 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:20.179 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:22:20.179 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:20.179 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:20.179 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:20.179 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:20.179 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:20.180 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:20.180 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.180 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.180 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.180 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:20.180 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:22:20.180 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:20.852 00:22:20.852 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:20.852 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:20.852 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.109 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.109 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:21.109 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.109 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.109 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.109 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:21.109 { 00:22:21.109 "cntlid": 135, 00:22:21.109 "qid": 0, 00:22:21.109 "state": "enabled", 00:22:21.109 "thread": "nvmf_tgt_poll_group_000", 00:22:21.109 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:21.109 "listen_address": { 00:22:21.109 "trtype": "TCP", 00:22:21.109 "adrfam": "IPv4", 00:22:21.109 "traddr": "10.0.0.2", 00:22:21.109 "trsvcid": "4420" 00:22:21.109 }, 00:22:21.109 "peer_address": { 00:22:21.109 "trtype": "TCP", 00:22:21.109 "adrfam": "IPv4", 00:22:21.110 "traddr": "10.0.0.1", 00:22:21.110 "trsvcid": "50094" 00:22:21.110 }, 00:22:21.110 "auth": { 00:22:21.110 "state": "completed", 00:22:21.110 "digest": "sha512", 00:22:21.110 "dhgroup": "ffdhe6144" 00:22:21.110 } 00:22:21.110 } 00:22:21.110 ]' 00:22:21.110 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:21.110 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:21.110 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:21.367 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:21.367 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:21.367 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:21.367 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:21.367 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:21.625 01:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MTM2NWFhMjZiNzNjNzBlNzk3YTc0ZDM5ZmFkZmRlNGJkMWY5ZjA2MThlYWE5NzU1YmU2MzMyMGM5OTQ0YTRmNk24rvE=: 00:22:21.625 01:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MTM2NWFhMjZiNzNjNzBlNzk3YTc0ZDM5ZmFkZmRlNGJkMWY5ZjA2MThlYWE5NzU1YmU2MzMyMGM5OTQ0YTRmNk24rvE=: 00:22:22.558 01:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:22.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:22.558 01:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:22.558 01:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.558 01:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.558 01:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.558 01:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:22.558 01:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:22.558 01:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:22.558 01:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:22.814 01:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:22:22.814 01:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:22.814 01:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:22.815 01:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:22.815 01:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:22.815 01:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:22.815 01:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:22.815 01:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.815 01:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.815 01:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.815 01:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:22.815 01:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:22.815 01:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:23.746 00:22:23.746 01:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:23.746 01:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:23.746 01:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:24.003 01:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.003 01:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:24.003 01:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.003 01:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.003 01:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.003 01:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:24.003 { 00:22:24.003 "cntlid": 137, 00:22:24.003 "qid": 0, 00:22:24.003 "state": "enabled", 00:22:24.003 "thread": "nvmf_tgt_poll_group_000", 00:22:24.003 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:24.003 "listen_address": { 00:22:24.003 "trtype": "TCP", 00:22:24.003 "adrfam": "IPv4", 00:22:24.003 "traddr": "10.0.0.2", 00:22:24.003 "trsvcid": "4420" 00:22:24.003 }, 00:22:24.003 "peer_address": { 00:22:24.003 "trtype": "TCP", 00:22:24.003 "adrfam": "IPv4", 00:22:24.003 "traddr": "10.0.0.1", 00:22:24.003 "trsvcid": "50116" 00:22:24.003 }, 00:22:24.003 "auth": { 00:22:24.003 "state": "completed", 00:22:24.003 "digest": "sha512", 00:22:24.003 "dhgroup": "ffdhe8192" 00:22:24.003 } 00:22:24.003 } 00:22:24.003 ]' 00:22:24.003 01:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:24.003 01:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:24.003 01:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:24.003 01:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:24.003 01:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:24.261 01:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:24.261 01:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:24.261 01:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:24.519 01:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmNhYTM3OWM1YTRhNjI3MzI5MjFkNTcwMWVmMDVlMzgxMzY2MGI5OTQ0ZjU2YjhjmFXGyA==: --dhchap-ctrl-secret DHHC-1:03:ODg0NzdhMzA1MzVjYjk3MWM4ZGQxZmEzNDY5NTZiYTcwMzc0MjBmOTUxNTJiY2I1ODczYjg3NzdiYjBmYThkYzR24vo=: 00:22:24.519 01:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZmNhYTM3OWM1YTRhNjI3MzI5MjFkNTcwMWVmMDVlMzgxMzY2MGI5OTQ0ZjU2YjhjmFXGyA==: --dhchap-ctrl-secret DHHC-1:03:ODg0NzdhMzA1MzVjYjk3MWM4ZGQxZmEzNDY5NTZiYTcwMzc0MjBmOTUxNTJiY2I1ODczYjg3NzdiYjBmYThkYzR24vo=: 00:22:25.452 01:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:25.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:25.452 01:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:25.452 01:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.452 01:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.452 01:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.452 01:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:25.452 01:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:25.452 01:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:25.710 01:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:22:25.710 01:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:25.710 01:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:25.710 01:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:25.710 01:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:25.710 01:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:25.710 01:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:25.710 01:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.710 01:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.710 01:33:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.710 01:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:25.710 01:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:25.710 01:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:26.642 00:22:26.642 01:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:26.642 01:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:26.642 01:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:26.900 01:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.900 01:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:26.900 01:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.900 01:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.900 01:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.900 01:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:26.900 { 00:22:26.900 "cntlid": 139, 00:22:26.900 "qid": 0, 00:22:26.900 "state": "enabled", 00:22:26.900 "thread": "nvmf_tgt_poll_group_000", 00:22:26.900 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:26.900 "listen_address": { 00:22:26.900 "trtype": "TCP", 00:22:26.900 "adrfam": "IPv4", 00:22:26.900 "traddr": "10.0.0.2", 00:22:26.900 "trsvcid": "4420" 00:22:26.900 }, 00:22:26.900 "peer_address": { 00:22:26.900 "trtype": "TCP", 00:22:26.900 "adrfam": "IPv4", 00:22:26.900 "traddr": "10.0.0.1", 00:22:26.900 "trsvcid": "42216" 00:22:26.900 }, 00:22:26.900 "auth": { 00:22:26.900 "state": "completed", 00:22:26.900 "digest": "sha512", 00:22:26.900 "dhgroup": "ffdhe8192" 00:22:26.900 } 00:22:26.900 } 00:22:26.900 ]' 00:22:26.900 01:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:26.900 01:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:26.900 01:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:26.900 01:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:26.900 01:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:27.157 01:33:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:27.157 01:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:27.158 01:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:27.415 01:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Mjg3MmE0NWY3N2E5MDlhYzUxZWViNzllNGJiZjgzOTgCs8lt: --dhchap-ctrl-secret DHHC-1:02:NDI5MDU1MGY5YjY3MTA1MGI4ZTFhMTU4MWM1NDcyMWExMzE1YTM3ZDFjODI2NjhhpGoTpQ==: 00:22:27.415 01:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:Mjg3MmE0NWY3N2E5MDlhYzUxZWViNzllNGJiZjgzOTgCs8lt: --dhchap-ctrl-secret DHHC-1:02:NDI5MDU1MGY5YjY3MTA1MGI4ZTFhMTU4MWM1NDcyMWExMzE1YTM3ZDFjODI2NjhhpGoTpQ==: 00:22:28.348 01:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:28.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:28.348 01:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:28.348 01:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.348 01:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.348 01:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.348 01:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:28.348 01:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:28.348 01:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:28.606 01:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:22:28.606 01:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:28.606 01:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:28.606 01:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:28.606 01:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:28.606 01:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:28.606 01:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:28.606 01:33:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.606 01:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.606 01:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.606 01:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:28.606 01:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:28.606 01:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:29.538 00:22:29.538 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:29.538 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:29.538 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:29.796 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.796 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:29.796 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.796 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.796 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.796 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:29.796 { 00:22:29.796 "cntlid": 141, 00:22:29.796 "qid": 0, 00:22:29.796 "state": "enabled", 00:22:29.796 "thread": "nvmf_tgt_poll_group_000", 00:22:29.796 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:29.796 "listen_address": { 00:22:29.796 "trtype": "TCP", 00:22:29.796 "adrfam": "IPv4", 00:22:29.796 "traddr": "10.0.0.2", 00:22:29.796 "trsvcid": "4420" 00:22:29.796 }, 00:22:29.796 "peer_address": { 00:22:29.796 "trtype": "TCP", 00:22:29.796 "adrfam": "IPv4", 00:22:29.796 "traddr": "10.0.0.1", 00:22:29.796 "trsvcid": "42238" 00:22:29.796 }, 00:22:29.796 "auth": { 00:22:29.796 "state": "completed", 00:22:29.796 "digest": "sha512", 00:22:29.796 "dhgroup": "ffdhe8192" 00:22:29.796 } 00:22:29.796 } 00:22:29.796 ]' 00:22:29.796 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:29.796 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:29.796 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:29.796 01:33:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:29.796 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:30.053 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:30.054 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:30.054 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:30.311 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MGM1MjgzMzQwMWYzNGNiMzA4MDMwMmNkZTdlMjU4NDg1OGI5Yjg4NDhjNzNkOTI2iWekFw==: --dhchap-ctrl-secret DHHC-1:01:MmZiMDk3ZjEyNDFjY2RmOGUyNmQzZjZmYzFlYzVhYTeWQKIn: 00:22:30.311 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGM1MjgzMzQwMWYzNGNiMzA4MDMwMmNkZTdlMjU4NDg1OGI5Yjg4NDhjNzNkOTI2iWekFw==: --dhchap-ctrl-secret DHHC-1:01:MmZiMDk3ZjEyNDFjY2RmOGUyNmQzZjZmYzFlYzVhYTeWQKIn: 00:22:31.244 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:31.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:31.244 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:31.244 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.244 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.244 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.244 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:31.244 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:31.244 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:31.501 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:22:31.501 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:31.501 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:31.501 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:31.501 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:31.501 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:31.501 01:33:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:31.501 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.501 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.501 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.501 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:31.501 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:31.502 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:32.434 00:22:32.434 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:32.434 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:32.434 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.691 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.691 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:32.691 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.691 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.691 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.691 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:32.691 { 00:22:32.691 "cntlid": 143, 00:22:32.691 "qid": 0, 00:22:32.691 "state": "enabled", 00:22:32.691 "thread": "nvmf_tgt_poll_group_000", 00:22:32.691 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:32.691 "listen_address": { 00:22:32.691 "trtype": "TCP", 00:22:32.691 "adrfam": "IPv4", 00:22:32.691 "traddr": "10.0.0.2", 00:22:32.691 "trsvcid": "4420" 00:22:32.691 }, 00:22:32.691 "peer_address": { 00:22:32.691 "trtype": "TCP", 00:22:32.691 "adrfam": "IPv4", 00:22:32.691 "traddr": "10.0.0.1", 00:22:32.691 "trsvcid": "42264" 00:22:32.691 }, 00:22:32.691 "auth": { 00:22:32.691 "state": "completed", 00:22:32.691 "digest": "sha512", 00:22:32.691 "dhgroup": "ffdhe8192" 00:22:32.691 } 00:22:32.691 } 00:22:32.691 ]' 00:22:32.691 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:32.691 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:32.691 
01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:32.691 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:32.691 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:32.691 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:32.691 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:32.691 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:33.255 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTM2NWFhMjZiNzNjNzBlNzk3YTc0ZDM5ZmFkZmRlNGJkMWY5ZjA2MThlYWE5NzU1YmU2MzMyMGM5OTQ0YTRmNk24rvE=: 00:22:33.255 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MTM2NWFhMjZiNzNjNzBlNzk3YTc0ZDM5ZmFkZmRlNGJkMWY5ZjA2MThlYWE5NzU1YmU2MzMyMGM5OTQ0YTRmNk24rvE=: 00:22:34.188 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:34.188 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:34.188 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:34.188 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.188 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.188 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.188 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:34.188 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:22:34.188 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:34.188 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:34.188 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:34.188 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:34.447 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:22:34.447 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:34.447 01:33:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:34.447 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:34.447 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:34.447 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:34.447 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:34.447 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.447 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.447 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.447 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:34.447 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:34.447 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:35.379 00:22:35.379 01:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:35.379 01:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:35.379 01:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:35.637 01:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.637 01:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:35.637 01:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.637 01:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.637 01:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.637 01:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:35.637 { 00:22:35.637 "cntlid": 145, 00:22:35.637 "qid": 0, 00:22:35.637 "state": "enabled", 00:22:35.637 "thread": "nvmf_tgt_poll_group_000", 00:22:35.637 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:35.637 "listen_address": { 00:22:35.637 "trtype": "TCP", 00:22:35.637 "adrfam": "IPv4", 00:22:35.637 "traddr": "10.0.0.2", 00:22:35.637 "trsvcid": "4420" 00:22:35.637 }, 00:22:35.637 "peer_address": { 00:22:35.637 
"trtype": "TCP", 00:22:35.637 "adrfam": "IPv4", 00:22:35.637 "traddr": "10.0.0.1", 00:22:35.637 "trsvcid": "58824" 00:22:35.637 }, 00:22:35.637 "auth": { 00:22:35.637 "state": "completed", 00:22:35.637 "digest": "sha512", 00:22:35.637 "dhgroup": "ffdhe8192" 00:22:35.637 } 00:22:35.637 } 00:22:35.637 ]' 00:22:35.637 01:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:35.637 01:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:35.637 01:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:35.637 01:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:35.637 01:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:35.637 01:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:35.637 01:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:35.637 01:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:35.894 01:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmNhYTM3OWM1YTRhNjI3MzI5MjFkNTcwMWVmMDVlMzgxMzY2MGI5OTQ0ZjU2YjhjmFXGyA==: --dhchap-ctrl-secret DHHC-1:03:ODg0NzdhMzA1MzVjYjk3MWM4ZGQxZmEzNDY5NTZiYTcwMzc0MjBmOTUxNTJiY2I1ODczYjg3NzdiYjBmYThkYzR24vo=: 00:22:35.895 01:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZmNhYTM3OWM1YTRhNjI3MzI5MjFkNTcwMWVmMDVlMzgxMzY2MGI5OTQ0ZjU2YjhjmFXGyA==: --dhchap-ctrl-secret DHHC-1:03:ODg0NzdhMzA1MzVjYjk3MWM4ZGQxZmEzNDY5NTZiYTcwMzc0MjBmOTUxNTJiY2I1ODczYjg3NzdiYjBmYThkYzR24vo=: 00:22:37.266 01:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:37.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:37.266 01:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:37.266 01:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.266 01:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.266 01:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.266 01:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:37.266 01:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.266 01:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.266 01:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.266 01:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:22:37.266 01:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:37.266 01:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:22:37.266 01:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:37.266 01:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:37.266 01:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:37.266 01:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:37.266 01:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:22:37.266 01:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:37.266 01:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:37.831 request: 00:22:37.831 { 00:22:37.831 "name": "nvme0", 00:22:37.831 "trtype": "tcp", 00:22:37.831 "traddr": "10.0.0.2", 00:22:37.831 "adrfam": "ipv4", 00:22:37.831 "trsvcid": "4420", 00:22:37.831 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:37.831 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:37.831 "prchk_reftag": false, 00:22:37.831 "prchk_guard": false, 00:22:37.831 "hdgst": false, 00:22:37.831 "ddgst": false, 00:22:37.831 "dhchap_key": "key2", 00:22:37.831 "allow_unrecognized_csi": false, 00:22:37.831 "method": "bdev_nvme_attach_controller", 00:22:37.831 "req_id": 1 00:22:37.831 } 00:22:37.831 Got JSON-RPC error response 00:22:37.831 response: 00:22:37.831 { 00:22:37.831 "code": -5, 00:22:37.831 "message": "Input/output error" 00:22:37.831 } 00:22:37.831 01:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:37.831 01:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:37.831 01:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:37.832 01:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:37.832 01:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:37.832 01:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.832 01:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.832 01:33:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.832 01:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:37.832 01:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.832 01:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.832 01:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.832 01:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:37.832 01:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:37.832 01:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:37.832 01:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:37.832 01:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:37.832 01:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:37.832 01:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:37.832 01:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:37.832 01:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:37.832 01:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:38.765 request: 00:22:38.765 { 00:22:38.765 "name": "nvme0", 00:22:38.765 "trtype": "tcp", 00:22:38.765 "traddr": "10.0.0.2", 00:22:38.765 "adrfam": "ipv4", 00:22:38.765 "trsvcid": "4420", 00:22:38.765 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:38.765 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:38.765 "prchk_reftag": false, 00:22:38.765 "prchk_guard": false, 00:22:38.765 "hdgst": false, 00:22:38.765 "ddgst": false, 00:22:38.765 "dhchap_key": "key1", 00:22:38.765 "dhchap_ctrlr_key": "ckey2", 00:22:38.765 "allow_unrecognized_csi": false, 00:22:38.765 "method": "bdev_nvme_attach_controller", 00:22:38.765 "req_id": 1 00:22:38.765 } 00:22:38.765 Got JSON-RPC error response 00:22:38.765 response: 00:22:38.765 { 00:22:38.765 "code": -5, 00:22:38.765 "message": "Input/output error" 00:22:38.765 } 00:22:38.765 01:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:38.765 01:33:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:38.765 01:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:38.765 01:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:38.765 01:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:38.765 01:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.765 01:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.765 01:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.765 01:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:38.765 01:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.765 01:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.765 01:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.765 01:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:38.765 01:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:38.765 01:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:38.765 01:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:38.765 01:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:38.765 01:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:38.765 01:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:38.765 01:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:38.765 01:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:38.765 01:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:39.841 request: 00:22:39.841 { 00:22:39.841 "name": "nvme0", 00:22:39.841 "trtype": "tcp", 00:22:39.841 "traddr": "10.0.0.2", 00:22:39.841 "adrfam": "ipv4", 00:22:39.841 "trsvcid": "4420", 00:22:39.841 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:39.841 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:39.841 "prchk_reftag": false, 00:22:39.841 "prchk_guard": false, 00:22:39.841 "hdgst": false, 00:22:39.841 "ddgst": false, 00:22:39.841 "dhchap_key": "key1", 00:22:39.841 "dhchap_ctrlr_key": "ckey1", 00:22:39.841 "allow_unrecognized_csi": false, 00:22:39.841 "method": "bdev_nvme_attach_controller", 00:22:39.841 "req_id": 1 00:22:39.841 } 00:22:39.841 Got JSON-RPC error response 00:22:39.841 response: 00:22:39.841 { 00:22:39.841 "code": -5, 00:22:39.841 "message": "Input/output error" 00:22:39.841 } 00:22:39.841 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:39.841 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:39.841 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:39.841 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:39.841 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:39.841 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.841 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.841 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.841 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1603441 00:22:39.841 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1603441 ']' 00:22:39.841 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1603441 00:22:39.841 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:22:39.841 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:39.841 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1603441 00:22:39.841 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:39.841 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:39.841 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1603441' 00:22:39.841 killing process with pid 1603441 00:22:39.841 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1603441 00:22:39.841 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1603441 00:22:39.841 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:39.841 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:39.841 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:39.841 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:39.841 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=1626783 00:22:39.841 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:39.841 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 1626783 00:22:39.841 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1626783 ']' 00:22:39.842 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:39.842 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:39.842 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:39.842 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:39.842 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.100 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:40.100 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:22:40.100 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:40.100 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:40.100 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.100 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:40.100 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:40.100 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 1626783 00:22:40.100 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1626783 ']' 00:22:40.100 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.100 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:40.100 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:40.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
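[Editor's note] The records above show the test restarting the NVMe-oF target for the DH-HMAC-CHAP phase: the previous nvmf_tgt is killed, a new instance (nvmfpid=1626783) is launched inside the cvl_0_0_ns_spdk network namespace with --wait-for-rpc and the nvmf_auth log flag, and the script blocks until the RPC socket /var/tmp/spdk.sock answers. A minimal sketch of that sequence, with the binary path, namespace name and flags taken from the log and the polling loop assumed as a stand-in for waitforlisten, is:

    # Sketch only - assumes an SPDK build tree laid out as in this CI job.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Start the target inside the test namespace, paused until RPCs arrive,
    # with nvmf_auth debug logging enabled (mirrors the command in the log).
    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" \
        -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!

    # Poll until the UNIX-domain RPC socket is listening (hypothetical
    # replacement for the waitforlisten helper seen in the trace).
    until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done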
00:22:40.100 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:40.100 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.666 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:40.666 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:22:40.666 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:22:40.666 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.666 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.666 null0 00:22:40.666 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.666 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:40.666 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.qh6 00:22:40.666 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.666 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.666 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.666 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.bwP ]] 00:22:40.666 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.bwP 00:22:40.666 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.666 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.666 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.666 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:40.666 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.yL6 00:22:40.666 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.666 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.666 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.666 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.IAk ]] 00:22:40.666 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IAk 00:22:40.666 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.666 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.666 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.666 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:40.666 01:33:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Uyc 00:22:40.666 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.666 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.666 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.666 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.qJK ]] 00:22:40.666 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.qJK 00:22:40.666 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.666 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.666 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.666 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:40.666 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.smF 00:22:40.666 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.666 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.666 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.667 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:22:40.667 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:22:40.667 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:40.667 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:40.667 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:40.667 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:40.667 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:40.667 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:40.667 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.667 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.667 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.667 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:40.667 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
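[Editor's note] The sequence above is the core DH-HMAC-CHAP setup the test exercises: each generated secret file is registered as a keyring key (key0..key3, with ckey* as controller-side counterparts), the host NQN is authorized on the subsystem with a specific --dhchap-key, and the host-side controller is attached with the matching key. A hedged sketch of one iteration of that flow, using the NQNs, addresses and key file names visible in the log and assuming the target RPCs go to the default /var/tmp/spdk.sock socket, is:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

    # Target side: register the secret and authorize the host to use it
    # for DH-HMAC-CHAP authentication.
    $RPC keyring_file_add_key key3 /tmp/spdk.key-sha512.smF
    $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key3

    # Host side (host.sock): attach a controller that authenticates with key3.
    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key3

When the host's key or allowed digests/DH groups do not match what the subsystem expects, the attach fails with the -5 (Input/output error) JSON-RPC responses recorded below.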
00:22:40.667 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:42.567 nvme0n1 00:22:42.567 01:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:42.567 01:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:42.567 01:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:42.567 01:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.567 01:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:42.567 01:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.567 01:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.567 01:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.567 01:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:42.567 { 00:22:42.567 "cntlid": 1, 00:22:42.567 "qid": 0, 00:22:42.567 "state": "enabled", 00:22:42.567 "thread": "nvmf_tgt_poll_group_000", 00:22:42.567 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:42.567 "listen_address": { 00:22:42.567 "trtype": "TCP", 00:22:42.567 "adrfam": "IPv4", 00:22:42.567 "traddr": "10.0.0.2", 00:22:42.567 "trsvcid": "4420" 00:22:42.567 }, 00:22:42.567 "peer_address": { 00:22:42.567 "trtype": "TCP", 00:22:42.567 "adrfam": "IPv4", 00:22:42.567 "traddr": "10.0.0.1", 00:22:42.567 "trsvcid": "58884" 00:22:42.567 }, 00:22:42.567 "auth": { 00:22:42.567 "state": "completed", 00:22:42.567 "digest": "sha512", 00:22:42.567 "dhgroup": "ffdhe8192" 00:22:42.567 } 00:22:42.567 } 00:22:42.567 ]' 00:22:42.567 01:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:42.567 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:42.567 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:42.567 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:42.567 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:42.567 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:42.567 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:42.567 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:42.824 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MTM2NWFhMjZiNzNjNzBlNzk3YTc0ZDM5ZmFkZmRlNGJkMWY5ZjA2MThlYWE5NzU1YmU2MzMyMGM5OTQ0YTRmNk24rvE=: 00:22:42.824 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MTM2NWFhMjZiNzNjNzBlNzk3YTc0ZDM5ZmFkZmRlNGJkMWY5ZjA2MThlYWE5NzU1YmU2MzMyMGM5OTQ0YTRmNk24rvE=: 00:22:43.758 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:43.758 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:43.758 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:43.758 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.758 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.758 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.758 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:43.758 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.758 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.017 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.017 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:44.017 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:44.275 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:44.275 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:44.275 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:44.275 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:44.275 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:44.275 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:44.275 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:44.275 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:44.275 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:44.275 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:44.533 request: 00:22:44.533 { 00:22:44.533 "name": "nvme0", 00:22:44.533 "trtype": "tcp", 00:22:44.533 "traddr": "10.0.0.2", 00:22:44.533 "adrfam": "ipv4", 00:22:44.533 "trsvcid": "4420", 00:22:44.533 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:44.533 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:44.533 "prchk_reftag": false, 00:22:44.533 "prchk_guard": false, 00:22:44.533 "hdgst": false, 00:22:44.533 "ddgst": false, 00:22:44.533 "dhchap_key": "key3", 00:22:44.533 "allow_unrecognized_csi": false, 00:22:44.533 "method": "bdev_nvme_attach_controller", 00:22:44.533 "req_id": 1 00:22:44.533 } 00:22:44.533 Got JSON-RPC error response 00:22:44.533 response: 00:22:44.533 { 00:22:44.533 "code": -5, 00:22:44.533 "message": "Input/output error" 00:22:44.533 } 00:22:44.533 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:44.533 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:44.533 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:44.533 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:44.533 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:22:44.533 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:22:44.533 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:44.533 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:44.791 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:44.791 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:44.791 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:44.791 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:44.791 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:44.791 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:44.791 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:44.792 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:44.792 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:44.792 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:45.050 request: 00:22:45.050 { 00:22:45.050 "name": "nvme0", 00:22:45.050 "trtype": "tcp", 00:22:45.050 "traddr": "10.0.0.2", 00:22:45.050 "adrfam": "ipv4", 00:22:45.050 "trsvcid": "4420", 00:22:45.050 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:45.050 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:45.050 "prchk_reftag": false, 00:22:45.050 "prchk_guard": false, 00:22:45.050 "hdgst": false, 00:22:45.050 "ddgst": false, 00:22:45.050 "dhchap_key": "key3", 00:22:45.050 "allow_unrecognized_csi": false, 00:22:45.050 "method": "bdev_nvme_attach_controller", 00:22:45.050 "req_id": 1 00:22:45.050 } 00:22:45.050 Got JSON-RPC error response 00:22:45.050 response: 00:22:45.050 { 00:22:45.050 "code": -5, 00:22:45.050 "message": "Input/output error" 00:22:45.050 } 00:22:45.050 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:45.050 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:45.050 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:45.050 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:45.050 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:45.050 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:22:45.050 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:45.051 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:45.051 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:45.051 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:45.309 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:45.309 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.309 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.309 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.309 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:45.309 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.309 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.309 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.309 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:45.309 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:45.309 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:45.309 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:45.309 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:45.309 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:45.309 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:45.309 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:45.309 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:45.309 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:45.875 request: 00:22:45.875 { 00:22:45.875 "name": "nvme0", 00:22:45.875 "trtype": "tcp", 00:22:45.875 "traddr": "10.0.0.2", 00:22:45.875 "adrfam": "ipv4", 00:22:45.875 "trsvcid": "4420", 00:22:45.875 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:45.875 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:45.875 "prchk_reftag": false, 00:22:45.875 "prchk_guard": false, 00:22:45.875 "hdgst": false, 00:22:45.875 "ddgst": false, 00:22:45.875 "dhchap_key": "key0", 00:22:45.875 "dhchap_ctrlr_key": "key1", 00:22:45.875 "allow_unrecognized_csi": false, 00:22:45.875 "method": "bdev_nvme_attach_controller", 00:22:45.875 "req_id": 1 00:22:45.875 } 00:22:45.875 Got JSON-RPC error response 00:22:45.875 response: 00:22:45.875 { 00:22:45.875 "code": -5, 00:22:45.875 "message": "Input/output error" 00:22:45.875 } 00:22:45.875 01:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:45.875 01:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:45.875 01:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:45.875 01:33:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:45.875 01:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:22:45.875 01:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:45.875 01:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:46.134 nvme0n1 00:22:46.134 01:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:22:46.134 01:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:22:46.134 01:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:46.698 01:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.698 01:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:46.698 01:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:46.698 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:46.698 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.698 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.698 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.698 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:46.698 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:46.698 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:48.596 nvme0n1 00:22:48.596 01:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:22:48.596 01:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:22:48.596 01:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:48.596 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.596 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:48.596 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.596 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.596 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.596 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:22:48.596 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:22:48.596 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:48.854 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.854 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MGM1MjgzMzQwMWYzNGNiMzA4MDMwMmNkZTdlMjU4NDg1OGI5Yjg4NDhjNzNkOTI2iWekFw==: --dhchap-ctrl-secret DHHC-1:03:MTM2NWFhMjZiNzNjNzBlNzk3YTc0ZDM5ZmFkZmRlNGJkMWY5ZjA2MThlYWE5NzU1YmU2MzMyMGM5OTQ0YTRmNk24rvE=: 00:22:48.854 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MGM1MjgzMzQwMWYzNGNiMzA4MDMwMmNkZTdlMjU4NDg1OGI5Yjg4NDhjNzNkOTI2iWekFw==: --dhchap-ctrl-secret DHHC-1:03:MTM2NWFhMjZiNzNjNzBlNzk3YTc0ZDM5ZmFkZmRlNGJkMWY5ZjA2MThlYWE5NzU1YmU2MzMyMGM5OTQ0YTRmNk24rvE=: 00:22:49.788 01:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:22:49.788 01:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:22:49.788 01:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:22:49.788 01:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:22:49.788 01:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:22:49.788 01:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:22:49.788 01:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:22:49.788 01:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:49.788 01:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:50.046 01:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:22:50.047 01:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:22:50.047 01:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:22:50.047 01:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:22:50.047 01:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:50.047 01:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:22:50.047 01:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:50.047 01:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:50.047 01:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:50.047 01:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:50.980 request: 00:22:50.980 { 00:22:50.980 "name": "nvme0", 00:22:50.980 "trtype": "tcp", 00:22:50.980 "traddr": "10.0.0.2", 00:22:50.980 "adrfam": "ipv4", 00:22:50.980 "trsvcid": "4420", 00:22:50.980 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:50.980 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:50.980 "prchk_reftag": false, 00:22:50.980 "prchk_guard": false, 00:22:50.980 "hdgst": false, 00:22:50.980 "ddgst": false, 00:22:50.980 "dhchap_key": "key1", 00:22:50.980 "allow_unrecognized_csi": false, 00:22:50.980 "method": "bdev_nvme_attach_controller", 00:22:50.980 "req_id": 1 00:22:50.980 } 00:22:50.980 Got JSON-RPC error response 00:22:50.980 response: 00:22:50.980 { 00:22:50.980 "code": -5, 00:22:50.980 "message": "Input/output error" 00:22:50.980 } 00:22:50.980 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:22:50.980 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:50.980 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:50.980 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:50.980 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:50.980 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:50.980 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:52.354 nvme0n1 00:22:52.354 01:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:22:52.354 01:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:22:52.355 01:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:52.612 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:52.612 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:52.612 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:53.177 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:53.177 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.177 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.177 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.177 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:22:53.177 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:53.177 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:53.435 nvme0n1 00:22:53.435 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:22:53.435 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:22:53.435 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:53.693 01:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.693 01:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:53.693 01:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:53.951 01:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:53.951 01:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.951 01:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.951 01:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.951 01:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:Mjg3MmE0NWY3N2E5MDlhYzUxZWViNzllNGJiZjgzOTgCs8lt: '' 2s 00:22:53.951 01:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:53.951 01:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:53.951 01:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:Mjg3MmE0NWY3N2E5MDlhYzUxZWViNzllNGJiZjgzOTgCs8lt: 00:22:53.951 01:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:22:53.951 01:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:53.951 01:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:53.951 01:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:Mjg3MmE0NWY3N2E5MDlhYzUxZWViNzllNGJiZjgzOTgCs8lt: ]] 00:22:53.951 01:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:Mjg3MmE0NWY3N2E5MDlhYzUxZWViNzllNGJiZjgzOTgCs8lt: 00:22:53.951 01:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:22:53.951 01:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:53.951 01:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:55.852 01:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:22:55.852 01:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:22:55.852 01:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:22:55.852 01:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:22:55.852 01:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:22:55.852 01:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:22:55.852 01:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:22:55.852 01:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key2 00:22:55.852 01:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.852 01:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.852 01:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.852 01:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:MGM1MjgzMzQwMWYzNGNiMzA4MDMwMmNkZTdlMjU4NDg1OGI5Yjg4NDhjNzNkOTI2iWekFw==: 2s 00:22:55.852 01:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:55.852 01:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:55.852 01:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:22:55.852 01:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MGM1MjgzMzQwMWYzNGNiMzA4MDMwMmNkZTdlMjU4NDg1OGI5Yjg4NDhjNzNkOTI2iWekFw==: 00:22:55.852 01:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:55.852 01:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:55.852 01:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:22:55.852 01:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MGM1MjgzMzQwMWYzNGNiMzA4MDMwMmNkZTdlMjU4NDg1OGI5Yjg4NDhjNzNkOTI2iWekFw==: ]] 00:22:55.852 01:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MGM1MjgzMzQwMWYzNGNiMzA4MDMwMmNkZTdlMjU4NDg1OGI5Yjg4NDhjNzNkOTI2iWekFw==: 00:22:55.852 01:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:55.852 01:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:58.383 01:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:22:58.383 01:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:22:58.384 01:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:22:58.384 01:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:22:58.384 01:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:22:58.384 01:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:22:58.384 01:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:22:58.384 01:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:58.384 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:58.384 01:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:58.384 01:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.384 01:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.384 01:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.384 01:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:58.384 01:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:58.384 01:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:59.760 nvme0n1 00:22:59.761 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:59.761 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.761 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.761 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.761 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:59.761 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:00.328 01:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:23:00.328 01:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:23:00.328 01:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:00.587 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.587 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:00.587 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.587 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.845 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.845 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:23:00.845 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:23:01.103 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:23:01.103 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:23:01.103 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:23:01.362 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.362 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:01.362 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.362 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.362 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.362 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:01.362 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:01.362 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:01.362 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:23:01.362 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:01.362 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:23:01.362 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:01.362 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:01.362 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:02.296 request: 00:23:02.296 { 00:23:02.296 "name": "nvme0", 00:23:02.296 "dhchap_key": "key1", 00:23:02.296 "dhchap_ctrlr_key": "key3", 00:23:02.296 "method": "bdev_nvme_set_keys", 00:23:02.296 "req_id": 1 00:23:02.296 } 00:23:02.296 Got JSON-RPC error response 00:23:02.296 response: 00:23:02.296 { 00:23:02.296 "code": -13, 00:23:02.296 "message": "Permission denied" 00:23:02.296 } 00:23:02.296 01:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:02.296 01:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:02.296 01:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:02.296 01:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:02.296 01:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:23:02.296 01:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:02.296 01:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:23:02.553 01:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:23:02.553 01:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:23:03.485 01:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:23:03.485 01:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:23:03.485 01:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:03.743 01:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:23:03.743 01:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:03.743 01:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.743 01:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.743 01:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.743 01:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:03.743 01:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:03.743 01:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:05.653 nvme0n1 00:23:05.653 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:05.653 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.653 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.653 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.653 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:05.653 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:23:05.653 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:05.653 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 
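[Editor's note] The records above exercise live re-keying: the target's allowed key pair for the host is rotated with nvmf_subsystem_set_keys, the host controller is attached with short --ctrlr-loss-timeout-sec/--reconnect-delay-sec values so stale sessions drop quickly, and bdev_nvme_set_keys calls whose pair does not match the target's current keys are expected to fail with -13 (Permission denied). A sketch of the matching target/host rekey pair, using only RPCs and flags that appear in the trace, might be:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

    # Target side: rotate the keys this host must now authenticate with.
    $RPC nvmf_subsystem_set_keys "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key2 --dhchap-ctrlr-key key3

    # Host side: update the existing controller to the same pair; a mismatched
    # pair (e.g. key2/key0, as attempted in the log) is rejected with -13.
    $RPC -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key key3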
00:23:05.653 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:05.653 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:23:05.653 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:05.653 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:05.653 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:06.324 request: 00:23:06.324 { 00:23:06.324 "name": "nvme0", 00:23:06.324 "dhchap_key": "key2", 00:23:06.324 "dhchap_ctrlr_key": "key0", 00:23:06.324 "method": "bdev_nvme_set_keys", 00:23:06.324 "req_id": 1 00:23:06.324 } 00:23:06.324 Got JSON-RPC error response 00:23:06.324 response: 00:23:06.324 { 00:23:06.324 "code": -13, 00:23:06.324 "message": "Permission denied" 00:23:06.324 } 00:23:06.324 01:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:23:06.324 01:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:06.324 01:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:06.324 01:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:06.324 01:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:23:06.324 01:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:23:06.324 01:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:06.582 01:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:23:06.582 01:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:23:07.516 01:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:23:07.516 01:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:23:07.516 01:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:07.775 01:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:23:07.775 01:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:23:08.710 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:23:08.710 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:23:08.710 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:08.968 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:23:08.968 01:33:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:23:08.968 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:23:08.968 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1603581 00:23:08.968 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1603581 ']' 00:23:08.969 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1603581 00:23:08.969 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:23:09.228 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:09.228 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1603581 00:23:09.228 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:09.228 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:09.228 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1603581' 00:23:09.228 killing process with pid 1603581 00:23:09.228 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1603581 00:23:09.228 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1603581 00:23:09.486 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:23:09.486 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:09.486 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:23:09.486 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:09.486 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:23:09.486 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:09.486 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:09.486 rmmod nvme_tcp 00:23:09.486 rmmod nvme_fabrics 00:23:09.486 rmmod nvme_keyring 00:23:09.486 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:09.486 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:23:09.486 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:23:09.486 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@515 -- # '[' -n 1626783 ']' 00:23:09.486 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # killprocess 1626783 00:23:09.486 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1626783 ']' 00:23:09.486 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1626783 00:23:09.486 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:23:09.486 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:09.487 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1626783 00:23:09.746 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:09.746 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:09.746 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1626783' 00:23:09.746 killing process with pid 1626783 00:23:09.746 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1626783 00:23:09.746 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1626783 00:23:09.746 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:09.746 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:09.746 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:09.746 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:23:09.746 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-save 00:23:09.746 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:09.746 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-restore 00:23:09.746 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:09.746 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:09.746 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.746 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:09.746 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:12.283 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:12.283 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.qh6 /tmp/spdk.key-sha256.yL6 /tmp/spdk.key-sha384.Uyc /tmp/spdk.key-sha512.smF /tmp/spdk.key-sha512.bwP /tmp/spdk.key-sha384.IAk /tmp/spdk.key-sha256.qJK '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:23:12.283 00:23:12.283 real 3m40.141s 00:23:12.283 user 8m34.477s 00:23:12.283 sys 0m27.194s 00:23:12.283 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:12.283 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.283 ************************************ 00:23:12.283 END TEST nvmf_auth_target 00:23:12.283 ************************************ 00:23:12.283 01:33:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:23:12.283 01:33:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:12.283 01:33:57 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:23:12.283 01:33:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:12.283 01:33:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:12.283 ************************************ 00:23:12.283 START TEST nvmf_bdevio_no_huge 00:23:12.283 ************************************ 00:23:12.283 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:12.283 * Looking for test storage... 00:23:12.283 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:12.283 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:12.283 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:23:12.283 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:12.283 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:12.283 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:12.283 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:12.283 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:12.283 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:23:12.283 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:23:12.283 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:23:12.283 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:23:12.283 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:23:12.283 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:23:12.283 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:23:12.283 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:12.283 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:23:12.283 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:23:12.283 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:12.283 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:12.283 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:23:12.283 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:23:12.283 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:12.283 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:23:12.283 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:23:12.283 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:23:12.283 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:23:12.283 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:12.283 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:23:12.283 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:23:12.283 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:12.283 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:12.283 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:23:12.283 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:12.283 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:12.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.283 --rc genhtml_branch_coverage=1 00:23:12.283 --rc genhtml_function_coverage=1 00:23:12.283 --rc genhtml_legend=1 00:23:12.283 --rc geninfo_all_blocks=1 00:23:12.283 --rc geninfo_unexecuted_blocks=1 00:23:12.283 00:23:12.283 ' 00:23:12.283 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:12.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.283 --rc genhtml_branch_coverage=1 00:23:12.283 --rc genhtml_function_coverage=1 00:23:12.284 --rc genhtml_legend=1 00:23:12.284 --rc geninfo_all_blocks=1 00:23:12.284 --rc geninfo_unexecuted_blocks=1 00:23:12.284 00:23:12.284 ' 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:12.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.284 --rc genhtml_branch_coverage=1 00:23:12.284 --rc genhtml_function_coverage=1 00:23:12.284 --rc genhtml_legend=1 00:23:12.284 --rc geninfo_all_blocks=1 00:23:12.284 --rc geninfo_unexecuted_blocks=1 00:23:12.284 00:23:12.284 ' 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:12.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.284 --rc genhtml_branch_coverage=1 00:23:12.284 --rc genhtml_function_coverage=1 00:23:12.284 --rc genhtml_legend=1 00:23:12.284 --rc geninfo_all_blocks=1 00:23:12.284 --rc geninfo_unexecuted_blocks=1 00:23:12.284 00:23:12.284 ' 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:23:12.284 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:23:12.284 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:14.208 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:14.208 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:23:14.208 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:14.208 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:14.208 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:14.208 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:14.208 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:14.208 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:23:14.208 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:14.208 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:23:14.208 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:23:14.208 
01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:23:14.208 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:23:14.208 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:23:14.208 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:23:14.208 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:14.208 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:14.208 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:14.208 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:14.208 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:14.208 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:14.208 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:14.208 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:14.208 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:14.208 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:14.208 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:14.208 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:14.208 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:14.208 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:14.208 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:14.208 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:14.208 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:14.208 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:14.208 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:14.208 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:14.208 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:14.208 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:14.208 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:14.209 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:14.209 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:14.209 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # is_hw=yes 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:14.209 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:14.209 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:23:14.209 00:23:14.209 --- 10.0.0.2 ping statistics --- 00:23:14.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.209 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:14.209 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:14.209 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:23:14.209 00:23:14.209 --- 10.0.0.1 ping statistics --- 00:23:14.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.209 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # return 0 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # nvmfpid=1632216 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # waitforlisten 1632216 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 1632216 ']' 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@836 -- # local max_retries=100 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:14.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:14.209 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:14.468 [2024-10-13 01:33:59.809216] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:23:14.468 [2024-10-13 01:33:59.809311] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:23:14.468 [2024-10-13 01:33:59.878039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:14.468 [2024-10-13 01:33:59.922562] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:14.468 [2024-10-13 01:33:59.922626] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:14.468 [2024-10-13 01:33:59.922656] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:14.468 [2024-10-13 01:33:59.922668] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:14.468 [2024-10-13 01:33:59.922678] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:14.468 [2024-10-13 01:33:59.923671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:14.468 [2024-10-13 01:33:59.923749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:23:14.468 [2024-10-13 01:33:59.923751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:14.468 [2024-10-13 01:33:59.923721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:23:14.468 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:14.468 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:23:14.468 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:14.468 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:14.468 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:14.726 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:14.726 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:14.726 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.726 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:14.726 [2024-10-13 01:34:00.067340] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:14.726 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.726 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:14.726 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.726 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:14.726 Malloc0 00:23:14.726 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.726 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:14.726 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.726 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:14.726 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.726 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:14.726 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.726 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:14.726 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.726 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:23:14.726 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.726 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:14.726 [2024-10-13 01:34:00.105712] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:14.726 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.726 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:23:14.726 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:23:14.726 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # config=() 00:23:14.726 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # local subsystem config 00:23:14.726 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:14.726 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:14.726 { 00:23:14.726 "params": { 00:23:14.726 "name": "Nvme$subsystem", 00:23:14.726 "trtype": "$TEST_TRANSPORT", 00:23:14.726 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:14.726 "adrfam": "ipv4", 00:23:14.726 "trsvcid": "$NVMF_PORT", 00:23:14.726 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:14.726 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:14.726 "hdgst": ${hdgst:-false}, 00:23:14.726 "ddgst": ${ddgst:-false} 00:23:14.726 }, 00:23:14.726 "method": "bdev_nvme_attach_controller" 00:23:14.726 } 00:23:14.726 EOF 00:23:14.726 )") 00:23:14.726 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # cat 00:23:14.726 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # jq . 00:23:14.726 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@583 -- # IFS=, 00:23:14.726 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:23:14.726 "params": { 00:23:14.726 "name": "Nvme1", 00:23:14.726 "trtype": "tcp", 00:23:14.726 "traddr": "10.0.0.2", 00:23:14.726 "adrfam": "ipv4", 00:23:14.726 "trsvcid": "4420", 00:23:14.726 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.726 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:14.726 "hdgst": false, 00:23:14.726 "ddgst": false 00:23:14.726 }, 00:23:14.726 "method": "bdev_nvme_attach_controller" 00:23:14.726 }' 00:23:14.726 [2024-10-13 01:34:00.150979] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
00:23:14.726 [2024-10-13 01:34:00.151055] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1632326 ] 00:23:14.726 [2024-10-13 01:34:00.210386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:14.726 [2024-10-13 01:34:00.259414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.726 [2024-10-13 01:34:00.259463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:14.727 [2024-10-13 01:34:00.259467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:14.984 I/O targets: 00:23:14.984 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:23:14.984 00:23:14.984 00:23:14.984 CUnit - A unit testing framework for C - Version 2.1-3 00:23:14.984 http://cunit.sourceforge.net/ 00:23:14.984 00:23:14.984 00:23:14.984 Suite: bdevio tests on: Nvme1n1 00:23:14.984 Test: blockdev write read block ...passed 00:23:15.242 Test: blockdev write zeroes read block ...passed 00:23:15.242 Test: blockdev write zeroes read no split ...passed 00:23:15.242 Test: blockdev write zeroes read split ...passed 00:23:15.242 Test: blockdev write zeroes read split partial ...passed 00:23:15.242 Test: blockdev reset ...[2024-10-13 01:34:00.643875] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:15.242 [2024-10-13 01:34:00.643990] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbfa7a0 (9): Bad file descriptor 00:23:15.242 [2024-10-13 01:34:00.699801] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:15.242 passed 00:23:15.242 Test: blockdev write read 8 blocks ...passed 00:23:15.242 Test: blockdev write read size > 128k ...passed 00:23:15.242 Test: blockdev write read invalid size ...passed 00:23:15.242 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:15.242 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:15.242 Test: blockdev write read max offset ...passed 00:23:15.500 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:15.500 Test: blockdev writev readv 8 blocks ...passed 00:23:15.500 Test: blockdev writev readv 30 x 1block ...passed 00:23:15.500 Test: blockdev writev readv block ...passed 00:23:15.500 Test: blockdev writev readv size > 128k ...passed 00:23:15.500 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:15.500 Test: blockdev comparev and writev ...[2024-10-13 01:34:01.032805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:15.500 [2024-10-13 01:34:01.032842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:15.500 [2024-10-13 01:34:01.032867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:15.500 [2024-10-13 01:34:01.032884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:15.500 [2024-10-13 01:34:01.033203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:15.500 [2024-10-13 01:34:01.033228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:15.500 [2024-10-13 01:34:01.033250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:15.500 [2024-10-13 01:34:01.033266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:15.500 [2024-10-13 01:34:01.033586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:15.500 [2024-10-13 01:34:01.033611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:15.500 [2024-10-13 01:34:01.033643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:15.500 [2024-10-13 01:34:01.033660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:15.500 [2024-10-13 01:34:01.033969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:15.500 [2024-10-13 01:34:01.033993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:15.500 [2024-10-13 01:34:01.034014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:15.500 [2024-10-13 01:34:01.034029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:15.500 passed 00:23:15.758 Test: blockdev nvme passthru rw ...passed 00:23:15.758 Test: blockdev nvme passthru vendor specific ...[2024-10-13 01:34:01.115714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:15.758 [2024-10-13 01:34:01.115743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:15.758 [2024-10-13 01:34:01.115889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:15.758 [2024-10-13 01:34:01.115913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:15.758 [2024-10-13 01:34:01.116061] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:15.758 [2024-10-13 01:34:01.116083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:15.758 [2024-10-13 01:34:01.116228] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:15.758 [2024-10-13 01:34:01.116251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:15.758 passed 00:23:15.758 Test: blockdev nvme admin passthru ...passed 00:23:15.758 Test: blockdev copy ...passed 00:23:15.758 00:23:15.758 Run Summary: Type Total Ran Passed Failed Inactive 00:23:15.758 suites 1 1 n/a 0 0 00:23:15.758 tests 23 23 23 0 0 00:23:15.758 asserts 152 152 152 0 n/a 00:23:15.758 00:23:15.758 Elapsed time = 1.385 seconds 00:23:16.016 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:16.016 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.016 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:16.016 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.016 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:23:16.016 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:23:16.016 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:16.016 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:23:16.016 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:16.016 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:23:16.016 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:16.016 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:16.016 rmmod nvme_tcp 00:23:16.016 rmmod nvme_fabrics 00:23:16.016 rmmod nvme_keyring 00:23:16.016 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:16.016 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:23:16.016 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:23:16.016 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@515 -- # '[' -n 1632216 ']' 00:23:16.016 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # killprocess 1632216 00:23:16.016 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 1632216 ']' 00:23:16.016 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 1632216 00:23:16.016 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:23:16.016 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:16.016 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1632216 00:23:16.274 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:23:16.274 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:23:16.274 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1632216' 00:23:16.274 killing process with pid 1632216 00:23:16.274 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 1632216 00:23:16.274 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 1632216 00:23:16.532 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:16.532 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:16.532 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:16.532 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:23:16.532 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-save 00:23:16.532 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-restore 00:23:16.532 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:16.532 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:16.532 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:16.532 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:16.532 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:16.532 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.432 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:18.432 00:23:18.432 real 0m6.626s 00:23:18.432 user 0m11.014s 00:23:18.432 sys 0m2.597s 00:23:18.432 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:18.432 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:23:18.432 ************************************ 00:23:18.432 END TEST nvmf_bdevio_no_huge 00:23:18.432 ************************************ 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:18.691 ************************************ 00:23:18.691 START TEST nvmf_tls 00:23:18.691 ************************************ 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:18.691 * Looking for test storage... 00:23:18.691 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:18.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.691 --rc genhtml_branch_coverage=1 00:23:18.691 --rc genhtml_function_coverage=1 00:23:18.691 --rc genhtml_legend=1 00:23:18.691 --rc geninfo_all_blocks=1 00:23:18.691 --rc geninfo_unexecuted_blocks=1 00:23:18.691 00:23:18.691 ' 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:18.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.691 --rc genhtml_branch_coverage=1 00:23:18.691 --rc genhtml_function_coverage=1 00:23:18.691 --rc genhtml_legend=1 00:23:18.691 --rc geninfo_all_blocks=1 00:23:18.691 --rc geninfo_unexecuted_blocks=1 00:23:18.691 00:23:18.691 ' 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:18.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.691 --rc genhtml_branch_coverage=1 00:23:18.691 --rc genhtml_function_coverage=1 00:23:18.691 --rc genhtml_legend=1 00:23:18.691 --rc geninfo_all_blocks=1 00:23:18.691 --rc geninfo_unexecuted_blocks=1 00:23:18.691 00:23:18.691 ' 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:18.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.691 --rc genhtml_branch_coverage=1 00:23:18.691 --rc genhtml_function_coverage=1 00:23:18.691 --rc genhtml_legend=1 00:23:18.691 --rc geninfo_all_blocks=1 00:23:18.691 --rc geninfo_unexecuted_blocks=1 00:23:18.691 00:23:18.691 ' 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
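The cmp_versions walk above is how the harness decides whether the installed lcov predates 2.x before choosing coverage flags: version strings are split on '.'/'-' and compared field by field. A minimal standalone sketch of the same idea, assuming numeric fields (the helper name and the zero-padding of missing fields are illustrative, not the exact scripts/common.sh code):

    # true (exit 0) when dotted version $1 sorts strictly below $2, e.g. version_lt 1.15 2
    version_lt() {
        local -a a b
        IFS=.- read -ra a <<< "$1"
        IFS=.- read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first differing field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov predates 2.x"   # true, matching the trace above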
00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:18.691 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:18.692 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:18.692 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:18.692 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:18.692 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:18.692 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:18.692 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:18.692 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:18.692 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:18.692 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:23:18.692 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:18.692 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:18.692 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:18.692 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:18.692 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:18.692 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:18.692 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:18.692 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.692 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:18.692 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:18.692 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:23:18.692 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:21.221 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:21.221 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:23:21.221 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:21.221 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:21.221 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:21.221 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:21.221 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:21.221 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:23:21.221 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:21.221 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:23:21.221 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:23:21.221 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:23:21.221 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:23:21.221 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:23:21.221 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:23:21.221 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:21.221 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:21.221 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:21.221 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:21.221 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:21.221 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:23:21.221 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:21.221 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:21.221 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:21.221 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:21.221 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:21.221 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:21.221 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:21.221 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:21.221 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:21.221 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:21.221 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:21.221 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:21.221 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:21.221 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:21.222 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:21.222 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:21.222 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:21.222 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # is_hw=yes 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:21.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:21.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:23:21.222 00:23:21.222 --- 10.0.0.2 ping statistics --- 00:23:21.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:21.222 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:21.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:21.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:23:21.222 00:23:21.222 --- 10.0.0.1 ping statistics --- 00:23:21.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:21.222 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # return 0 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1634402 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1634402 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1634402 ']' 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:21.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:21.222 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:21.222 [2024-10-13 01:34:06.580590] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
00:23:21.222 [2024-10-13 01:34:06.580674] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:21.222 [2024-10-13 01:34:06.646395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.222 [2024-10-13 01:34:06.693792] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:21.222 [2024-10-13 01:34:06.693845] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:21.222 [2024-10-13 01:34:06.693858] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:21.222 [2024-10-13 01:34:06.693870] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:21.222 [2024-10-13 01:34:06.693880] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:21.222 [2024-10-13 01:34:06.694460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:21.481 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:21.481 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:21.481 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:21.481 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:21.481 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:21.481 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:21.481 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:23:21.481 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:23:21.739 true 00:23:21.739 01:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:21.739 01:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:23:22.074 01:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:23:22.074 01:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:23:22.074 01:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:22.332 01:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:22.332 01:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:23:22.589 01:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:23:22.589 01:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:23:22.589 01:34:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:23:22.846 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:22.846 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:23:23.104 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:23:23.104 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:23:23.104 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:23.104 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:23:23.363 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:23:23.363 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:23:23.363 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:23:23.621 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:23.621 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:23:23.879 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:23:23.879 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:23:23.879 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:23:24.137 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:24.137 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:23:24.395 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:23:24.395 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:23:24.395 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:23:24.395 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:23:24.395 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:23:24.395 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:23:24.395 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:23:24.395 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:23:24.395 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:23:24.395 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:24.395 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:23:24.395 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:23:24.395 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@728 -- # local prefix key digest 00:23:24.395 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:23:24.395 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=ffeeddccbbaa99887766554433221100 00:23:24.395 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:23:24.395 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:23:24.653 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:24.654 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:23:24.654 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.hyfX8lPvXp 00:23:24.654 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:23:24.654 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.mvSeyxAdvl 00:23:24.654 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:24.654 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:24.654 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.hyfX8lPvXp 00:23:24.654 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.mvSeyxAdvl 00:23:24.654 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:24.912 01:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:23:25.170 01:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.hyfX8lPvXp 00:23:25.170 01:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.hyfX8lPvXp 00:23:25.170 01:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:25.428 [2024-10-13 01:34:10.955309] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:25.428 01:34:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:25.687 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:25.946 [2024-10-13 01:34:11.508780] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:25.946 [2024-10-13 01:34:11.509033] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:25.946 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:26.513 malloc0 00:23:26.513 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:26.771 01:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.hyfX8lPvXp 00:23:27.030 01:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:27.288 01:34:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.hyfX8lPvXp 00:23:37.260 Initializing NVMe Controllers 00:23:37.260 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:37.260 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:37.260 Initialization complete. Launching workers. 00:23:37.260 ======================================================== 00:23:37.260 Latency(us) 00:23:37.260 Device Information : IOPS MiB/s Average min max 00:23:37.260 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7626.69 29.79 8394.37 1223.88 11604.67 00:23:37.260 ======================================================== 00:23:37.260 Total : 7626.69 29.79 8394.37 1223.88 11604.67 00:23:37.260 00:23:37.519 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hyfX8lPvXp 00:23:37.519 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:37.519 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:37.519 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:37.519 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.hyfX8lPvXp 00:23:37.519 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:37.519 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1636433 00:23:37.519 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:37.519 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:37.519 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1636433 /var/tmp/bdevperf.sock 00:23:37.519 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1636433 ']' 00:23:37.519 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:37.519 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:37.519 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
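The two key files written above (/tmp/tmp.hyfX8lPvXp and /tmp/tmp.mvSeyxAdvl) hold TLS pre-shared keys in the NVMe/TCP interchange form traced through format_interchange_psk/format_key: prefix, a two-digit hash identifier, then base64 of the key bytes with a CRC-32 appended. A standalone sketch that produces strings of the shape seen above; the exact byte layout (little-endian CRC-32 over the key) is an assumption inferred from the trace and the string lengths, not a quote of the common.sh helper, and format_psk_sketch is an illustrative name:

    # build an interchange string of the form prefix:hh:base64(key || crc32_le(key)):
    format_psk_sketch() {
        local prefix=$1 key=$2 digest=$3
        python3 - "$prefix" "$key" "$digest" << 'EOF'
import base64, struct, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2].encode(), int(sys.argv[3])
crc = struct.pack("<I", zlib.crc32(key))   # 4-byte little-endian CRC-32 of the key bytes
print("{}:{:02x}:{}:".format(prefix, digest, base64.b64encode(key + crc).decode()))
EOF
    }
    format_psk_sketch NVMeTLSkey-1 00112233445566778899aabbccddeeff 1   # shape: NVMeTLSkey-1:01:MDAx...: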
00:23:37.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:37.519 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:37.519 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.519 [2024-10-13 01:34:22.894073] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:23:37.519 [2024-10-13 01:34:22.894150] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1636433 ] 00:23:37.519 [2024-10-13 01:34:22.953279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.519 [2024-10-13 01:34:23.000757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:37.777 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:37.777 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:37.777 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hyfX8lPvXp 00:23:38.035 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:38.294 [2024-10-13 01:34:23.649140] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:38.294 TLSTESTn1 00:23:38.294 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:38.294 Running I/O for 10 seconds... 
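Pulled out of the interleaved trace above, the target-side TLS setup that precedes this perf run reduces to the following rpc.py sequence (rpc.py stands for the full scripts/rpc.py path used in the trace; NQNs, address, and key file are copied from it):

    rpc.py sock_set_default_impl -i ssl
    rpc.py sock_impl_set_options -i ssl --tls-version 13       # require TLS 1.3 on the ssl sock impl
    rpc.py framework_start_init                                 # target was started with --wait-for-rpc
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.hyfX8lPvXp
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0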
00:23:40.693 3504.00 IOPS, 13.69 MiB/s [2024-10-12T23:34:27.205Z] 3515.50 IOPS, 13.73 MiB/s [2024-10-12T23:34:28.155Z] 3527.67 IOPS, 13.78 MiB/s [2024-10-12T23:34:29.094Z] 3507.00 IOPS, 13.70 MiB/s [2024-10-12T23:34:30.029Z] 3531.60 IOPS, 13.80 MiB/s [2024-10-12T23:34:30.963Z] 3530.67 IOPS, 13.79 MiB/s [2024-10-12T23:34:31.897Z] 3532.86 IOPS, 13.80 MiB/s [2024-10-12T23:34:33.272Z] 3534.12 IOPS, 13.81 MiB/s [2024-10-12T23:34:34.206Z] 3537.89 IOPS, 13.82 MiB/s [2024-10-12T23:34:34.206Z] 3533.00 IOPS, 13.80 MiB/s 00:23:48.628 Latency(us) 00:23:48.628 [2024-10-12T23:34:34.207Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.629 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:48.629 Verification LBA range: start 0x0 length 0x2000 00:23:48.629 TLSTESTn1 : 10.02 3536.99 13.82 0.00 0.00 36123.04 6213.78 36505.98 00:23:48.629 [2024-10-12T23:34:34.207Z] =================================================================================================================== 00:23:48.629 [2024-10-12T23:34:34.207Z] Total : 3536.99 13.82 0.00 0.00 36123.04 6213.78 36505.98 00:23:48.629 { 00:23:48.629 "results": [ 00:23:48.629 { 00:23:48.629 "job": "TLSTESTn1", 00:23:48.629 "core_mask": "0x4", 00:23:48.629 "workload": "verify", 00:23:48.629 "status": "finished", 00:23:48.629 "verify_range": { 00:23:48.629 "start": 0, 00:23:48.629 "length": 8192 00:23:48.629 }, 00:23:48.629 "queue_depth": 128, 00:23:48.629 "io_size": 4096, 00:23:48.629 "runtime": 10.02462, 00:23:48.629 "iops": 3536.991925878487, 00:23:48.629 "mibps": 13.81637471046284, 00:23:48.629 "io_failed": 0, 00:23:48.629 "io_timeout": 0, 00:23:48.629 "avg_latency_us": 36123.03541880149, 00:23:48.629 "min_latency_us": 6213.783703703703, 00:23:48.629 "max_latency_us": 36505.97925925926 00:23:48.629 } 00:23:48.629 ], 00:23:48.629 "core_count": 1 00:23:48.629 } 00:23:48.629 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:48.629 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1636433 00:23:48.629 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1636433 ']' 00:23:48.629 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1636433 00:23:48.629 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:48.629 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:48.629 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1636433 00:23:48.629 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:48.629 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:48.629 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1636433' 00:23:48.629 killing process with pid 1636433 00:23:48.629 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1636433 00:23:48.629 Received shutdown signal, test time was about 10.000000 seconds 00:23:48.629 00:23:48.629 Latency(us) 00:23:48.629 [2024-10-12T23:34:34.207Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.629 [2024-10-12T23:34:34.207Z] 
=================================================================================================================== 00:23:48.629 [2024-10-12T23:34:34.207Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:48.629 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1636433 00:23:48.629 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mvSeyxAdvl 00:23:48.629 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:48.629 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mvSeyxAdvl 00:23:48.629 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:48.629 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:48.629 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:48.629 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:48.629 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mvSeyxAdvl 00:23:48.629 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:48.629 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:48.629 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:48.629 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.mvSeyxAdvl 00:23:48.629 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:48.629 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1637649 00:23:48.629 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:48.629 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:48.629 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1637649 /var/tmp/bdevperf.sock 00:23:48.629 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1637649 ']' 00:23:48.629 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:48.629 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:48.629 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:48.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
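The bdevperf instance launched here is a negative test: target/tls.sh hands it the second key (/tmp/tmp.mvSeyxAdvl), which the target subsystem was never configured with, and wraps run_bdevperf in the NOT helper so the expected attach failure counts as a pass. A minimal sketch of that inversion logic, assuming the shape visible in the trace (the real autotest_common.sh helper has extra handling, e.g. the (( es > 128 )) signal check that appears below):

    # invert a command's exit status: an expected-to-fail command becomes a test pass
    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return "$es"   # killed by a signal: still treated as a real failure
        (( es != 0 ))                    # 0 here would mean the command wrongly succeeded
    }
    NOT false && echo "failure was expected, test passes"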
00:23:48.629 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:48.629 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:48.888 [2024-10-13 01:34:34.216328] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:23:48.888 [2024-10-13 01:34:34.216421] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1637649 ] 00:23:48.888 [2024-10-13 01:34:34.282361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.888 [2024-10-13 01:34:34.334570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:48.888 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:48.888 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:48.888 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.mvSeyxAdvl 00:23:49.454 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:49.712 [2024-10-13 01:34:35.038285] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:49.712 [2024-10-13 01:34:35.049403] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:49.712 [2024-10-13 01:34:35.049447] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f2430 (107): Transport endpoint is not connected 00:23:49.713 [2024-10-13 01:34:35.050439] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f2430 (9): Bad file descriptor 00:23:49.713 [2024-10-13 01:34:35.051439] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:49.713 [2024-10-13 01:34:35.051482] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:49.713 [2024-10-13 01:34:35.051497] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:49.713 [2024-10-13 01:34:35.051540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:49.713 request: 00:23:49.713 { 00:23:49.713 "name": "TLSTEST", 00:23:49.713 "trtype": "tcp", 00:23:49.713 "traddr": "10.0.0.2", 00:23:49.713 "adrfam": "ipv4", 00:23:49.713 "trsvcid": "4420", 00:23:49.713 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.713 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:49.713 "prchk_reftag": false, 00:23:49.713 "prchk_guard": false, 00:23:49.713 "hdgst": false, 00:23:49.713 "ddgst": false, 00:23:49.713 "psk": "key0", 00:23:49.713 "allow_unrecognized_csi": false, 00:23:49.713 "method": "bdev_nvme_attach_controller", 00:23:49.713 "req_id": 1 00:23:49.713 } 00:23:49.713 Got JSON-RPC error response 00:23:49.713 response: 00:23:49.713 { 00:23:49.713 "code": -5, 00:23:49.713 "message": "Input/output error" 00:23:49.713 } 00:23:49.713 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1637649 00:23:49.713 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1637649 ']' 00:23:49.713 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1637649 00:23:49.713 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:49.713 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:49.713 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1637649 00:23:49.713 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:49.713 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:49.713 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1637649' 00:23:49.713 killing process with pid 1637649 00:23:49.713 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1637649 00:23:49.713 Received shutdown signal, test time was about 10.000000 seconds 00:23:49.713 00:23:49.713 Latency(us) 00:23:49.713 [2024-10-12T23:34:35.291Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:49.713 [2024-10-12T23:34:35.291Z] =================================================================================================================== 00:23:49.713 [2024-10-12T23:34:35.291Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:49.713 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1637649 00:23:49.971 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:49.971 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:49.971 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:49.971 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:49.971 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:49.971 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.hyfX8lPvXp 00:23:49.971 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:49.971 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.hyfX8lPvXp 00:23:49.971 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:49.971 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:49.971 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:49.971 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:49.971 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.hyfX8lPvXp 00:23:49.971 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:49.971 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:49.971 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:49.971 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.hyfX8lPvXp 00:23:49.971 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:49.971 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1637789 00:23:49.971 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:49.971 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:49.971 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1637789 /var/tmp/bdevperf.sock 00:23:49.971 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1637789 ']' 00:23:49.971 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:49.971 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:49.971 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:49.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:49.971 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:49.971 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:49.971 [2024-10-13 01:34:35.359046] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
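Note: every case in this section drives the same initiator binary, visible in the target/tls.sh@27 lines. bdevperf is started idle (-z appears to hold it until it has been configured over the RPC socket given with -r), and the harness then feeds it the keyring and controller through that socket. The invocation, shortened to the repo-relative path:

  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10

The -q 128 -o 4096 -w verify -t 10 arguments select queue depth 128, 4096-byte I/O, a verify workload and a 10 second run; for the passing case later in this log the run itself is kicked off with examples/bdev/bdevperf/bdevperf.py ... perform_tests against the same socket.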
00:23:49.971 [2024-10-13 01:34:35.359125] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1637789 ] 00:23:49.971 [2024-10-13 01:34:35.418287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.971 [2024-10-13 01:34:35.463547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:50.229 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:50.229 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:50.229 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hyfX8lPvXp 00:23:50.487 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:23:50.745 [2024-10-13 01:34:36.111417] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:50.745 [2024-10-13 01:34:36.121988] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:50.745 [2024-10-13 01:34:36.122017] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:50.745 [2024-10-13 01:34:36.122067] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:50.745 [2024-10-13 01:34:36.122491] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c53430 (107): Transport endpoint is not connected 00:23:50.745 [2024-10-13 01:34:36.123483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c53430 (9): Bad file descriptor 00:23:50.745 [2024-10-13 01:34:36.124483] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:50.745 [2024-10-13 01:34:36.124503] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:50.745 [2024-10-13 01:34:36.124541] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:50.745 [2024-10-13 01:34:36.124559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:50.745 request: 00:23:50.745 { 00:23:50.745 "name": "TLSTEST", 00:23:50.745 "trtype": "tcp", 00:23:50.745 "traddr": "10.0.0.2", 00:23:50.745 "adrfam": "ipv4", 00:23:50.745 "trsvcid": "4420", 00:23:50.745 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:50.745 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:50.745 "prchk_reftag": false, 00:23:50.745 "prchk_guard": false, 00:23:50.745 "hdgst": false, 00:23:50.745 "ddgst": false, 00:23:50.745 "psk": "key0", 00:23:50.745 "allow_unrecognized_csi": false, 00:23:50.745 "method": "bdev_nvme_attach_controller", 00:23:50.745 "req_id": 1 00:23:50.745 } 00:23:50.745 Got JSON-RPC error response 00:23:50.745 response: 00:23:50.745 { 00:23:50.745 "code": -5, 00:23:50.745 "message": "Input/output error" 00:23:50.745 } 00:23:50.745 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1637789 00:23:50.745 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1637789 ']' 00:23:50.745 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1637789 00:23:50.745 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:50.745 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:50.745 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1637789 00:23:50.745 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:50.745 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:50.745 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1637789' 00:23:50.745 killing process with pid 1637789 00:23:50.745 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1637789 00:23:50.745 Received shutdown signal, test time was about 10.000000 seconds 00:23:50.745 00:23:50.745 Latency(us) 00:23:50.745 [2024-10-12T23:34:36.323Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:50.745 [2024-10-12T23:34:36.323Z] =================================================================================================================== 00:23:50.745 [2024-10-12T23:34:36.323Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:50.745 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1637789 00:23:51.002 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:51.002 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:51.002 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:51.002 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:51.002 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:51.002 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.hyfX8lPvXp 00:23:51.002 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:51.002 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.hyfX8lPvXp 00:23:51.002 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:51.002 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:51.002 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:51.002 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:51.002 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.hyfX8lPvXp 00:23:51.002 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:51.002 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:51.002 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:51.002 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.hyfX8lPvXp 00:23:51.002 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:51.002 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1637924 00:23:51.002 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:51.002 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:51.002 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1637924 /var/tmp/bdevperf.sock 00:23:51.002 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1637924 ']' 00:23:51.003 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:51.003 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:51.003 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:51.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:51.003 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:51.003 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.003 [2024-10-13 01:34:36.406419] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
00:23:51.003 [2024-10-13 01:34:36.406524] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1637924 ] 00:23:51.003 [2024-10-13 01:34:36.466613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.003 [2024-10-13 01:34:36.511345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:51.260 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:51.260 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:51.260 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hyfX8lPvXp 00:23:51.518 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:51.776 [2024-10-13 01:34:37.175916] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:51.776 [2024-10-13 01:34:37.184745] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:51.776 [2024-10-13 01:34:37.184791] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:51.776 [2024-10-13 01:34:37.184839] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:51.776 [2024-10-13 01:34:37.185158] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd9c430 (107): Transport endpoint is not connected 00:23:51.776 [2024-10-13 01:34:37.186148] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd9c430 (9): Bad file descriptor 00:23:51.776 [2024-10-13 01:34:37.187147] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:51.776 [2024-10-13 01:34:37.187168] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:51.776 [2024-10-13 01:34:37.187196] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:23:51.776 [2024-10-13 01:34:37.187215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
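Note: both PSK lookup failures above come from the target side of the handshake. The PSK identity sent by the initiator carries the host NQN and subsystem NQN ("NVMe0R01 <hostnqn> <subnqn>" in the error text), and the target only finds a key if a PSK was registered for exactly that host/subsystem pair; neither host2 against cnode1 nor host1 against cnode2 has one, so the handshake is refused and the initiator sees the socket close. The registration step that satisfies the lookup is the same RPC the passing case uses later in this log (key name and NQNs as used there):

  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0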
00:23:51.776 request: 00:23:51.776 { 00:23:51.776 "name": "TLSTEST", 00:23:51.776 "trtype": "tcp", 00:23:51.776 "traddr": "10.0.0.2", 00:23:51.776 "adrfam": "ipv4", 00:23:51.776 "trsvcid": "4420", 00:23:51.776 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:51.776 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:51.776 "prchk_reftag": false, 00:23:51.776 "prchk_guard": false, 00:23:51.776 "hdgst": false, 00:23:51.776 "ddgst": false, 00:23:51.776 "psk": "key0", 00:23:51.776 "allow_unrecognized_csi": false, 00:23:51.776 "method": "bdev_nvme_attach_controller", 00:23:51.776 "req_id": 1 00:23:51.776 } 00:23:51.776 Got JSON-RPC error response 00:23:51.776 response: 00:23:51.776 { 00:23:51.776 "code": -5, 00:23:51.776 "message": "Input/output error" 00:23:51.776 } 00:23:51.776 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1637924 00:23:51.776 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1637924 ']' 00:23:51.776 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1637924 00:23:51.776 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:51.776 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:51.776 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1637924 00:23:51.776 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:51.776 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:51.776 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1637924' 00:23:51.776 killing process with pid 1637924 00:23:51.776 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1637924 00:23:51.776 Received shutdown signal, test time was about 10.000000 seconds 00:23:51.776 00:23:51.776 Latency(us) 00:23:51.776 [2024-10-12T23:34:37.354Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.776 [2024-10-12T23:34:37.354Z] =================================================================================================================== 00:23:51.776 [2024-10-12T23:34:37.354Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:51.776 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1637924 00:23:52.035 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:52.035 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:52.035 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:52.035 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:52.035 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:52.035 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:52.035 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:23:52.035 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:52.035 
01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:23:52.035 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:52.035 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:23:52.035 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:52.035 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:52.035 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:52.035 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:52.035 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:52.035 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:52.035 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:52.035 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1638064 00:23:52.035 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:52.035 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:52.035 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1638064 /var/tmp/bdevperf.sock 00:23:52.035 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1638064 ']' 00:23:52.035 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:52.035 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:52.035 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:52.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:52.035 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:52.035 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.035 [2024-10-13 01:34:37.454604] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
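Note: this case (target/tls.sh@156) passes an empty string where the key path belongs. The lines that follow show the keyring refusing it up front ("Non-absolute paths are not allowed", JSON-RPC code -1), so key0 is never created, and the subsequent attach fails with -126, "Required key not available". The rejected call, as issued by the harness:

  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''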
00:23:52.035 [2024-10-13 01:34:37.454696] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1638064 ] 00:23:52.035 [2024-10-13 01:34:37.511648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.035 [2024-10-13 01:34:37.558053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:52.293 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:52.293 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:52.293 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:23:52.551 [2024-10-13 01:34:37.946111] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:23:52.551 [2024-10-13 01:34:37.946150] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:52.551 request: 00:23:52.551 { 00:23:52.551 "name": "key0", 00:23:52.551 "path": "", 00:23:52.551 "method": "keyring_file_add_key", 00:23:52.551 "req_id": 1 00:23:52.551 } 00:23:52.551 Got JSON-RPC error response 00:23:52.551 response: 00:23:52.551 { 00:23:52.551 "code": -1, 00:23:52.551 "message": "Operation not permitted" 00:23:52.551 } 00:23:52.551 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:52.808 [2024-10-13 01:34:38.214961] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:52.808 [2024-10-13 01:34:38.215024] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:52.808 request: 00:23:52.808 { 00:23:52.808 "name": "TLSTEST", 00:23:52.808 "trtype": "tcp", 00:23:52.808 "traddr": "10.0.0.2", 00:23:52.808 "adrfam": "ipv4", 00:23:52.808 "trsvcid": "4420", 00:23:52.808 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.808 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:52.808 "prchk_reftag": false, 00:23:52.808 "prchk_guard": false, 00:23:52.808 "hdgst": false, 00:23:52.808 "ddgst": false, 00:23:52.808 "psk": "key0", 00:23:52.808 "allow_unrecognized_csi": false, 00:23:52.808 "method": "bdev_nvme_attach_controller", 00:23:52.808 "req_id": 1 00:23:52.808 } 00:23:52.808 Got JSON-RPC error response 00:23:52.808 response: 00:23:52.808 { 00:23:52.808 "code": -126, 00:23:52.808 "message": "Required key not available" 00:23:52.808 } 00:23:52.808 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1638064 00:23:52.808 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1638064 ']' 00:23:52.808 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1638064 00:23:52.808 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:52.808 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:52.808 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
1638064 00:23:52.808 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:52.808 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:52.808 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1638064' 00:23:52.808 killing process with pid 1638064 00:23:52.808 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1638064 00:23:52.808 Received shutdown signal, test time was about 10.000000 seconds 00:23:52.808 00:23:52.808 Latency(us) 00:23:52.808 [2024-10-12T23:34:38.386Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:52.808 [2024-10-12T23:34:38.386Z] =================================================================================================================== 00:23:52.808 [2024-10-12T23:34:38.386Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:52.808 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1638064 00:23:53.065 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:53.065 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:23:53.065 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:53.065 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:53.065 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:53.065 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1634402 00:23:53.065 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1634402 ']' 00:23:53.065 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1634402 00:23:53.065 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:23:53.065 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:53.065 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1634402 00:23:53.065 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:53.065 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:53.065 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1634402' 00:23:53.065 killing process with pid 1634402 00:23:53.065 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1634402 00:23:53.065 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1634402 00:23:53.323 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:53.323 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:53.323 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:23:53.323 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:23:53.323 01:34:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:53.323 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=2 00:23:53.323 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:23:53.323 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:53.323 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:23:53.324 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.Gscv29RW8P 00:23:53.324 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:53.324 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.Gscv29RW8P 00:23:53.324 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:23:53.324 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:53.324 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:53.324 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.324 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1638312 00:23:53.324 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:53.324 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1638312 00:23:53.324 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1638312 ']' 00:23:53.324 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:53.324 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:53.324 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:53.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:53.324 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:53.324 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.324 [2024-10-13 01:34:38.826812] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:23:53.324 [2024-10-13 01:34:38.826923] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:53.324 [2024-10-13 01:34:38.894603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.582 [2024-10-13 01:34:38.941304] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:53.582 [2024-10-13 01:34:38.941361] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
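Note: the key_long value above is produced by the small inline python that nvmf/common.sh@731 runs. It wraps the configured key string into the NVMe TLS PSK interchange form NVMeTLSkey-1:<hash>:<base64 payload>:, with the digest argument 2 showing up as the :02: field. A sketch of what that helper appears to compute, assuming the payload is the ASCII key string with a little-endian CRC32 appended before base64 encoding (the prefix, key string and 02 field are taken from the trace; the CRC variant and byte order are assumptions):

  # sketch only, assumptions as stated above
  python3 -c 'import base64,struct,zlib; k=b"00112233445566778899aabbccddeeff0011223344556677"; print("NVMeTLSkey-1:02:" + base64.b64encode(k + struct.pack("<I", zlib.crc32(k) & 0xFFFFFFFF)).decode() + ":")'

The resulting string is written to the mktemp file (/tmp/tmp.Gscv29RW8P) and chmod'ed to 0600, which matters for the permission tests further down.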
00:23:53.582 [2024-10-13 01:34:38.941388] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:53.582 [2024-10-13 01:34:38.941399] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:53.582 [2024-10-13 01:34:38.941409] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:53.582 [2024-10-13 01:34:38.942080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:53.582 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:53.582 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:53.582 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:53.582 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:53.582 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.582 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:53.582 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.Gscv29RW8P 00:23:53.582 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Gscv29RW8P 00:23:53.582 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:53.840 [2024-10-13 01:34:39.326569] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:53.840 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:54.098 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:54.357 [2024-10-13 01:34:39.860030] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:54.357 [2024-10-13 01:34:39.860306] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:54.357 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:54.614 malloc0 00:23:54.614 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:54.871 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Gscv29RW8P 00:23:55.437 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:55.437 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Gscv29RW8P 00:23:55.437 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:23:55.437 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:55.437 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:55.437 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Gscv29RW8P 00:23:55.437 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:55.437 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1638521 00:23:55.437 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:55.437 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:55.437 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1638521 /var/tmp/bdevperf.sock 00:23:55.437 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1638521 ']' 00:23:55.437 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:55.437 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:55.437 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:55.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:55.437 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:55.437 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:55.695 [2024-10-13 01:34:41.026836] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
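Note: by this point the trace above has configured the target (pid 1638312) end to end and launched the initiator for the passing case: TCP transport, subsystem cnode1, a TLS-enabled listener (-k), a malloc namespace, the interchange key registered as key0, and host1 authorized with that key; the lines below hand the same key file to bdevperf over its own RPC socket and attach. Condensed from the trace, with the long workspace paths shortened to repo-relative ones:

  # target side, default /var/tmp/spdk.sock
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Gscv29RW8P
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
  # initiator side, the bdevperf RPC socket (the calls that follow in the trace)
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Gscv29RW8P
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

This time the attach succeeds, the bdev shows up as TLSTESTn1, and the 10 second verify run below completes at roughly 3.4k IOPS.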
00:23:55.695 [2024-10-13 01:34:41.026944] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1638521 ] 00:23:55.695 [2024-10-13 01:34:41.090043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.695 [2024-10-13 01:34:41.136763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:55.695 01:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:55.695 01:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:23:55.695 01:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Gscv29RW8P 00:23:56.260 01:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:56.261 [2024-10-13 01:34:41.780826] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:56.519 TLSTESTn1 00:23:56.519 01:34:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:56.519 Running I/O for 10 seconds... 00:23:58.826 3393.00 IOPS, 13.25 MiB/s [2024-10-12T23:34:45.338Z] 3404.00 IOPS, 13.30 MiB/s [2024-10-12T23:34:46.272Z] 3425.67 IOPS, 13.38 MiB/s [2024-10-12T23:34:47.205Z] 3426.75 IOPS, 13.39 MiB/s [2024-10-12T23:34:48.138Z] 3438.60 IOPS, 13.43 MiB/s [2024-10-12T23:34:49.071Z] 3448.33 IOPS, 13.47 MiB/s [2024-10-12T23:34:50.004Z] 3439.00 IOPS, 13.43 MiB/s [2024-10-12T23:34:51.378Z] 3428.50 IOPS, 13.39 MiB/s [2024-10-12T23:34:52.360Z] 3426.33 IOPS, 13.38 MiB/s [2024-10-12T23:34:52.360Z] 3432.50 IOPS, 13.41 MiB/s 00:24:06.782 Latency(us) 00:24:06.782 [2024-10-12T23:34:52.360Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:06.782 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:06.782 Verification LBA range: start 0x0 length 0x2000 00:24:06.782 TLSTESTn1 : 10.03 3436.21 13.42 0.00 0.00 37182.27 9223.59 33204.91 00:24:06.782 [2024-10-12T23:34:52.360Z] =================================================================================================================== 00:24:06.782 [2024-10-12T23:34:52.360Z] Total : 3436.21 13.42 0.00 0.00 37182.27 9223.59 33204.91 00:24:06.782 { 00:24:06.782 "results": [ 00:24:06.782 { 00:24:06.782 "job": "TLSTESTn1", 00:24:06.782 "core_mask": "0x4", 00:24:06.782 "workload": "verify", 00:24:06.782 "status": "finished", 00:24:06.782 "verify_range": { 00:24:06.782 "start": 0, 00:24:06.782 "length": 8192 00:24:06.782 }, 00:24:06.782 "queue_depth": 128, 00:24:06.782 "io_size": 4096, 00:24:06.782 "runtime": 10.026177, 00:24:06.782 "iops": 3436.205046050952, 00:24:06.782 "mibps": 13.422675961136532, 00:24:06.782 "io_failed": 0, 00:24:06.782 "io_timeout": 0, 00:24:06.782 "avg_latency_us": 37182.266021646865, 00:24:06.782 "min_latency_us": 9223.585185185186, 00:24:06.782 "max_latency_us": 33204.90666666667 00:24:06.782 } 00:24:06.782 ], 00:24:06.782 
"core_count": 1 00:24:06.782 } 00:24:06.782 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:06.782 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1638521 00:24:06.782 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1638521 ']' 00:24:06.782 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1638521 00:24:06.782 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:06.782 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:06.782 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1638521 00:24:06.782 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:06.782 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:06.782 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1638521' 00:24:06.782 killing process with pid 1638521 00:24:06.782 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1638521 00:24:06.782 Received shutdown signal, test time was about 10.000000 seconds 00:24:06.782 00:24:06.782 Latency(us) 00:24:06.782 [2024-10-12T23:34:52.360Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:06.782 [2024-10-12T23:34:52.360Z] =================================================================================================================== 00:24:06.782 [2024-10-12T23:34:52.360Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:06.782 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1638521 00:24:06.782 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.Gscv29RW8P 00:24:06.782 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Gscv29RW8P 00:24:06.782 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:06.782 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Gscv29RW8P 00:24:06.782 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:24:06.782 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:06.782 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:24:06.782 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:06.782 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Gscv29RW8P 00:24:06.782 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:06.782 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:06.782 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:24:06.782 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Gscv29RW8P 00:24:06.782 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:06.782 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1639822 00:24:06.783 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:06.783 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:06.783 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1639822 /var/tmp/bdevperf.sock 00:24:06.783 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1639822 ']' 00:24:06.783 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:06.783 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:06.783 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:06.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:06.783 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:06.783 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:06.783 [2024-10-13 01:34:52.305454] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
00:24:06.783 [2024-10-13 01:34:52.305565] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1639822 ] 00:24:07.042 [2024-10-13 01:34:52.363887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.042 [2024-10-13 01:34:52.408055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:07.042 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:07.042 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:07.042 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Gscv29RW8P 00:24:07.299 [2024-10-13 01:34:52.793858] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Gscv29RW8P': 0100666 00:24:07.300 [2024-10-13 01:34:52.793896] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:07.300 request: 00:24:07.300 { 00:24:07.300 "name": "key0", 00:24:07.300 "path": "/tmp/tmp.Gscv29RW8P", 00:24:07.300 "method": "keyring_file_add_key", 00:24:07.300 "req_id": 1 00:24:07.300 } 00:24:07.300 Got JSON-RPC error response 00:24:07.300 response: 00:24:07.300 { 00:24:07.300 "code": -1, 00:24:07.300 "message": "Operation not permitted" 00:24:07.300 } 00:24:07.300 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:07.558 [2024-10-13 01:34:53.058669] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:07.558 [2024-10-13 01:34:53.058727] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:24:07.558 request: 00:24:07.558 { 00:24:07.558 "name": "TLSTEST", 00:24:07.558 "trtype": "tcp", 00:24:07.558 "traddr": "10.0.0.2", 00:24:07.558 "adrfam": "ipv4", 00:24:07.558 "trsvcid": "4420", 00:24:07.558 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:07.558 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:07.558 "prchk_reftag": false, 00:24:07.558 "prchk_guard": false, 00:24:07.558 "hdgst": false, 00:24:07.558 "ddgst": false, 00:24:07.558 "psk": "key0", 00:24:07.558 "allow_unrecognized_csi": false, 00:24:07.558 "method": "bdev_nvme_attach_controller", 00:24:07.558 "req_id": 1 00:24:07.558 } 00:24:07.558 Got JSON-RPC error response 00:24:07.558 response: 00:24:07.558 { 00:24:07.558 "code": -126, 00:24:07.558 "message": "Required key not available" 00:24:07.558 } 00:24:07.558 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1639822 00:24:07.558 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1639822 ']' 00:24:07.558 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1639822 00:24:07.558 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:07.558 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:07.558 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1639822 00:24:07.558 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:07.558 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:07.558 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1639822' 00:24:07.558 killing process with pid 1639822 00:24:07.558 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1639822 00:24:07.558 Received shutdown signal, test time was about 10.000000 seconds 00:24:07.558 00:24:07.558 Latency(us) 00:24:07.558 [2024-10-12T23:34:53.136Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:07.558 [2024-10-12T23:34:53.136Z] =================================================================================================================== 00:24:07.558 [2024-10-12T23:34:53.136Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:07.558 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1639822 00:24:07.816 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:07.816 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:07.816 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:07.816 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:07.816 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:07.816 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1638312 00:24:07.816 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1638312 ']' 00:24:07.816 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1638312 00:24:07.816 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:07.816 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:07.816 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1638312 00:24:07.816 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:07.816 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:07.816 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1638312' 00:24:07.816 killing process with pid 1638312 00:24:07.816 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1638312 00:24:07.816 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1638312 00:24:08.074 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:24:08.074 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:08.074 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:08.074 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:08.074 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # 
nvmfpid=1639982 00:24:08.074 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:08.074 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1639982 00:24:08.074 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1639982 ']' 00:24:08.074 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:08.074 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:08.074 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:08.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:08.074 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:08.074 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:08.074 [2024-10-13 01:34:53.623574] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:24:08.074 [2024-10-13 01:34:53.623675] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:08.332 [2024-10-13 01:34:53.690281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.332 [2024-10-13 01:34:53.734617] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:08.332 [2024-10-13 01:34:53.734676] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:08.332 [2024-10-13 01:34:53.734701] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:08.332 [2024-10-13 01:34:53.734713] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:08.332 [2024-10-13 01:34:53.734732] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
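Note: the two RPC failures traced above are both the PSK file-permission check in keyring.c. keyring_file_add_key rejects the interchange file because it is group/world readable (mode 0100666), so 'key0' never enters the keyring and the following bdev_nvme_attach_controller --psk key0 fails with "Required key not available". A minimal sketch of the failure and of the fix the script applies later (chmod 0600 at target/tls.sh@182), using the same rpc.py calls that appear in this log; the key path is the test's temporary PSK interchange file and stands in for any such file, and the rpc.py path is shortened to be relative to the SPDK checkout:

  # key file was created group/world readable (0666) -> rejected by keyring_file_check_path
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Gscv29RW8P   # fails: -1 Operation not permitted
  # restrict the key file to its owner, then retry
  chmod 0600 /tmp/tmp.Gscv29RW8P
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Gscv29RW8P   # succeeds once the mode is 0600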
00:24:08.332 [2024-10-13 01:34:53.735292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:08.332 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:08.332 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:08.332 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:08.332 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:08.332 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:08.332 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:08.332 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.Gscv29RW8P 00:24:08.332 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:24:08.332 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.Gscv29RW8P 00:24:08.332 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:24:08.332 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:08.332 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:24:08.332 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:08.332 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.Gscv29RW8P 00:24:08.332 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Gscv29RW8P 00:24:08.332 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:08.589 [2024-10-13 01:34:54.118526] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:08.589 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:08.846 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:09.411 [2024-10-13 01:34:54.692031] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:09.411 [2024-10-13 01:34:54.692300] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:09.411 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:09.411 malloc0 00:24:09.669 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:09.927 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Gscv29RW8P 00:24:10.185 [2024-10-13 
01:34:55.506259] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Gscv29RW8P': 0100666 00:24:10.185 [2024-10-13 01:34:55.506306] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:10.185 request: 00:24:10.185 { 00:24:10.185 "name": "key0", 00:24:10.185 "path": "/tmp/tmp.Gscv29RW8P", 00:24:10.185 "method": "keyring_file_add_key", 00:24:10.185 "req_id": 1 00:24:10.185 } 00:24:10.185 Got JSON-RPC error response 00:24:10.185 response: 00:24:10.185 { 00:24:10.185 "code": -1, 00:24:10.185 "message": "Operation not permitted" 00:24:10.185 } 00:24:10.185 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:10.442 [2024-10-13 01:34:55.770994] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:24:10.442 [2024-10-13 01:34:55.771063] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:24:10.442 request: 00:24:10.442 { 00:24:10.442 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:10.442 "host": "nqn.2016-06.io.spdk:host1", 00:24:10.442 "psk": "key0", 00:24:10.442 "method": "nvmf_subsystem_add_host", 00:24:10.442 "req_id": 1 00:24:10.442 } 00:24:10.442 Got JSON-RPC error response 00:24:10.442 response: 00:24:10.442 { 00:24:10.442 "code": -32603, 00:24:10.442 "message": "Internal error" 00:24:10.442 } 00:24:10.442 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:24:10.442 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:10.442 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:10.442 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:10.442 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1639982 00:24:10.442 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1639982 ']' 00:24:10.442 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1639982 00:24:10.442 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:10.442 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:10.442 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1639982 00:24:10.442 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:10.442 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:10.442 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1639982' 00:24:10.442 killing process with pid 1639982 00:24:10.442 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1639982 00:24:10.442 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1639982 00:24:10.700 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.Gscv29RW8P 00:24:10.700 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:24:10.700 01:34:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:10.700 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:10.700 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:10.700 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1640389 00:24:10.700 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:10.700 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1640389 00:24:10.700 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1640389 ']' 00:24:10.700 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:10.700 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:10.700 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:10.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:10.700 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:10.700 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:10.700 [2024-10-13 01:34:56.108322] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:24:10.700 [2024-10-13 01:34:56.108415] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:10.700 [2024-10-13 01:34:56.173232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.700 [2024-10-13 01:34:56.221293] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:10.700 [2024-10-13 01:34:56.221357] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:10.700 [2024-10-13 01:34:56.221371] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:10.700 [2024-10-13 01:34:56.221383] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:10.700 [2024-10-13 01:34:56.221393] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
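For reference, the target-side sequence driven by setup_nvmf_tgt (target/tls.sh@50-59), traced once above with the bad key mode and repeated successfully below after the chmod fix, maps onto these RPCs. This is a sketch assembled from the commands in this log: the NQNs, address, port and key path are the test's own values, the rpc.py path is shortened to the SPDK checkout, and the CI job's netns wrapper (ip netns exec cvl_0_0_ns_spdk) is omitted:

  rpc=scripts/rpc.py                                     # target RPC socket defaults to /var/tmp/spdk.sock
  $rpc nvmf_create_transport -t tcp -o                   # create the TCP transport
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k marks the listener as TLS (secure_channel)
  $rpc bdev_malloc_create 32 4096 -b malloc0             # backing namespace for the test
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc keyring_file_add_key key0 /tmp/tmp.Gscv29RW8P     # requires the PSK file to be mode 0600
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0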
00:24:10.700 [2024-10-13 01:34:56.221999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:10.958 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:10.958 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:10.958 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:10.958 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:10.958 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:10.958 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:10.958 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.Gscv29RW8P 00:24:10.958 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Gscv29RW8P 00:24:10.958 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:11.217 [2024-10-13 01:34:56.615288] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:11.217 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:11.475 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:11.733 [2024-10-13 01:34:57.152795] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:11.733 [2024-10-13 01:34:57.153104] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:11.733 01:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:11.991 malloc0 00:24:11.992 01:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:12.249 01:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Gscv29RW8P 00:24:12.506 01:34:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:12.764 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1640673 00:24:12.764 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:12.764 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:12.764 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1640673 /var/tmp/bdevperf.sock 00:24:12.764 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 1640673 ']' 00:24:12.764 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:12.764 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:12.764 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:12.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:12.764 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:12.764 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:12.764 [2024-10-13 01:34:58.278936] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:24:12.764 [2024-10-13 01:34:58.279027] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1640673 ] 00:24:12.764 [2024-10-13 01:34:58.336056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.021 [2024-10-13 01:34:58.382509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:13.021 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:13.021 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:13.021 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Gscv29RW8P 00:24:13.278 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:13.536 [2024-10-13 01:34:59.018073] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:13.536 TLSTESTn1 00:24:13.536 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:24:14.102 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:24:14.102 "subsystems": [ 00:24:14.102 { 00:24:14.102 "subsystem": "keyring", 00:24:14.102 "config": [ 00:24:14.102 { 00:24:14.102 "method": "keyring_file_add_key", 00:24:14.102 "params": { 00:24:14.102 "name": "key0", 00:24:14.102 "path": "/tmp/tmp.Gscv29RW8P" 00:24:14.102 } 00:24:14.102 } 00:24:14.102 ] 00:24:14.102 }, 00:24:14.102 { 00:24:14.102 "subsystem": "iobuf", 00:24:14.102 "config": [ 00:24:14.102 { 00:24:14.102 "method": "iobuf_set_options", 00:24:14.102 "params": { 00:24:14.102 "small_pool_count": 8192, 00:24:14.102 "large_pool_count": 1024, 00:24:14.102 "small_bufsize": 8192, 00:24:14.102 "large_bufsize": 135168 00:24:14.102 } 00:24:14.102 } 00:24:14.102 ] 00:24:14.102 }, 00:24:14.102 { 00:24:14.102 "subsystem": "sock", 00:24:14.102 "config": [ 00:24:14.102 { 00:24:14.102 "method": "sock_set_default_impl", 00:24:14.102 "params": { 00:24:14.102 "impl_name": "posix" 00:24:14.102 } 00:24:14.102 }, 
00:24:14.102 { 00:24:14.102 "method": "sock_impl_set_options", 00:24:14.102 "params": { 00:24:14.102 "impl_name": "ssl", 00:24:14.102 "recv_buf_size": 4096, 00:24:14.102 "send_buf_size": 4096, 00:24:14.102 "enable_recv_pipe": true, 00:24:14.102 "enable_quickack": false, 00:24:14.102 "enable_placement_id": 0, 00:24:14.102 "enable_zerocopy_send_server": true, 00:24:14.102 "enable_zerocopy_send_client": false, 00:24:14.102 "zerocopy_threshold": 0, 00:24:14.102 "tls_version": 0, 00:24:14.102 "enable_ktls": false 00:24:14.102 } 00:24:14.102 }, 00:24:14.102 { 00:24:14.102 "method": "sock_impl_set_options", 00:24:14.102 "params": { 00:24:14.102 "impl_name": "posix", 00:24:14.102 "recv_buf_size": 2097152, 00:24:14.102 "send_buf_size": 2097152, 00:24:14.102 "enable_recv_pipe": true, 00:24:14.102 "enable_quickack": false, 00:24:14.102 "enable_placement_id": 0, 00:24:14.102 "enable_zerocopy_send_server": true, 00:24:14.102 "enable_zerocopy_send_client": false, 00:24:14.102 "zerocopy_threshold": 0, 00:24:14.102 "tls_version": 0, 00:24:14.102 "enable_ktls": false 00:24:14.102 } 00:24:14.102 } 00:24:14.102 ] 00:24:14.102 }, 00:24:14.102 { 00:24:14.102 "subsystem": "vmd", 00:24:14.102 "config": [] 00:24:14.102 }, 00:24:14.102 { 00:24:14.102 "subsystem": "accel", 00:24:14.102 "config": [ 00:24:14.102 { 00:24:14.102 "method": "accel_set_options", 00:24:14.102 "params": { 00:24:14.102 "small_cache_size": 128, 00:24:14.102 "large_cache_size": 16, 00:24:14.102 "task_count": 2048, 00:24:14.102 "sequence_count": 2048, 00:24:14.102 "buf_count": 2048 00:24:14.102 } 00:24:14.102 } 00:24:14.102 ] 00:24:14.102 }, 00:24:14.102 { 00:24:14.102 "subsystem": "bdev", 00:24:14.102 "config": [ 00:24:14.102 { 00:24:14.102 "method": "bdev_set_options", 00:24:14.102 "params": { 00:24:14.102 "bdev_io_pool_size": 65535, 00:24:14.102 "bdev_io_cache_size": 256, 00:24:14.102 "bdev_auto_examine": true, 00:24:14.102 "iobuf_small_cache_size": 128, 00:24:14.102 "iobuf_large_cache_size": 16 00:24:14.102 } 00:24:14.102 }, 00:24:14.102 { 00:24:14.102 "method": "bdev_raid_set_options", 00:24:14.102 "params": { 00:24:14.102 "process_window_size_kb": 1024, 00:24:14.102 "process_max_bandwidth_mb_sec": 0 00:24:14.102 } 00:24:14.102 }, 00:24:14.102 { 00:24:14.102 "method": "bdev_iscsi_set_options", 00:24:14.102 "params": { 00:24:14.102 "timeout_sec": 30 00:24:14.102 } 00:24:14.102 }, 00:24:14.102 { 00:24:14.102 "method": "bdev_nvme_set_options", 00:24:14.102 "params": { 00:24:14.102 "action_on_timeout": "none", 00:24:14.102 "timeout_us": 0, 00:24:14.102 "timeout_admin_us": 0, 00:24:14.102 "keep_alive_timeout_ms": 10000, 00:24:14.102 "arbitration_burst": 0, 00:24:14.102 "low_priority_weight": 0, 00:24:14.102 "medium_priority_weight": 0, 00:24:14.102 "high_priority_weight": 0, 00:24:14.102 "nvme_adminq_poll_period_us": 10000, 00:24:14.102 "nvme_ioq_poll_period_us": 0, 00:24:14.102 "io_queue_requests": 0, 00:24:14.102 "delay_cmd_submit": true, 00:24:14.102 "transport_retry_count": 4, 00:24:14.102 "bdev_retry_count": 3, 00:24:14.102 "transport_ack_timeout": 0, 00:24:14.102 "ctrlr_loss_timeout_sec": 0, 00:24:14.102 "reconnect_delay_sec": 0, 00:24:14.102 "fast_io_fail_timeout_sec": 0, 00:24:14.102 "disable_auto_failback": false, 00:24:14.102 "generate_uuids": false, 00:24:14.102 "transport_tos": 0, 00:24:14.102 "nvme_error_stat": false, 00:24:14.102 "rdma_srq_size": 0, 00:24:14.102 "io_path_stat": false, 00:24:14.103 "allow_accel_sequence": false, 00:24:14.103 "rdma_max_cq_size": 0, 00:24:14.103 "rdma_cm_event_timeout_ms": 0, 00:24:14.103 
"dhchap_digests": [ 00:24:14.103 "sha256", 00:24:14.103 "sha384", 00:24:14.103 "sha512" 00:24:14.103 ], 00:24:14.103 "dhchap_dhgroups": [ 00:24:14.103 "null", 00:24:14.103 "ffdhe2048", 00:24:14.103 "ffdhe3072", 00:24:14.103 "ffdhe4096", 00:24:14.103 "ffdhe6144", 00:24:14.103 "ffdhe8192" 00:24:14.103 ] 00:24:14.103 } 00:24:14.103 }, 00:24:14.103 { 00:24:14.103 "method": "bdev_nvme_set_hotplug", 00:24:14.103 "params": { 00:24:14.103 "period_us": 100000, 00:24:14.103 "enable": false 00:24:14.103 } 00:24:14.103 }, 00:24:14.103 { 00:24:14.103 "method": "bdev_malloc_create", 00:24:14.103 "params": { 00:24:14.103 "name": "malloc0", 00:24:14.103 "num_blocks": 8192, 00:24:14.103 "block_size": 4096, 00:24:14.103 "physical_block_size": 4096, 00:24:14.103 "uuid": "1716a8a7-99c3-4bff-8306-9ca1b53c038c", 00:24:14.103 "optimal_io_boundary": 0, 00:24:14.103 "md_size": 0, 00:24:14.103 "dif_type": 0, 00:24:14.103 "dif_is_head_of_md": false, 00:24:14.103 "dif_pi_format": 0 00:24:14.103 } 00:24:14.103 }, 00:24:14.103 { 00:24:14.103 "method": "bdev_wait_for_examine" 00:24:14.103 } 00:24:14.103 ] 00:24:14.103 }, 00:24:14.103 { 00:24:14.103 "subsystem": "nbd", 00:24:14.103 "config": [] 00:24:14.103 }, 00:24:14.103 { 00:24:14.103 "subsystem": "scheduler", 00:24:14.103 "config": [ 00:24:14.103 { 00:24:14.103 "method": "framework_set_scheduler", 00:24:14.103 "params": { 00:24:14.103 "name": "static" 00:24:14.103 } 00:24:14.103 } 00:24:14.103 ] 00:24:14.103 }, 00:24:14.103 { 00:24:14.103 "subsystem": "nvmf", 00:24:14.103 "config": [ 00:24:14.103 { 00:24:14.103 "method": "nvmf_set_config", 00:24:14.103 "params": { 00:24:14.103 "discovery_filter": "match_any", 00:24:14.103 "admin_cmd_passthru": { 00:24:14.103 "identify_ctrlr": false 00:24:14.103 }, 00:24:14.103 "dhchap_digests": [ 00:24:14.103 "sha256", 00:24:14.103 "sha384", 00:24:14.103 "sha512" 00:24:14.103 ], 00:24:14.103 "dhchap_dhgroups": [ 00:24:14.103 "null", 00:24:14.103 "ffdhe2048", 00:24:14.103 "ffdhe3072", 00:24:14.103 "ffdhe4096", 00:24:14.103 "ffdhe6144", 00:24:14.103 "ffdhe8192" 00:24:14.103 ] 00:24:14.103 } 00:24:14.103 }, 00:24:14.103 { 00:24:14.103 "method": "nvmf_set_max_subsystems", 00:24:14.103 "params": { 00:24:14.103 "max_subsystems": 1024 00:24:14.103 } 00:24:14.103 }, 00:24:14.103 { 00:24:14.103 "method": "nvmf_set_crdt", 00:24:14.103 "params": { 00:24:14.103 "crdt1": 0, 00:24:14.103 "crdt2": 0, 00:24:14.103 "crdt3": 0 00:24:14.103 } 00:24:14.103 }, 00:24:14.103 { 00:24:14.103 "method": "nvmf_create_transport", 00:24:14.103 "params": { 00:24:14.103 "trtype": "TCP", 00:24:14.103 "max_queue_depth": 128, 00:24:14.103 "max_io_qpairs_per_ctrlr": 127, 00:24:14.103 "in_capsule_data_size": 4096, 00:24:14.103 "max_io_size": 131072, 00:24:14.103 "io_unit_size": 131072, 00:24:14.103 "max_aq_depth": 128, 00:24:14.103 "num_shared_buffers": 511, 00:24:14.103 "buf_cache_size": 4294967295, 00:24:14.103 "dif_insert_or_strip": false, 00:24:14.103 "zcopy": false, 00:24:14.103 "c2h_success": false, 00:24:14.103 "sock_priority": 0, 00:24:14.103 "abort_timeout_sec": 1, 00:24:14.103 "ack_timeout": 0, 00:24:14.103 "data_wr_pool_size": 0 00:24:14.103 } 00:24:14.103 }, 00:24:14.103 { 00:24:14.103 "method": "nvmf_create_subsystem", 00:24:14.103 "params": { 00:24:14.103 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:14.103 "allow_any_host": false, 00:24:14.103 "serial_number": "SPDK00000000000001", 00:24:14.103 "model_number": "SPDK bdev Controller", 00:24:14.103 "max_namespaces": 10, 00:24:14.103 "min_cntlid": 1, 00:24:14.103 "max_cntlid": 65519, 00:24:14.103 
"ana_reporting": false 00:24:14.103 } 00:24:14.103 }, 00:24:14.103 { 00:24:14.103 "method": "nvmf_subsystem_add_host", 00:24:14.103 "params": { 00:24:14.103 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:14.103 "host": "nqn.2016-06.io.spdk:host1", 00:24:14.103 "psk": "key0" 00:24:14.103 } 00:24:14.103 }, 00:24:14.103 { 00:24:14.103 "method": "nvmf_subsystem_add_ns", 00:24:14.103 "params": { 00:24:14.103 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:14.103 "namespace": { 00:24:14.103 "nsid": 1, 00:24:14.103 "bdev_name": "malloc0", 00:24:14.103 "nguid": "1716A8A799C34BFF83069CA1B53C038C", 00:24:14.103 "uuid": "1716a8a7-99c3-4bff-8306-9ca1b53c038c", 00:24:14.103 "no_auto_visible": false 00:24:14.103 } 00:24:14.103 } 00:24:14.103 }, 00:24:14.103 { 00:24:14.103 "method": "nvmf_subsystem_add_listener", 00:24:14.103 "params": { 00:24:14.103 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:14.103 "listen_address": { 00:24:14.103 "trtype": "TCP", 00:24:14.103 "adrfam": "IPv4", 00:24:14.103 "traddr": "10.0.0.2", 00:24:14.103 "trsvcid": "4420" 00:24:14.103 }, 00:24:14.103 "secure_channel": true 00:24:14.103 } 00:24:14.103 } 00:24:14.103 ] 00:24:14.103 } 00:24:14.103 ] 00:24:14.103 }' 00:24:14.103 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:14.361 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:24:14.361 "subsystems": [ 00:24:14.361 { 00:24:14.361 "subsystem": "keyring", 00:24:14.361 "config": [ 00:24:14.361 { 00:24:14.361 "method": "keyring_file_add_key", 00:24:14.361 "params": { 00:24:14.361 "name": "key0", 00:24:14.361 "path": "/tmp/tmp.Gscv29RW8P" 00:24:14.361 } 00:24:14.361 } 00:24:14.361 ] 00:24:14.361 }, 00:24:14.361 { 00:24:14.361 "subsystem": "iobuf", 00:24:14.361 "config": [ 00:24:14.361 { 00:24:14.361 "method": "iobuf_set_options", 00:24:14.361 "params": { 00:24:14.361 "small_pool_count": 8192, 00:24:14.361 "large_pool_count": 1024, 00:24:14.361 "small_bufsize": 8192, 00:24:14.361 "large_bufsize": 135168 00:24:14.361 } 00:24:14.361 } 00:24:14.361 ] 00:24:14.361 }, 00:24:14.361 { 00:24:14.361 "subsystem": "sock", 00:24:14.361 "config": [ 00:24:14.361 { 00:24:14.361 "method": "sock_set_default_impl", 00:24:14.361 "params": { 00:24:14.361 "impl_name": "posix" 00:24:14.361 } 00:24:14.361 }, 00:24:14.361 { 00:24:14.361 "method": "sock_impl_set_options", 00:24:14.361 "params": { 00:24:14.362 "impl_name": "ssl", 00:24:14.362 "recv_buf_size": 4096, 00:24:14.362 "send_buf_size": 4096, 00:24:14.362 "enable_recv_pipe": true, 00:24:14.362 "enable_quickack": false, 00:24:14.362 "enable_placement_id": 0, 00:24:14.362 "enable_zerocopy_send_server": true, 00:24:14.362 "enable_zerocopy_send_client": false, 00:24:14.362 "zerocopy_threshold": 0, 00:24:14.362 "tls_version": 0, 00:24:14.362 "enable_ktls": false 00:24:14.362 } 00:24:14.362 }, 00:24:14.362 { 00:24:14.362 "method": "sock_impl_set_options", 00:24:14.362 "params": { 00:24:14.362 "impl_name": "posix", 00:24:14.362 "recv_buf_size": 2097152, 00:24:14.362 "send_buf_size": 2097152, 00:24:14.362 "enable_recv_pipe": true, 00:24:14.362 "enable_quickack": false, 00:24:14.362 "enable_placement_id": 0, 00:24:14.362 "enable_zerocopy_send_server": true, 00:24:14.362 "enable_zerocopy_send_client": false, 00:24:14.362 "zerocopy_threshold": 0, 00:24:14.362 "tls_version": 0, 00:24:14.362 "enable_ktls": false 00:24:14.362 } 00:24:14.362 } 00:24:14.362 ] 00:24:14.362 }, 00:24:14.362 { 00:24:14.362 
"subsystem": "vmd", 00:24:14.362 "config": [] 00:24:14.362 }, 00:24:14.362 { 00:24:14.362 "subsystem": "accel", 00:24:14.362 "config": [ 00:24:14.362 { 00:24:14.362 "method": "accel_set_options", 00:24:14.362 "params": { 00:24:14.362 "small_cache_size": 128, 00:24:14.362 "large_cache_size": 16, 00:24:14.362 "task_count": 2048, 00:24:14.362 "sequence_count": 2048, 00:24:14.362 "buf_count": 2048 00:24:14.362 } 00:24:14.362 } 00:24:14.362 ] 00:24:14.362 }, 00:24:14.362 { 00:24:14.362 "subsystem": "bdev", 00:24:14.362 "config": [ 00:24:14.362 { 00:24:14.362 "method": "bdev_set_options", 00:24:14.362 "params": { 00:24:14.362 "bdev_io_pool_size": 65535, 00:24:14.362 "bdev_io_cache_size": 256, 00:24:14.362 "bdev_auto_examine": true, 00:24:14.362 "iobuf_small_cache_size": 128, 00:24:14.362 "iobuf_large_cache_size": 16 00:24:14.362 } 00:24:14.362 }, 00:24:14.362 { 00:24:14.362 "method": "bdev_raid_set_options", 00:24:14.362 "params": { 00:24:14.362 "process_window_size_kb": 1024, 00:24:14.362 "process_max_bandwidth_mb_sec": 0 00:24:14.362 } 00:24:14.362 }, 00:24:14.362 { 00:24:14.362 "method": "bdev_iscsi_set_options", 00:24:14.362 "params": { 00:24:14.362 "timeout_sec": 30 00:24:14.362 } 00:24:14.362 }, 00:24:14.362 { 00:24:14.362 "method": "bdev_nvme_set_options", 00:24:14.362 "params": { 00:24:14.362 "action_on_timeout": "none", 00:24:14.362 "timeout_us": 0, 00:24:14.362 "timeout_admin_us": 0, 00:24:14.362 "keep_alive_timeout_ms": 10000, 00:24:14.362 "arbitration_burst": 0, 00:24:14.362 "low_priority_weight": 0, 00:24:14.362 "medium_priority_weight": 0, 00:24:14.362 "high_priority_weight": 0, 00:24:14.362 "nvme_adminq_poll_period_us": 10000, 00:24:14.362 "nvme_ioq_poll_period_us": 0, 00:24:14.362 "io_queue_requests": 512, 00:24:14.362 "delay_cmd_submit": true, 00:24:14.362 "transport_retry_count": 4, 00:24:14.362 "bdev_retry_count": 3, 00:24:14.362 "transport_ack_timeout": 0, 00:24:14.362 "ctrlr_loss_timeout_sec": 0, 00:24:14.362 "reconnect_delay_sec": 0, 00:24:14.362 "fast_io_fail_timeout_sec": 0, 00:24:14.362 "disable_auto_failback": false, 00:24:14.362 "generate_uuids": false, 00:24:14.362 "transport_tos": 0, 00:24:14.362 "nvme_error_stat": false, 00:24:14.362 "rdma_srq_size": 0, 00:24:14.362 "io_path_stat": false, 00:24:14.362 "allow_accel_sequence": false, 00:24:14.362 "rdma_max_cq_size": 0, 00:24:14.362 "rdma_cm_event_timeout_ms": 0, 00:24:14.362 "dhchap_digests": [ 00:24:14.362 "sha256", 00:24:14.362 "sha384", 00:24:14.362 "sha512" 00:24:14.362 ], 00:24:14.362 "dhchap_dhgroups": [ 00:24:14.362 "null", 00:24:14.362 "ffdhe2048", 00:24:14.362 "ffdhe3072", 00:24:14.362 "ffdhe4096", 00:24:14.362 "ffdhe6144", 00:24:14.362 "ffdhe8192" 00:24:14.362 ] 00:24:14.362 } 00:24:14.362 }, 00:24:14.362 { 00:24:14.362 "method": "bdev_nvme_attach_controller", 00:24:14.362 "params": { 00:24:14.362 "name": "TLSTEST", 00:24:14.362 "trtype": "TCP", 00:24:14.362 "adrfam": "IPv4", 00:24:14.362 "traddr": "10.0.0.2", 00:24:14.362 "trsvcid": "4420", 00:24:14.362 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:14.362 "prchk_reftag": false, 00:24:14.362 "prchk_guard": false, 00:24:14.362 "ctrlr_loss_timeout_sec": 0, 00:24:14.362 "reconnect_delay_sec": 0, 00:24:14.362 "fast_io_fail_timeout_sec": 0, 00:24:14.362 "psk": "key0", 00:24:14.362 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:14.362 "hdgst": false, 00:24:14.362 "ddgst": false, 00:24:14.362 "multipath": "multipath" 00:24:14.362 } 00:24:14.362 }, 00:24:14.362 { 00:24:14.362 "method": "bdev_nvme_set_hotplug", 00:24:14.362 "params": { 00:24:14.362 "period_us": 
100000, 00:24:14.362 "enable": false 00:24:14.362 } 00:24:14.362 }, 00:24:14.362 { 00:24:14.362 "method": "bdev_wait_for_examine" 00:24:14.362 } 00:24:14.362 ] 00:24:14.362 }, 00:24:14.362 { 00:24:14.362 "subsystem": "nbd", 00:24:14.362 "config": [] 00:24:14.362 } 00:24:14.362 ] 00:24:14.362 }' 00:24:14.362 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1640673 00:24:14.362 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1640673 ']' 00:24:14.362 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1640673 00:24:14.362 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:14.362 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:14.362 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1640673 00:24:14.362 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:14.362 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:14.362 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1640673' 00:24:14.362 killing process with pid 1640673 00:24:14.362 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1640673 00:24:14.362 Received shutdown signal, test time was about 10.000000 seconds 00:24:14.362 00:24:14.362 Latency(us) 00:24:14.362 [2024-10-12T23:34:59.940Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:14.362 [2024-10-12T23:34:59.940Z] =================================================================================================================== 00:24:14.362 [2024-10-12T23:34:59.940Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:14.362 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1640673 00:24:14.619 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1640389 00:24:14.619 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1640389 ']' 00:24:14.619 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1640389 00:24:14.619 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:14.619 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:14.619 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1640389 00:24:14.619 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:14.619 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:14.619 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1640389' 00:24:14.619 killing process with pid 1640389 00:24:14.619 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1640389 00:24:14.619 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1640389 00:24:14.878 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:24:14.878 
01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:14.878 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:24:14.878 "subsystems": [ 00:24:14.878 { 00:24:14.878 "subsystem": "keyring", 00:24:14.878 "config": [ 00:24:14.878 { 00:24:14.878 "method": "keyring_file_add_key", 00:24:14.878 "params": { 00:24:14.878 "name": "key0", 00:24:14.878 "path": "/tmp/tmp.Gscv29RW8P" 00:24:14.878 } 00:24:14.878 } 00:24:14.878 ] 00:24:14.878 }, 00:24:14.878 { 00:24:14.878 "subsystem": "iobuf", 00:24:14.878 "config": [ 00:24:14.878 { 00:24:14.878 "method": "iobuf_set_options", 00:24:14.878 "params": { 00:24:14.878 "small_pool_count": 8192, 00:24:14.878 "large_pool_count": 1024, 00:24:14.878 "small_bufsize": 8192, 00:24:14.878 "large_bufsize": 135168 00:24:14.878 } 00:24:14.878 } 00:24:14.878 ] 00:24:14.878 }, 00:24:14.878 { 00:24:14.878 "subsystem": "sock", 00:24:14.878 "config": [ 00:24:14.878 { 00:24:14.878 "method": "sock_set_default_impl", 00:24:14.878 "params": { 00:24:14.878 "impl_name": "posix" 00:24:14.878 } 00:24:14.878 }, 00:24:14.878 { 00:24:14.878 "method": "sock_impl_set_options", 00:24:14.878 "params": { 00:24:14.878 "impl_name": "ssl", 00:24:14.878 "recv_buf_size": 4096, 00:24:14.878 "send_buf_size": 4096, 00:24:14.878 "enable_recv_pipe": true, 00:24:14.878 "enable_quickack": false, 00:24:14.878 "enable_placement_id": 0, 00:24:14.878 "enable_zerocopy_send_server": true, 00:24:14.878 "enable_zerocopy_send_client": false, 00:24:14.878 "zerocopy_threshold": 0, 00:24:14.878 "tls_version": 0, 00:24:14.878 "enable_ktls": false 00:24:14.878 } 00:24:14.878 }, 00:24:14.878 { 00:24:14.878 "method": "sock_impl_set_options", 00:24:14.878 "params": { 00:24:14.878 "impl_name": "posix", 00:24:14.878 "recv_buf_size": 2097152, 00:24:14.878 "send_buf_size": 2097152, 00:24:14.878 "enable_recv_pipe": true, 00:24:14.878 "enable_quickack": false, 00:24:14.878 "enable_placement_id": 0, 00:24:14.878 "enable_zerocopy_send_server": true, 00:24:14.878 "enable_zerocopy_send_client": false, 00:24:14.878 "zerocopy_threshold": 0, 00:24:14.878 "tls_version": 0, 00:24:14.878 "enable_ktls": false 00:24:14.878 } 00:24:14.878 } 00:24:14.878 ] 00:24:14.878 }, 00:24:14.878 { 00:24:14.878 "subsystem": "vmd", 00:24:14.878 "config": [] 00:24:14.878 }, 00:24:14.878 { 00:24:14.878 "subsystem": "accel", 00:24:14.878 "config": [ 00:24:14.878 { 00:24:14.878 "method": "accel_set_options", 00:24:14.878 "params": { 00:24:14.878 "small_cache_size": 128, 00:24:14.878 "large_cache_size": 16, 00:24:14.878 "task_count": 2048, 00:24:14.878 "sequence_count": 2048, 00:24:14.878 "buf_count": 2048 00:24:14.878 } 00:24:14.878 } 00:24:14.878 ] 00:24:14.878 }, 00:24:14.878 { 00:24:14.878 "subsystem": "bdev", 00:24:14.878 "config": [ 00:24:14.878 { 00:24:14.878 "method": "bdev_set_options", 00:24:14.878 "params": { 00:24:14.878 "bdev_io_pool_size": 65535, 00:24:14.878 "bdev_io_cache_size": 256, 00:24:14.878 "bdev_auto_examine": true, 00:24:14.878 "iobuf_small_cache_size": 128, 00:24:14.878 "iobuf_large_cache_size": 16 00:24:14.878 } 00:24:14.878 }, 00:24:14.878 { 00:24:14.878 "method": "bdev_raid_set_options", 00:24:14.878 "params": { 00:24:14.878 "process_window_size_kb": 1024, 00:24:14.878 "process_max_bandwidth_mb_sec": 0 00:24:14.878 } 00:24:14.878 }, 00:24:14.878 { 00:24:14.878 "method": "bdev_iscsi_set_options", 00:24:14.878 "params": { 00:24:14.878 "timeout_sec": 30 00:24:14.878 } 00:24:14.878 }, 00:24:14.878 { 00:24:14.878 "method": "bdev_nvme_set_options", 
00:24:14.878 "params": { 00:24:14.878 "action_on_timeout": "none", 00:24:14.878 "timeout_us": 0, 00:24:14.878 "timeout_admin_us": 0, 00:24:14.878 "keep_alive_timeout_ms": 10000, 00:24:14.878 "arbitration_burst": 0, 00:24:14.878 "low_priority_weight": 0, 00:24:14.878 "medium_priority_weight": 0, 00:24:14.878 "high_priority_weight": 0, 00:24:14.878 "nvme_adminq_poll_period_us": 10000, 00:24:14.878 "nvme_ioq_poll_period_us": 0, 00:24:14.878 "io_queue_requests": 0, 00:24:14.878 "delay_cmd_submit": true, 00:24:14.878 "transport_retry_count": 4, 00:24:14.878 "bdev_retry_count": 3, 00:24:14.878 "transport_ack_timeout": 0, 00:24:14.878 "ctrlr_loss_timeout_sec": 0, 00:24:14.878 "reconnect_delay_sec": 0, 00:24:14.878 "fast_io_fail_timeout_sec": 0, 00:24:14.878 "disable_auto_failback": false, 00:24:14.878 "generate_uuids": false, 00:24:14.878 "transport_tos": 0, 00:24:14.878 "nvme_error_stat": false, 00:24:14.878 "rdma_srq_size": 0, 00:24:14.878 "io_path_stat": false, 00:24:14.878 "allow_accel_sequence": false, 00:24:14.878 "rdma_max_cq_size": 0, 00:24:14.878 "rdma_cm_event_timeout_ms": 0, 00:24:14.878 "dhchap_digests": [ 00:24:14.878 "sha256", 00:24:14.878 "sha384", 00:24:14.878 "sha512" 00:24:14.878 ], 00:24:14.878 "dhchap_dhgroups": [ 00:24:14.878 "null", 00:24:14.878 "ffdhe2048", 00:24:14.878 "ffdhe3072", 00:24:14.878 "ffdhe4096", 00:24:14.878 "ffdhe6144", 00:24:14.878 "ffdhe8192" 00:24:14.878 ] 00:24:14.878 } 00:24:14.878 }, 00:24:14.879 { 00:24:14.879 "method": "bdev_nvme_set_hotplug", 00:24:14.879 "params": { 00:24:14.879 "period_us": 100000, 00:24:14.879 "enable": false 00:24:14.879 } 00:24:14.879 }, 00:24:14.879 { 00:24:14.879 "method": "bdev_malloc_create", 00:24:14.879 "params": { 00:24:14.879 "name": "malloc0", 00:24:14.879 "num_blocks": 8192, 00:24:14.879 "block_size": 4096, 00:24:14.879 "physical_block_size": 4096, 00:24:14.879 "uuid": "1716a8a7-99c3-4bff-8306-9ca1b53c038c", 00:24:14.879 "optimal_io_boundary": 0, 00:24:14.879 "md_size": 0, 00:24:14.879 "dif_type": 0, 00:24:14.879 "dif_is_head_of_md": false, 00:24:14.879 "dif_pi_format": 0 00:24:14.879 } 00:24:14.879 }, 00:24:14.879 { 00:24:14.879 "method": "bdev_wait_for_examine" 00:24:14.879 } 00:24:14.879 ] 00:24:14.879 }, 00:24:14.879 { 00:24:14.879 "subsystem": "nbd", 00:24:14.879 "config": [] 00:24:14.879 }, 00:24:14.879 { 00:24:14.879 "subsystem": "scheduler", 00:24:14.879 "config": [ 00:24:14.879 { 00:24:14.879 "method": "framework_set_scheduler", 00:24:14.879 "params": { 00:24:14.879 "name": "static" 00:24:14.879 } 00:24:14.879 } 00:24:14.879 ] 00:24:14.879 }, 00:24:14.879 { 00:24:14.879 "subsystem": "nvmf", 00:24:14.879 "config": [ 00:24:14.879 { 00:24:14.879 "method": "nvmf_set_config", 00:24:14.879 "params": { 00:24:14.879 "discovery_filter": "match_any", 00:24:14.879 "admin_cmd_passthru": { 00:24:14.879 "identify_ctrlr": false 00:24:14.879 }, 00:24:14.879 "dhchap_digests": [ 00:24:14.879 "sha256", 00:24:14.879 "sha384", 00:24:14.879 "sha512" 00:24:14.879 ], 00:24:14.879 "dhchap_dhgroups": [ 00:24:14.879 "null", 00:24:14.879 "ffdhe2048", 00:24:14.879 "ffdhe3072", 00:24:14.879 "ffdhe4096", 00:24:14.879 "ffdhe6144", 00:24:14.879 "ffdhe8192" 00:24:14.879 ] 00:24:14.879 } 00:24:14.879 }, 00:24:14.879 { 00:24:14.879 "method": "nvmf_set_max_subsystems", 00:24:14.879 "params": { 00:24:14.879 "max_subsystems": 1024 00:24:14.879 } 00:24:14.879 }, 00:24:14.879 { 00:24:14.879 "method": "nvmf_set_crdt", 00:24:14.879 "params": { 00:24:14.879 "crdt1": 0, 00:24:14.879 "crdt2": 0, 00:24:14.879 "crdt3": 0 00:24:14.879 } 00:24:14.879 }, 
00:24:14.879 { 00:24:14.879 "method": "nvmf_create_transport", 00:24:14.879 "params": { 00:24:14.879 "trtype": "TCP", 00:24:14.879 "max_queue_depth": 128, 00:24:14.879 "max_io_qpairs_per_ctrlr": 127, 00:24:14.879 "in_capsule_data_size": 4096, 00:24:14.879 "max_io_size": 131072, 00:24:14.879 "io_unit_size": 131072, 00:24:14.879 "max_aq_depth": 128, 00:24:14.879 "num_shared_buffers": 511, 00:24:14.879 "buf_cache_size": 4294967295, 00:24:14.879 "dif_insert_or_strip": false, 00:24:14.879 "zcopy": false, 00:24:14.879 "c2h_success": false, 00:24:14.879 "sock_priority": 0, 00:24:14.879 "abort_timeout_sec": 1, 00:24:14.879 "ack_timeout": 0, 00:24:14.879 "data_wr_pool_size": 0 00:24:14.879 } 00:24:14.879 }, 00:24:14.879 { 00:24:14.879 "method": "nvmf_create_subsystem", 00:24:14.879 "params": { 00:24:14.879 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:14.879 "allow_any_host": false, 00:24:14.879 "serial_number": "SPDK00000000000001", 00:24:14.879 "model_number": "SPDK bdev Controller", 00:24:14.879 "max_namespaces": 10, 00:24:14.879 "min_cntlid": 1, 00:24:14.879 "max_cntlid": 65519, 00:24:14.879 "ana_reporting": false 00:24:14.879 } 00:24:14.879 }, 00:24:14.879 { 00:24:14.879 "method": "nvmf_subsystem_add_host", 00:24:14.879 "params": { 00:24:14.879 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:14.879 "host": "nqn.2016-06.io.spdk:host1", 00:24:14.879 "psk": "key0" 00:24:14.879 } 00:24:14.879 }, 00:24:14.879 { 00:24:14.879 "method": "nvmf_subsystem_add_ns", 00:24:14.879 "params": { 00:24:14.879 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:14.879 "namespace": { 00:24:14.879 "nsid": 1, 00:24:14.879 "bdev_name": "malloc0", 00:24:14.879 "nguid": "1716A8A799C34BFF83069CA1B53C038C", 00:24:14.879 "uuid": "1716a8a7-99c3-4bff-8306-9ca1b53c038c", 00:24:14.879 "no_auto_visible": false 00:24:14.879 } 00:24:14.879 } 00:24:14.879 }, 00:24:14.879 { 00:24:14.879 "method": "nvmf_subsystem_add_listener", 00:24:14.879 "params": { 00:24:14.879 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:14.879 "listen_address": { 00:24:14.879 "trtype": "TCP", 00:24:14.879 "adrfam": "IPv4", 00:24:14.879 "traddr": "10.0.0.2", 00:24:14.879 "trsvcid": "4420" 00:24:14.879 }, 00:24:14.879 "secure_channel": true 00:24:14.879 } 00:24:14.879 } 00:24:14.879 ] 00:24:14.879 } 00:24:14.879 ] 00:24:14.879 }' 00:24:14.879 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:14.879 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:14.879 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1640848 00:24:14.879 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:24:14.879 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1640848 00:24:14.879 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1640848 ']' 00:24:14.879 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:14.879 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:14.879 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:14.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:14.879 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:14.879 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:14.879 [2024-10-13 01:35:00.361420] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:24:14.879 [2024-10-13 01:35:00.361535] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:14.879 [2024-10-13 01:35:00.434038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.138 [2024-10-13 01:35:00.482271] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:15.138 [2024-10-13 01:35:00.482327] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:15.138 [2024-10-13 01:35:00.482351] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:15.138 [2024-10-13 01:35:00.482365] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:15.138 [2024-10-13 01:35:00.482377] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:15.138 [2024-10-13 01:35:00.483073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:15.396 [2024-10-13 01:35:00.728081] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:15.396 [2024-10-13 01:35:00.760081] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:15.396 [2024-10-13 01:35:00.760390] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:15.962 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:15.962 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:15.962 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:15.962 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:15.962 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:15.962 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:15.962 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1640989 00:24:15.962 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1640989 /var/tmp/bdevperf.sock 00:24:15.962 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1640989 ']' 00:24:15.962 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:15.962 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:24:15.962 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:15.962 01:35:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:24:15.962 "subsystems": [ 00:24:15.962 { 00:24:15.962 "subsystem": "keyring", 00:24:15.962 "config": [ 00:24:15.962 { 00:24:15.962 "method": "keyring_file_add_key", 00:24:15.962 "params": { 00:24:15.962 "name": "key0", 00:24:15.962 "path": "/tmp/tmp.Gscv29RW8P" 00:24:15.962 } 00:24:15.962 } 00:24:15.962 ] 00:24:15.962 }, 00:24:15.962 { 00:24:15.962 "subsystem": "iobuf", 00:24:15.962 "config": [ 00:24:15.962 { 00:24:15.962 "method": "iobuf_set_options", 00:24:15.962 "params": { 00:24:15.962 "small_pool_count": 8192, 00:24:15.962 "large_pool_count": 1024, 00:24:15.962 "small_bufsize": 8192, 00:24:15.962 "large_bufsize": 135168 00:24:15.962 } 00:24:15.962 } 00:24:15.962 ] 00:24:15.962 }, 00:24:15.962 { 00:24:15.962 "subsystem": "sock", 00:24:15.962 "config": [ 00:24:15.962 { 00:24:15.962 "method": "sock_set_default_impl", 00:24:15.962 "params": { 00:24:15.962 "impl_name": "posix" 00:24:15.962 } 00:24:15.962 }, 00:24:15.962 { 00:24:15.962 "method": "sock_impl_set_options", 00:24:15.962 "params": { 00:24:15.962 "impl_name": "ssl", 00:24:15.962 "recv_buf_size": 4096, 00:24:15.962 "send_buf_size": 4096, 00:24:15.962 "enable_recv_pipe": true, 00:24:15.962 "enable_quickack": false, 00:24:15.962 "enable_placement_id": 0, 00:24:15.962 "enable_zerocopy_send_server": true, 00:24:15.962 "enable_zerocopy_send_client": false, 00:24:15.962 "zerocopy_threshold": 0, 00:24:15.962 "tls_version": 0, 00:24:15.962 "enable_ktls": false 00:24:15.962 } 00:24:15.962 }, 00:24:15.962 { 00:24:15.962 "method": "sock_impl_set_options", 00:24:15.962 "params": { 00:24:15.962 "impl_name": "posix", 00:24:15.962 "recv_buf_size": 2097152, 00:24:15.962 "send_buf_size": 2097152, 00:24:15.962 "enable_recv_pipe": true, 00:24:15.962 "enable_quickack": false, 00:24:15.962 "enable_placement_id": 0, 00:24:15.962 "enable_zerocopy_send_server": true, 00:24:15.962 "enable_zerocopy_send_client": false, 00:24:15.962 "zerocopy_threshold": 0, 00:24:15.962 "tls_version": 0, 00:24:15.962 "enable_ktls": false 00:24:15.962 } 00:24:15.962 } 00:24:15.962 ] 00:24:15.962 }, 00:24:15.962 { 00:24:15.962 "subsystem": "vmd", 00:24:15.962 "config": [] 00:24:15.962 }, 00:24:15.962 { 00:24:15.962 "subsystem": "accel", 00:24:15.962 "config": [ 00:24:15.962 { 00:24:15.962 "method": "accel_set_options", 00:24:15.962 "params": { 00:24:15.962 "small_cache_size": 128, 00:24:15.962 "large_cache_size": 16, 00:24:15.962 "task_count": 2048, 00:24:15.962 "sequence_count": 2048, 00:24:15.962 "buf_count": 2048 00:24:15.962 } 00:24:15.962 } 00:24:15.962 ] 00:24:15.962 }, 00:24:15.962 { 00:24:15.962 "subsystem": "bdev", 00:24:15.962 "config": [ 00:24:15.962 { 00:24:15.962 "method": "bdev_set_options", 00:24:15.962 "params": { 00:24:15.962 "bdev_io_pool_size": 65535, 00:24:15.962 "bdev_io_cache_size": 256, 00:24:15.962 "bdev_auto_examine": true, 00:24:15.962 "iobuf_small_cache_size": 128, 00:24:15.962 "iobuf_large_cache_size": 16 00:24:15.962 } 00:24:15.962 }, 00:24:15.962 { 00:24:15.962 "method": "bdev_raid_set_options", 00:24:15.962 "params": { 00:24:15.962 "process_window_size_kb": 1024, 00:24:15.962 "process_max_bandwidth_mb_sec": 0 00:24:15.962 } 00:24:15.962 }, 00:24:15.962 { 00:24:15.962 "method": "bdev_iscsi_set_options", 00:24:15.962 "params": { 00:24:15.962 "timeout_sec": 30 00:24:15.962 } 00:24:15.962 }, 00:24:15.962 { 00:24:15.962 "method": "bdev_nvme_set_options", 00:24:15.962 "params": { 00:24:15.962 "action_on_timeout": "none", 00:24:15.962 "timeout_us": 0, 00:24:15.962 
"timeout_admin_us": 0, 00:24:15.962 "keep_alive_timeout_ms": 10000, 00:24:15.962 "arbitration_burst": 0, 00:24:15.962 "low_priority_weight": 0, 00:24:15.962 "medium_priority_weight": 0, 00:24:15.962 "high_priority_weight": 0, 00:24:15.962 "nvme_adminq_poll_period_us": 10000, 00:24:15.962 "nvme_ioq_poll_period_us": 0, 00:24:15.962 "io_queue_requests": 512, 00:24:15.962 "delay_cmd_submit": true, 00:24:15.962 "transport_retry_count": 4, 00:24:15.962 "bdev_retry_count": 3, 00:24:15.962 "transport_ack_timeout": 0, 00:24:15.962 "ctrlr_loss_timeout_sec": 0, 00:24:15.962 "reconnect_delay_sec": 0, 00:24:15.962 "fast_io_fail_timeout_sec": 0, 00:24:15.962 "disable_auto_failback": false, 00:24:15.962 "generate_uuids": false, 00:24:15.962 "transport_tos": 0, 00:24:15.962 "nvme_error_stat": false, 00:24:15.962 "rdma_srq_size": 0, 00:24:15.962 "io_path_stat": false, 00:24:15.962 "allow_accel_sequence": false, 00:24:15.962 "rdma_max_cq_size": 0, 00:24:15.962 "rdma_cm_event_timeout_ms": 0, 00:24:15.962 "dhchap_digests": [ 00:24:15.962 "sha256", 00:24:15.962 "sha384", 00:24:15.962 "sha512" 00:24:15.962 ], 00:24:15.962 "dhchap_dhgroups": [ 00:24:15.962 "null", 00:24:15.962 "ffdhe2048", 00:24:15.962 "ffdhe3072", 00:24:15.962 "ffdhe4096", 00:24:15.962 "ffdhe6144", 00:24:15.962 "ffdhe8192" 00:24:15.962 ] 00:24:15.962 } 00:24:15.962 }, 00:24:15.962 { 00:24:15.962 "method": "bdev_nvme_attach_controller", 00:24:15.962 "params": { 00:24:15.962 "name": "TLSTEST", 00:24:15.962 "trtype": "TCP", 00:24:15.962 "adrfam": "IPv4", 00:24:15.962 "traddr": "10.0.0.2", 00:24:15.962 "trsvcid": "4420", 00:24:15.962 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:15.962 "prchk_reftag": false, 00:24:15.962 "prchk_guard": false, 00:24:15.962 "ctrlr_loss_timeout_sec": 0, 00:24:15.962 "reconnect_delay_sec": 0, 00:24:15.962 "fast_io_fail_timeout_sec": 0, 00:24:15.962 "psk": "key0", 00:24:15.962 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:15.962 "hdgst": false, 00:24:15.962 "ddgst": false, 00:24:15.962 "multipath": "multipath" 00:24:15.962 } 00:24:15.962 }, 00:24:15.962 { 00:24:15.962 "method": "bdev_nvme_set_hotplug", 00:24:15.962 "params": { 00:24:15.962 "period_us": 100000, 00:24:15.962 "enable": false 00:24:15.962 } 00:24:15.962 }, 00:24:15.962 { 00:24:15.962 "method": "bdev_wait_for_examine" 00:24:15.962 } 00:24:15.962 ] 00:24:15.962 }, 00:24:15.962 { 00:24:15.962 "subsystem": "nbd", 00:24:15.962 "config": [] 00:24:15.962 } 00:24:15.962 ] 00:24:15.963 }' 00:24:15.963 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:15.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:15.963 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:15.963 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:15.963 [2024-10-13 01:35:01.405862] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
00:24:15.963 [2024-10-13 01:35:01.405958] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1640989 ] 00:24:15.963 [2024-10-13 01:35:01.465922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.963 [2024-10-13 01:35:01.512435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:16.220 [2024-10-13 01:35:01.682788] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:16.220 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:16.220 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:16.478 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:16.478 Running I/O for 10 seconds... 00:24:18.349 3452.00 IOPS, 13.48 MiB/s [2024-10-12T23:35:05.303Z] 3514.50 IOPS, 13.73 MiB/s [2024-10-12T23:35:06.234Z] 3535.00 IOPS, 13.81 MiB/s [2024-10-12T23:35:07.173Z] 3498.75 IOPS, 13.67 MiB/s [2024-10-12T23:35:08.104Z] 3503.40 IOPS, 13.69 MiB/s [2024-10-12T23:35:09.065Z] 3475.83 IOPS, 13.58 MiB/s [2024-10-12T23:35:09.998Z] 3485.71 IOPS, 13.62 MiB/s [2024-10-12T23:35:10.932Z] 3495.88 IOPS, 13.66 MiB/s [2024-10-12T23:35:12.306Z] 3505.56 IOPS, 13.69 MiB/s [2024-10-12T23:35:12.306Z] 3505.10 IOPS, 13.69 MiB/s 00:24:26.728 Latency(us) 00:24:26.728 [2024-10-12T23:35:12.306Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:26.728 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:26.728 Verification LBA range: start 0x0 length 0x2000 00:24:26.728 TLSTESTn1 : 10.02 3510.99 13.71 0.00 0.00 36398.29 7524.50 30874.74 00:24:26.728 [2024-10-12T23:35:12.306Z] =================================================================================================================== 00:24:26.728 [2024-10-12T23:35:12.306Z] Total : 3510.99 13.71 0.00 0.00 36398.29 7524.50 30874.74 00:24:26.728 { 00:24:26.728 "results": [ 00:24:26.728 { 00:24:26.728 "job": "TLSTESTn1", 00:24:26.728 "core_mask": "0x4", 00:24:26.728 "workload": "verify", 00:24:26.728 "status": "finished", 00:24:26.728 "verify_range": { 00:24:26.728 "start": 0, 00:24:26.728 "length": 8192 00:24:26.728 }, 00:24:26.728 "queue_depth": 128, 00:24:26.728 "io_size": 4096, 00:24:26.728 "runtime": 10.019402, 00:24:26.728 "iops": 3510.987981119033, 00:24:26.728 "mibps": 13.714796801246223, 00:24:26.728 "io_failed": 0, 00:24:26.728 "io_timeout": 0, 00:24:26.728 "avg_latency_us": 36398.286448222054, 00:24:26.728 "min_latency_us": 7524.503703703704, 00:24:26.728 "max_latency_us": 30874.737777777777 00:24:26.728 } 00:24:26.729 ], 00:24:26.729 "core_count": 1 00:24:26.729 } 00:24:26.729 01:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:26.729 01:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1640989 00:24:26.729 01:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1640989 ']' 00:24:26.729 01:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1640989 00:24:26.729 01:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # uname 00:24:26.729 01:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:26.729 01:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1640989 00:24:26.729 01:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:26.729 01:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:26.729 01:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1640989' 00:24:26.729 killing process with pid 1640989 00:24:26.729 01:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1640989 00:24:26.729 Received shutdown signal, test time was about 10.000000 seconds 00:24:26.729 00:24:26.729 Latency(us) 00:24:26.729 [2024-10-12T23:35:12.307Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:26.729 [2024-10-12T23:35:12.307Z] =================================================================================================================== 00:24:26.729 [2024-10-12T23:35:12.307Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:26.729 01:35:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1640989 00:24:26.729 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1640848 00:24:26.729 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1640848 ']' 00:24:26.729 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1640848 00:24:26.729 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:26.729 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:26.729 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1640848 00:24:26.729 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:26.729 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:26.729 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1640848' 00:24:26.729 killing process with pid 1640848 00:24:26.729 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1640848 00:24:26.729 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1640848 00:24:26.987 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:24:26.987 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:26.987 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:26.987 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:26.987 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:26.987 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1642308 00:24:26.987 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1642308 
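The 10-second verify run above settles at roughly 3505 IOPS / 13.69 MiB/s, and perform_tests prints the same numbers again as the JSON "results" array shown in the log. As a small, hedged post-processing sketch (jq is not used by the test itself, and the capture file name is a placeholder), the headline figures can be pulled out of such a capture like this:

    # Illustrative only: extract IOPS and mean latency from a saved
    # perform_tests result shaped like the JSON printed above.
    jq -r '.results[] | "\(.job): \(.iops|round) IOPS, avg \(.avg_latency_us|round) us"' \
        /tmp/bdevperf_results.json
    # -> TLSTESTn1: 3511 IOPS, avg 36398 us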
00:24:26.987 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1642308 ']' 00:24:26.987 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:26.987 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:26.987 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:26.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:26.987 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:26.987 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:26.987 [2024-10-13 01:35:12.496113] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:24:26.987 [2024-10-13 01:35:12.496203] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:26.987 [2024-10-13 01:35:12.563613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.245 [2024-10-13 01:35:12.608610] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:27.245 [2024-10-13 01:35:12.608679] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:27.245 [2024-10-13 01:35:12.608706] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:27.245 [2024-10-13 01:35:12.608720] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:27.245 [2024-10-13 01:35:12.608731] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:27.245 [2024-10-13 01:35:12.609369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:27.245 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:27.245 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:27.245 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:27.245 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:27.245 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:27.245 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:27.245 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.Gscv29RW8P 00:24:27.245 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Gscv29RW8P 00:24:27.245 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:27.503 [2024-10-13 01:35:13.002115] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:27.503 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:27.761 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:28.018 [2024-10-13 01:35:13.543568] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:28.018 [2024-10-13 01:35:13.543853] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:28.018 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:28.276 malloc0 00:24:28.276 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:28.842 01:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Gscv29RW8P 00:24:29.099 01:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:29.358 01:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1642594 00:24:29.358 01:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:29.358 01:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:29.358 01:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1642594 /var/tmp/bdevperf.sock 00:24:29.358 01:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 1642594 ']' 00:24:29.358 01:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:29.358 01:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:29.358 01:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:29.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:29.358 01:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:29.358 01:35:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:29.358 [2024-10-13 01:35:14.819299] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:24:29.358 [2024-10-13 01:35:14.819394] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1642594 ] 00:24:29.358 [2024-10-13 01:35:14.881040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.358 [2024-10-13 01:35:14.932570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:29.616 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:29.616 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:29.616 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Gscv29RW8P 00:24:29.873 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:30.131 [2024-10-13 01:35:15.579161] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:30.131 nvme0n1 00:24:30.131 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:30.389 Running I/O for 1 seconds... 
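Before this second run, the lines above (target/tls.sh@52-59 and @229-230) build the TLS-enabled target over rpc.py and then point a fresh bdevperf at it with the same key. Condensed into one hedged sequence for readability: the addresses, NQNs, flags and key path are copied verbatim from the log, while the short RPC_PY shorthand standing in for the full jenkins workspace path is an assumption made here for brevity.

    RPC_PY=scripts/rpc.py        # stands in for the full workspace path used above
    KEY=/tmp/tmp.Gscv29RW8P

    # target side (default /var/tmp/spdk.sock)
    $RPC_PY nvmf_create_transport -t tcp -o
    $RPC_PY nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC_PY nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $RPC_PY bdev_malloc_create 32 4096 -b malloc0
    $RPC_PY nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC_PY keyring_file_add_key key0 $KEY
    $RPC_PY nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

    # initiator side (bdevperf already started with -z -r /var/tmp/bdevperf.sock)
    $RPC_PY -s /var/tmp/bdevperf.sock keyring_file_add_key key0 $KEY
    $RPC_PY -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests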
00:24:31.322 3003.00 IOPS, 11.73 MiB/s 00:24:31.322 Latency(us) 00:24:31.322 [2024-10-12T23:35:16.900Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:31.322 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:31.322 Verification LBA range: start 0x0 length 0x2000 00:24:31.322 nvme0n1 : 1.03 3034.80 11.85 0.00 0.00 41628.12 9175.04 54758.97 00:24:31.322 [2024-10-12T23:35:16.900Z] =================================================================================================================== 00:24:31.322 [2024-10-12T23:35:16.900Z] Total : 3034.80 11.85 0.00 0.00 41628.12 9175.04 54758.97 00:24:31.322 { 00:24:31.322 "results": [ 00:24:31.322 { 00:24:31.322 "job": "nvme0n1", 00:24:31.322 "core_mask": "0x2", 00:24:31.322 "workload": "verify", 00:24:31.322 "status": "finished", 00:24:31.322 "verify_range": { 00:24:31.322 "start": 0, 00:24:31.322 "length": 8192 00:24:31.322 }, 00:24:31.322 "queue_depth": 128, 00:24:31.322 "io_size": 4096, 00:24:31.322 "runtime": 1.032029, 00:24:31.322 "iops": 3034.7984407414906, 00:24:31.322 "mibps": 11.854681409146448, 00:24:31.322 "io_failed": 0, 00:24:31.322 "io_timeout": 0, 00:24:31.322 "avg_latency_us": 41628.1240357599, 00:24:31.322 "min_latency_us": 9175.04, 00:24:31.322 "max_latency_us": 54758.96888888889 00:24:31.322 } 00:24:31.322 ], 00:24:31.322 "core_count": 1 00:24:31.322 } 00:24:31.322 01:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1642594 00:24:31.322 01:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1642594 ']' 00:24:31.322 01:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1642594 00:24:31.322 01:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:31.322 01:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:31.322 01:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1642594 00:24:31.322 01:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:31.322 01:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:31.322 01:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1642594' 00:24:31.322 killing process with pid 1642594 00:24:31.322 01:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1642594 00:24:31.322 Received shutdown signal, test time was about 1.000000 seconds 00:24:31.322 00:24:31.322 Latency(us) 00:24:31.322 [2024-10-12T23:35:16.900Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:31.322 [2024-10-12T23:35:16.900Z] =================================================================================================================== 00:24:31.322 [2024-10-12T23:35:16.900Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:31.322 01:35:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1642594 00:24:31.626 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1642308 00:24:31.626 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1642308 ']' 00:24:31.626 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1642308 00:24:31.626 01:35:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:31.626 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:31.626 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1642308 00:24:31.626 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:31.626 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:31.626 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1642308' 00:24:31.626 killing process with pid 1642308 00:24:31.626 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1642308 00:24:31.626 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1642308 00:24:31.905 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:24:31.905 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:31.905 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:31.905 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:31.905 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1642879 00:24:31.905 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:31.905 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1642879 00:24:31.905 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1642879 ']' 00:24:31.905 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:31.905 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:31.905 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:31.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:31.905 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:31.905 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:31.905 [2024-10-13 01:35:17.319115] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:24:31.905 [2024-10-13 01:35:17.319213] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:31.905 [2024-10-13 01:35:17.385974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.905 [2024-10-13 01:35:17.437029] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:31.905 [2024-10-13 01:35:17.437094] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:31.905 [2024-10-13 01:35:17.437110] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:31.905 [2024-10-13 01:35:17.437124] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:31.905 [2024-10-13 01:35:17.437135] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:31.905 [2024-10-13 01:35:17.437797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:32.163 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:32.163 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:32.163 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:32.163 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:32.163 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:32.163 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:32.163 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:24:32.163 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.163 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:32.163 [2024-10-13 01:35:17.624055] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:32.163 malloc0 00:24:32.163 [2024-10-13 01:35:17.656448] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:32.163 [2024-10-13 01:35:17.656745] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:32.163 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.163 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1643018 00:24:32.163 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 1643018 /var/tmp/bdevperf.sock 00:24:32.163 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1643018 ']' 00:24:32.163 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:32.163 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:32.163 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:32.163 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:32.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:32.163 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:32.163 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:32.163 [2024-10-13 01:35:17.730525] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
00:24:32.163 [2024-10-13 01:35:17.730601] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1643018 ] 00:24:32.421 [2024-10-13 01:35:17.788661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.421 [2024-10-13 01:35:17.836055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:32.421 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:32.421 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:32.421 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Gscv29RW8P 00:24:32.986 01:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:32.986 [2024-10-13 01:35:18.546649] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:33.243 nvme0n1 00:24:33.243 01:35:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:33.243 Running I/O for 1 seconds... 00:24:34.617 3407.00 IOPS, 13.31 MiB/s 00:24:34.617 Latency(us) 00:24:34.617 [2024-10-12T23:35:20.195Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:34.617 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:34.617 Verification LBA range: start 0x0 length 0x2000 00:24:34.617 nvme0n1 : 1.03 3440.11 13.44 0.00 0.00 36671.78 9175.04 30098.01 00:24:34.617 [2024-10-12T23:35:20.195Z] =================================================================================================================== 00:24:34.617 [2024-10-12T23:35:20.195Z] Total : 3440.11 13.44 0.00 0.00 36671.78 9175.04 30098.01 00:24:34.617 { 00:24:34.617 "results": [ 00:24:34.617 { 00:24:34.617 "job": "nvme0n1", 00:24:34.617 "core_mask": "0x2", 00:24:34.617 "workload": "verify", 00:24:34.617 "status": "finished", 00:24:34.617 "verify_range": { 00:24:34.617 "start": 0, 00:24:34.617 "length": 8192 00:24:34.617 }, 00:24:34.617 "queue_depth": 128, 00:24:34.617 "io_size": 4096, 00:24:34.617 "runtime": 1.027875, 00:24:34.617 "iops": 3440.107016903806, 00:24:34.617 "mibps": 13.437918034780493, 00:24:34.617 "io_failed": 0, 00:24:34.617 "io_timeout": 0, 00:24:34.617 "avg_latency_us": 36671.782483660136, 00:24:34.617 "min_latency_us": 9175.04, 00:24:34.617 "max_latency_us": 30098.014814814815 00:24:34.617 } 00:24:34.618 ], 00:24:34.618 "core_count": 1 00:24:34.618 } 00:24:34.618 01:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:24:34.618 01:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.618 01:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:34.618 01:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.618 01:35:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:24:34.618 "subsystems": [ 00:24:34.618 { 00:24:34.618 "subsystem": "keyring", 00:24:34.618 "config": [ 00:24:34.618 { 00:24:34.618 "method": "keyring_file_add_key", 00:24:34.618 "params": { 00:24:34.618 "name": "key0", 00:24:34.618 "path": "/tmp/tmp.Gscv29RW8P" 00:24:34.618 } 00:24:34.618 } 00:24:34.618 ] 00:24:34.618 }, 00:24:34.618 { 00:24:34.618 "subsystem": "iobuf", 00:24:34.618 "config": [ 00:24:34.618 { 00:24:34.618 "method": "iobuf_set_options", 00:24:34.618 "params": { 00:24:34.618 "small_pool_count": 8192, 00:24:34.618 "large_pool_count": 1024, 00:24:34.618 "small_bufsize": 8192, 00:24:34.618 "large_bufsize": 135168 00:24:34.618 } 00:24:34.618 } 00:24:34.618 ] 00:24:34.618 }, 00:24:34.618 { 00:24:34.618 "subsystem": "sock", 00:24:34.618 "config": [ 00:24:34.618 { 00:24:34.618 "method": "sock_set_default_impl", 00:24:34.618 "params": { 00:24:34.618 "impl_name": "posix" 00:24:34.618 } 00:24:34.618 }, 00:24:34.618 { 00:24:34.618 "method": "sock_impl_set_options", 00:24:34.618 "params": { 00:24:34.618 "impl_name": "ssl", 00:24:34.618 "recv_buf_size": 4096, 00:24:34.618 "send_buf_size": 4096, 00:24:34.618 "enable_recv_pipe": true, 00:24:34.618 "enable_quickack": false, 00:24:34.618 "enable_placement_id": 0, 00:24:34.618 "enable_zerocopy_send_server": true, 00:24:34.618 "enable_zerocopy_send_client": false, 00:24:34.618 "zerocopy_threshold": 0, 00:24:34.618 "tls_version": 0, 00:24:34.618 "enable_ktls": false 00:24:34.618 } 00:24:34.618 }, 00:24:34.618 { 00:24:34.618 "method": "sock_impl_set_options", 00:24:34.618 "params": { 00:24:34.618 "impl_name": "posix", 00:24:34.618 "recv_buf_size": 2097152, 00:24:34.618 "send_buf_size": 2097152, 00:24:34.618 "enable_recv_pipe": true, 00:24:34.618 "enable_quickack": false, 00:24:34.618 "enable_placement_id": 0, 00:24:34.618 "enable_zerocopy_send_server": true, 00:24:34.618 "enable_zerocopy_send_client": false, 00:24:34.618 "zerocopy_threshold": 0, 00:24:34.618 "tls_version": 0, 00:24:34.618 "enable_ktls": false 00:24:34.618 } 00:24:34.618 } 00:24:34.618 ] 00:24:34.618 }, 00:24:34.618 { 00:24:34.618 "subsystem": "vmd", 00:24:34.618 "config": [] 00:24:34.618 }, 00:24:34.618 { 00:24:34.618 "subsystem": "accel", 00:24:34.618 "config": [ 00:24:34.618 { 00:24:34.618 "method": "accel_set_options", 00:24:34.618 "params": { 00:24:34.618 "small_cache_size": 128, 00:24:34.618 "large_cache_size": 16, 00:24:34.618 "task_count": 2048, 00:24:34.618 "sequence_count": 2048, 00:24:34.618 "buf_count": 2048 00:24:34.618 } 00:24:34.618 } 00:24:34.618 ] 00:24:34.618 }, 00:24:34.618 { 00:24:34.618 "subsystem": "bdev", 00:24:34.618 "config": [ 00:24:34.618 { 00:24:34.618 "method": "bdev_set_options", 00:24:34.618 "params": { 00:24:34.618 "bdev_io_pool_size": 65535, 00:24:34.618 "bdev_io_cache_size": 256, 00:24:34.618 "bdev_auto_examine": true, 00:24:34.618 "iobuf_small_cache_size": 128, 00:24:34.618 "iobuf_large_cache_size": 16 00:24:34.618 } 00:24:34.618 }, 00:24:34.618 { 00:24:34.618 "method": "bdev_raid_set_options", 00:24:34.618 "params": { 00:24:34.618 "process_window_size_kb": 1024, 00:24:34.618 "process_max_bandwidth_mb_sec": 0 00:24:34.618 } 00:24:34.618 }, 00:24:34.618 { 00:24:34.618 "method": "bdev_iscsi_set_options", 00:24:34.618 "params": { 00:24:34.618 "timeout_sec": 30 00:24:34.618 } 00:24:34.618 }, 00:24:34.618 { 00:24:34.618 "method": "bdev_nvme_set_options", 00:24:34.618 "params": { 00:24:34.618 "action_on_timeout": "none", 00:24:34.618 "timeout_us": 0, 00:24:34.618 
"timeout_admin_us": 0, 00:24:34.618 "keep_alive_timeout_ms": 10000, 00:24:34.618 "arbitration_burst": 0, 00:24:34.618 "low_priority_weight": 0, 00:24:34.618 "medium_priority_weight": 0, 00:24:34.618 "high_priority_weight": 0, 00:24:34.618 "nvme_adminq_poll_period_us": 10000, 00:24:34.618 "nvme_ioq_poll_period_us": 0, 00:24:34.618 "io_queue_requests": 0, 00:24:34.618 "delay_cmd_submit": true, 00:24:34.618 "transport_retry_count": 4, 00:24:34.618 "bdev_retry_count": 3, 00:24:34.618 "transport_ack_timeout": 0, 00:24:34.618 "ctrlr_loss_timeout_sec": 0, 00:24:34.618 "reconnect_delay_sec": 0, 00:24:34.618 "fast_io_fail_timeout_sec": 0, 00:24:34.618 "disable_auto_failback": false, 00:24:34.618 "generate_uuids": false, 00:24:34.618 "transport_tos": 0, 00:24:34.618 "nvme_error_stat": false, 00:24:34.618 "rdma_srq_size": 0, 00:24:34.618 "io_path_stat": false, 00:24:34.618 "allow_accel_sequence": false, 00:24:34.618 "rdma_max_cq_size": 0, 00:24:34.618 "rdma_cm_event_timeout_ms": 0, 00:24:34.618 "dhchap_digests": [ 00:24:34.618 "sha256", 00:24:34.618 "sha384", 00:24:34.618 "sha512" 00:24:34.618 ], 00:24:34.618 "dhchap_dhgroups": [ 00:24:34.618 "null", 00:24:34.618 "ffdhe2048", 00:24:34.618 "ffdhe3072", 00:24:34.618 "ffdhe4096", 00:24:34.618 "ffdhe6144", 00:24:34.618 "ffdhe8192" 00:24:34.618 ] 00:24:34.618 } 00:24:34.618 }, 00:24:34.618 { 00:24:34.618 "method": "bdev_nvme_set_hotplug", 00:24:34.618 "params": { 00:24:34.618 "period_us": 100000, 00:24:34.618 "enable": false 00:24:34.618 } 00:24:34.618 }, 00:24:34.618 { 00:24:34.618 "method": "bdev_malloc_create", 00:24:34.618 "params": { 00:24:34.618 "name": "malloc0", 00:24:34.618 "num_blocks": 8192, 00:24:34.618 "block_size": 4096, 00:24:34.618 "physical_block_size": 4096, 00:24:34.618 "uuid": "8c9941ad-a9e7-4930-82fc-693e38c17187", 00:24:34.618 "optimal_io_boundary": 0, 00:24:34.618 "md_size": 0, 00:24:34.618 "dif_type": 0, 00:24:34.618 "dif_is_head_of_md": false, 00:24:34.618 "dif_pi_format": 0 00:24:34.618 } 00:24:34.618 }, 00:24:34.618 { 00:24:34.618 "method": "bdev_wait_for_examine" 00:24:34.618 } 00:24:34.618 ] 00:24:34.618 }, 00:24:34.618 { 00:24:34.618 "subsystem": "nbd", 00:24:34.618 "config": [] 00:24:34.618 }, 00:24:34.618 { 00:24:34.618 "subsystem": "scheduler", 00:24:34.618 "config": [ 00:24:34.618 { 00:24:34.618 "method": "framework_set_scheduler", 00:24:34.618 "params": { 00:24:34.618 "name": "static" 00:24:34.618 } 00:24:34.618 } 00:24:34.618 ] 00:24:34.618 }, 00:24:34.618 { 00:24:34.618 "subsystem": "nvmf", 00:24:34.618 "config": [ 00:24:34.618 { 00:24:34.618 "method": "nvmf_set_config", 00:24:34.618 "params": { 00:24:34.618 "discovery_filter": "match_any", 00:24:34.618 "admin_cmd_passthru": { 00:24:34.618 "identify_ctrlr": false 00:24:34.618 }, 00:24:34.618 "dhchap_digests": [ 00:24:34.618 "sha256", 00:24:34.618 "sha384", 00:24:34.618 "sha512" 00:24:34.618 ], 00:24:34.618 "dhchap_dhgroups": [ 00:24:34.618 "null", 00:24:34.618 "ffdhe2048", 00:24:34.618 "ffdhe3072", 00:24:34.618 "ffdhe4096", 00:24:34.618 "ffdhe6144", 00:24:34.618 "ffdhe8192" 00:24:34.618 ] 00:24:34.618 } 00:24:34.618 }, 00:24:34.618 { 00:24:34.618 "method": "nvmf_set_max_subsystems", 00:24:34.618 "params": { 00:24:34.618 "max_subsystems": 1024 00:24:34.618 } 00:24:34.618 }, 00:24:34.618 { 00:24:34.618 "method": "nvmf_set_crdt", 00:24:34.618 "params": { 00:24:34.618 "crdt1": 0, 00:24:34.618 "crdt2": 0, 00:24:34.618 "crdt3": 0 00:24:34.618 } 00:24:34.618 }, 00:24:34.618 { 00:24:34.618 "method": "nvmf_create_transport", 00:24:34.618 "params": { 00:24:34.618 "trtype": 
"TCP", 00:24:34.618 "max_queue_depth": 128, 00:24:34.618 "max_io_qpairs_per_ctrlr": 127, 00:24:34.618 "in_capsule_data_size": 4096, 00:24:34.618 "max_io_size": 131072, 00:24:34.618 "io_unit_size": 131072, 00:24:34.618 "max_aq_depth": 128, 00:24:34.618 "num_shared_buffers": 511, 00:24:34.618 "buf_cache_size": 4294967295, 00:24:34.618 "dif_insert_or_strip": false, 00:24:34.618 "zcopy": false, 00:24:34.618 "c2h_success": false, 00:24:34.618 "sock_priority": 0, 00:24:34.618 "abort_timeout_sec": 1, 00:24:34.618 "ack_timeout": 0, 00:24:34.618 "data_wr_pool_size": 0 00:24:34.618 } 00:24:34.618 }, 00:24:34.618 { 00:24:34.618 "method": "nvmf_create_subsystem", 00:24:34.618 "params": { 00:24:34.618 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:34.618 "allow_any_host": false, 00:24:34.618 "serial_number": "00000000000000000000", 00:24:34.618 "model_number": "SPDK bdev Controller", 00:24:34.618 "max_namespaces": 32, 00:24:34.618 "min_cntlid": 1, 00:24:34.618 "max_cntlid": 65519, 00:24:34.618 "ana_reporting": false 00:24:34.618 } 00:24:34.618 }, 00:24:34.618 { 00:24:34.618 "method": "nvmf_subsystem_add_host", 00:24:34.618 "params": { 00:24:34.618 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:34.618 "host": "nqn.2016-06.io.spdk:host1", 00:24:34.618 "psk": "key0" 00:24:34.618 } 00:24:34.618 }, 00:24:34.619 { 00:24:34.619 "method": "nvmf_subsystem_add_ns", 00:24:34.619 "params": { 00:24:34.619 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:34.619 "namespace": { 00:24:34.619 "nsid": 1, 00:24:34.619 "bdev_name": "malloc0", 00:24:34.619 "nguid": "8C9941ADA9E7493082FC693E38C17187", 00:24:34.619 "uuid": "8c9941ad-a9e7-4930-82fc-693e38c17187", 00:24:34.619 "no_auto_visible": false 00:24:34.619 } 00:24:34.619 } 00:24:34.619 }, 00:24:34.619 { 00:24:34.619 "method": "nvmf_subsystem_add_listener", 00:24:34.619 "params": { 00:24:34.619 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:34.619 "listen_address": { 00:24:34.619 "trtype": "TCP", 00:24:34.619 "adrfam": "IPv4", 00:24:34.619 "traddr": "10.0.0.2", 00:24:34.619 "trsvcid": "4420" 00:24:34.619 }, 00:24:34.619 "secure_channel": false, 00:24:34.619 "sock_impl": "ssl" 00:24:34.619 } 00:24:34.619 } 00:24:34.619 ] 00:24:34.619 } 00:24:34.619 ] 00:24:34.619 }' 00:24:34.619 01:35:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:34.877 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:24:34.877 "subsystems": [ 00:24:34.877 { 00:24:34.877 "subsystem": "keyring", 00:24:34.877 "config": [ 00:24:34.877 { 00:24:34.877 "method": "keyring_file_add_key", 00:24:34.877 "params": { 00:24:34.877 "name": "key0", 00:24:34.877 "path": "/tmp/tmp.Gscv29RW8P" 00:24:34.877 } 00:24:34.877 } 00:24:34.877 ] 00:24:34.877 }, 00:24:34.877 { 00:24:34.877 "subsystem": "iobuf", 00:24:34.877 "config": [ 00:24:34.877 { 00:24:34.877 "method": "iobuf_set_options", 00:24:34.877 "params": { 00:24:34.877 "small_pool_count": 8192, 00:24:34.877 "large_pool_count": 1024, 00:24:34.877 "small_bufsize": 8192, 00:24:34.877 "large_bufsize": 135168 00:24:34.877 } 00:24:34.877 } 00:24:34.877 ] 00:24:34.877 }, 00:24:34.877 { 00:24:34.877 "subsystem": "sock", 00:24:34.877 "config": [ 00:24:34.877 { 00:24:34.877 "method": "sock_set_default_impl", 00:24:34.877 "params": { 00:24:34.877 "impl_name": "posix" 00:24:34.877 } 00:24:34.877 }, 00:24:34.877 { 00:24:34.877 "method": "sock_impl_set_options", 00:24:34.877 "params": { 00:24:34.877 "impl_name": "ssl", 00:24:34.877 
"recv_buf_size": 4096, 00:24:34.877 "send_buf_size": 4096, 00:24:34.877 "enable_recv_pipe": true, 00:24:34.877 "enable_quickack": false, 00:24:34.877 "enable_placement_id": 0, 00:24:34.877 "enable_zerocopy_send_server": true, 00:24:34.877 "enable_zerocopy_send_client": false, 00:24:34.877 "zerocopy_threshold": 0, 00:24:34.877 "tls_version": 0, 00:24:34.877 "enable_ktls": false 00:24:34.877 } 00:24:34.877 }, 00:24:34.877 { 00:24:34.877 "method": "sock_impl_set_options", 00:24:34.877 "params": { 00:24:34.877 "impl_name": "posix", 00:24:34.877 "recv_buf_size": 2097152, 00:24:34.877 "send_buf_size": 2097152, 00:24:34.877 "enable_recv_pipe": true, 00:24:34.877 "enable_quickack": false, 00:24:34.877 "enable_placement_id": 0, 00:24:34.878 "enable_zerocopy_send_server": true, 00:24:34.878 "enable_zerocopy_send_client": false, 00:24:34.878 "zerocopy_threshold": 0, 00:24:34.878 "tls_version": 0, 00:24:34.878 "enable_ktls": false 00:24:34.878 } 00:24:34.878 } 00:24:34.878 ] 00:24:34.878 }, 00:24:34.878 { 00:24:34.878 "subsystem": "vmd", 00:24:34.878 "config": [] 00:24:34.878 }, 00:24:34.878 { 00:24:34.878 "subsystem": "accel", 00:24:34.878 "config": [ 00:24:34.878 { 00:24:34.878 "method": "accel_set_options", 00:24:34.878 "params": { 00:24:34.878 "small_cache_size": 128, 00:24:34.878 "large_cache_size": 16, 00:24:34.878 "task_count": 2048, 00:24:34.878 "sequence_count": 2048, 00:24:34.878 "buf_count": 2048 00:24:34.878 } 00:24:34.878 } 00:24:34.878 ] 00:24:34.878 }, 00:24:34.878 { 00:24:34.878 "subsystem": "bdev", 00:24:34.878 "config": [ 00:24:34.878 { 00:24:34.878 "method": "bdev_set_options", 00:24:34.878 "params": { 00:24:34.878 "bdev_io_pool_size": 65535, 00:24:34.878 "bdev_io_cache_size": 256, 00:24:34.878 "bdev_auto_examine": true, 00:24:34.878 "iobuf_small_cache_size": 128, 00:24:34.878 "iobuf_large_cache_size": 16 00:24:34.878 } 00:24:34.878 }, 00:24:34.878 { 00:24:34.878 "method": "bdev_raid_set_options", 00:24:34.878 "params": { 00:24:34.878 "process_window_size_kb": 1024, 00:24:34.878 "process_max_bandwidth_mb_sec": 0 00:24:34.878 } 00:24:34.878 }, 00:24:34.878 { 00:24:34.878 "method": "bdev_iscsi_set_options", 00:24:34.878 "params": { 00:24:34.878 "timeout_sec": 30 00:24:34.878 } 00:24:34.878 }, 00:24:34.878 { 00:24:34.878 "method": "bdev_nvme_set_options", 00:24:34.878 "params": { 00:24:34.878 "action_on_timeout": "none", 00:24:34.878 "timeout_us": 0, 00:24:34.878 "timeout_admin_us": 0, 00:24:34.878 "keep_alive_timeout_ms": 10000, 00:24:34.878 "arbitration_burst": 0, 00:24:34.878 "low_priority_weight": 0, 00:24:34.878 "medium_priority_weight": 0, 00:24:34.878 "high_priority_weight": 0, 00:24:34.878 "nvme_adminq_poll_period_us": 10000, 00:24:34.878 "nvme_ioq_poll_period_us": 0, 00:24:34.878 "io_queue_requests": 512, 00:24:34.878 "delay_cmd_submit": true, 00:24:34.878 "transport_retry_count": 4, 00:24:34.878 "bdev_retry_count": 3, 00:24:34.878 "transport_ack_timeout": 0, 00:24:34.878 "ctrlr_loss_timeout_sec": 0, 00:24:34.878 "reconnect_delay_sec": 0, 00:24:34.878 "fast_io_fail_timeout_sec": 0, 00:24:34.878 "disable_auto_failback": false, 00:24:34.878 "generate_uuids": false, 00:24:34.878 "transport_tos": 0, 00:24:34.878 "nvme_error_stat": false, 00:24:34.878 "rdma_srq_size": 0, 00:24:34.878 "io_path_stat": false, 00:24:34.878 "allow_accel_sequence": false, 00:24:34.878 "rdma_max_cq_size": 0, 00:24:34.878 "rdma_cm_event_timeout_ms": 0, 00:24:34.878 "dhchap_digests": [ 00:24:34.878 "sha256", 00:24:34.878 "sha384", 00:24:34.878 "sha512" 00:24:34.878 ], 00:24:34.878 "dhchap_dhgroups": [ 
00:24:34.878 "null", 00:24:34.878 "ffdhe2048", 00:24:34.878 "ffdhe3072", 00:24:34.878 "ffdhe4096", 00:24:34.878 "ffdhe6144", 00:24:34.878 "ffdhe8192" 00:24:34.878 ] 00:24:34.878 } 00:24:34.878 }, 00:24:34.878 { 00:24:34.878 "method": "bdev_nvme_attach_controller", 00:24:34.878 "params": { 00:24:34.878 "name": "nvme0", 00:24:34.878 "trtype": "TCP", 00:24:34.878 "adrfam": "IPv4", 00:24:34.878 "traddr": "10.0.0.2", 00:24:34.878 "trsvcid": "4420", 00:24:34.878 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:34.878 "prchk_reftag": false, 00:24:34.878 "prchk_guard": false, 00:24:34.878 "ctrlr_loss_timeout_sec": 0, 00:24:34.878 "reconnect_delay_sec": 0, 00:24:34.878 "fast_io_fail_timeout_sec": 0, 00:24:34.878 "psk": "key0", 00:24:34.878 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:34.878 "hdgst": false, 00:24:34.878 "ddgst": false, 00:24:34.878 "multipath": "multipath" 00:24:34.878 } 00:24:34.878 }, 00:24:34.878 { 00:24:34.878 "method": "bdev_nvme_set_hotplug", 00:24:34.878 "params": { 00:24:34.878 "period_us": 100000, 00:24:34.878 "enable": false 00:24:34.878 } 00:24:34.878 }, 00:24:34.878 { 00:24:34.878 "method": "bdev_enable_histogram", 00:24:34.878 "params": { 00:24:34.878 "name": "nvme0n1", 00:24:34.878 "enable": true 00:24:34.878 } 00:24:34.878 }, 00:24:34.878 { 00:24:34.878 "method": "bdev_wait_for_examine" 00:24:34.878 } 00:24:34.878 ] 00:24:34.878 }, 00:24:34.878 { 00:24:34.878 "subsystem": "nbd", 00:24:34.878 "config": [] 00:24:34.878 } 00:24:34.878 ] 00:24:34.878 }' 00:24:34.878 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1643018 00:24:34.878 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1643018 ']' 00:24:34.878 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1643018 00:24:34.878 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:34.878 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:34.878 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1643018 00:24:34.878 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:34.878 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:34.878 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1643018' 00:24:34.878 killing process with pid 1643018 00:24:34.878 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1643018 00:24:34.878 Received shutdown signal, test time was about 1.000000 seconds 00:24:34.878 00:24:34.878 Latency(us) 00:24:34.878 [2024-10-12T23:35:20.456Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:34.878 [2024-10-12T23:35:20.456Z] =================================================================================================================== 00:24:34.878 [2024-10-12T23:35:20.456Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:34.878 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1643018 00:24:35.136 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1642879 00:24:35.136 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1642879 ']' 00:24:35.136 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@954 -- # kill -0 1642879 00:24:35.136 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:35.136 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:35.136 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1642879 00:24:35.136 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:35.136 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:35.136 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1642879' 00:24:35.136 killing process with pid 1642879 00:24:35.136 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1642879 00:24:35.136 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1642879 00:24:35.395 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:24:35.395 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:35.395 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:24:35.395 "subsystems": [ 00:24:35.395 { 00:24:35.395 "subsystem": "keyring", 00:24:35.395 "config": [ 00:24:35.395 { 00:24:35.395 "method": "keyring_file_add_key", 00:24:35.395 "params": { 00:24:35.395 "name": "key0", 00:24:35.395 "path": "/tmp/tmp.Gscv29RW8P" 00:24:35.395 } 00:24:35.395 } 00:24:35.395 ] 00:24:35.395 }, 00:24:35.395 { 00:24:35.395 "subsystem": "iobuf", 00:24:35.395 "config": [ 00:24:35.395 { 00:24:35.395 "method": "iobuf_set_options", 00:24:35.395 "params": { 00:24:35.395 "small_pool_count": 8192, 00:24:35.395 "large_pool_count": 1024, 00:24:35.395 "small_bufsize": 8192, 00:24:35.395 "large_bufsize": 135168 00:24:35.395 } 00:24:35.395 } 00:24:35.395 ] 00:24:35.395 }, 00:24:35.395 { 00:24:35.395 "subsystem": "sock", 00:24:35.395 "config": [ 00:24:35.395 { 00:24:35.395 "method": "sock_set_default_impl", 00:24:35.395 "params": { 00:24:35.395 "impl_name": "posix" 00:24:35.395 } 00:24:35.395 }, 00:24:35.395 { 00:24:35.395 "method": "sock_impl_set_options", 00:24:35.395 "params": { 00:24:35.395 "impl_name": "ssl", 00:24:35.395 "recv_buf_size": 4096, 00:24:35.395 "send_buf_size": 4096, 00:24:35.395 "enable_recv_pipe": true, 00:24:35.395 "enable_quickack": false, 00:24:35.395 "enable_placement_id": 0, 00:24:35.395 "enable_zerocopy_send_server": true, 00:24:35.395 "enable_zerocopy_send_client": false, 00:24:35.395 "zerocopy_threshold": 0, 00:24:35.395 "tls_version": 0, 00:24:35.395 "enable_ktls": false 00:24:35.395 } 00:24:35.395 }, 00:24:35.395 { 00:24:35.395 "method": "sock_impl_set_options", 00:24:35.395 "params": { 00:24:35.395 "impl_name": "posix", 00:24:35.395 "recv_buf_size": 2097152, 00:24:35.395 "send_buf_size": 2097152, 00:24:35.395 "enable_recv_pipe": true, 00:24:35.395 "enable_quickack": false, 00:24:35.395 "enable_placement_id": 0, 00:24:35.395 "enable_zerocopy_send_server": true, 00:24:35.395 "enable_zerocopy_send_client": false, 00:24:35.395 "zerocopy_threshold": 0, 00:24:35.395 "tls_version": 0, 00:24:35.395 "enable_ktls": false 00:24:35.395 } 00:24:35.395 } 00:24:35.395 ] 00:24:35.395 }, 00:24:35.395 { 00:24:35.395 "subsystem": "vmd", 00:24:35.395 "config": [] 00:24:35.395 }, 00:24:35.395 { 00:24:35.395 "subsystem": "accel", 
00:24:35.395 "config": [ 00:24:35.395 { 00:24:35.395 "method": "accel_set_options", 00:24:35.395 "params": { 00:24:35.395 "small_cache_size": 128, 00:24:35.395 "large_cache_size": 16, 00:24:35.395 "task_count": 2048, 00:24:35.395 "sequence_count": 2048, 00:24:35.395 "buf_count": 2048 00:24:35.395 } 00:24:35.395 } 00:24:35.395 ] 00:24:35.395 }, 00:24:35.395 { 00:24:35.395 "subsystem": "bdev", 00:24:35.395 "config": [ 00:24:35.395 { 00:24:35.395 "method": "bdev_set_options", 00:24:35.395 "params": { 00:24:35.395 "bdev_io_pool_size": 65535, 00:24:35.395 "bdev_io_cache_size": 256, 00:24:35.395 "bdev_auto_examine": true, 00:24:35.395 "iobuf_small_cache_size": 128, 00:24:35.395 "iobuf_large_cache_size": 16 00:24:35.395 } 00:24:35.395 }, 00:24:35.395 { 00:24:35.395 "method": "bdev_raid_set_options", 00:24:35.395 "params": { 00:24:35.395 "process_window_size_kb": 1024, 00:24:35.395 "process_max_bandwidth_mb_sec": 0 00:24:35.395 } 00:24:35.395 }, 00:24:35.395 { 00:24:35.395 "method": "bdev_iscsi_set_options", 00:24:35.395 "params": { 00:24:35.395 "timeout_sec": 30 00:24:35.395 } 00:24:35.395 }, 00:24:35.395 { 00:24:35.395 "method": "bdev_nvme_set_options", 00:24:35.395 "params": { 00:24:35.395 "action_on_timeout": "none", 00:24:35.395 "timeout_us": 0, 00:24:35.395 "timeout_admin_us": 0, 00:24:35.395 "keep_alive_timeout_ms": 10000, 00:24:35.395 "arbitration_burst": 0, 00:24:35.395 "low_priority_weight": 0, 00:24:35.395 "medium_priority_weight": 0, 00:24:35.395 "high_priority_weight": 0, 00:24:35.395 "nvme_adminq_poll_period_us": 10000, 00:24:35.395 "nvme_ioq_poll_period_us": 0, 00:24:35.395 "io_queue_requests": 0, 00:24:35.395 "delay_cmd_submit": true, 00:24:35.395 "transport_retry_count": 4, 00:24:35.395 "bdev_retry_count": 3, 00:24:35.395 "transport_ack_timeout": 0, 00:24:35.395 "ctrlr_loss_timeout_sec": 0, 00:24:35.395 "reconnect_delay_sec": 0, 00:24:35.395 "fast_io_fail_timeout_sec": 0, 00:24:35.395 "disable_auto_failback": false, 00:24:35.395 "generate_uuids": false, 00:24:35.395 "transport_tos": 0, 00:24:35.395 "nvme_error_stat": false, 00:24:35.395 "rdma_srq_size": 0, 00:24:35.395 "io_path_stat": false, 00:24:35.395 "allow_accel_sequence": false, 00:24:35.395 "rdma_max_cq_size": 0, 00:24:35.395 "rdma_cm_event_timeout_ms": 0, 00:24:35.395 "dhchap_digests": [ 00:24:35.395 "sha256", 00:24:35.395 "sha384", 00:24:35.395 "sha512" 00:24:35.395 ], 00:24:35.395 "dhchap_dhgroups": [ 00:24:35.395 "null", 00:24:35.395 "ffdhe2048", 00:24:35.395 "ffdhe3072", 00:24:35.395 "ffdhe4096", 00:24:35.395 "ffdhe6144", 00:24:35.395 "ffdhe8192" 00:24:35.395 ] 00:24:35.395 } 00:24:35.395 }, 00:24:35.395 { 00:24:35.395 "method": "bdev_nvme_set_hotplug", 00:24:35.395 "params": { 00:24:35.395 "period_us": 100000, 00:24:35.395 "enable": false 00:24:35.395 } 00:24:35.395 }, 00:24:35.395 { 00:24:35.395 "method": "bdev_malloc_create", 00:24:35.395 "params": { 00:24:35.395 "name": "malloc0", 00:24:35.395 "num_blocks": 8192, 00:24:35.395 "block_size": 4096, 00:24:35.395 "physical_block_size": 4096, 00:24:35.395 "uuid": "8c9941ad-a9e7-4930-82fc-693e38c17187", 00:24:35.395 "optimal_io_boundary": 0, 00:24:35.395 "md_size": 0, 00:24:35.395 "dif_type": 0, 00:24:35.395 "dif_is_head_of_md": false, 00:24:35.395 "dif_pi_format": 0 00:24:35.395 } 00:24:35.395 }, 00:24:35.395 { 00:24:35.396 "method": "bdev_wait_for_examine" 00:24:35.396 } 00:24:35.396 ] 00:24:35.396 }, 00:24:35.396 { 00:24:35.396 "subsystem": "nbd", 00:24:35.396 "config": [] 00:24:35.396 }, 00:24:35.396 { 00:24:35.396 "subsystem": "scheduler", 00:24:35.396 "config": [ 
00:24:35.396 { 00:24:35.396 "method": "framework_set_scheduler", 00:24:35.396 "params": { 00:24:35.396 "name": "static" 00:24:35.396 } 00:24:35.396 } 00:24:35.396 ] 00:24:35.396 }, 00:24:35.396 { 00:24:35.396 "subsystem": "nvmf", 00:24:35.396 "config": [ 00:24:35.396 { 00:24:35.396 "method": "nvmf_set_config", 00:24:35.396 "params": { 00:24:35.396 "discovery_filter": "match_any", 00:24:35.396 "admin_cmd_passthru": { 00:24:35.396 "identify_ctrlr": false 00:24:35.396 }, 00:24:35.396 "dhchap_digests": [ 00:24:35.396 "sha256", 00:24:35.396 "sha384", 00:24:35.396 "sha512" 00:24:35.396 ], 00:24:35.396 "dhchap_dhgroups": [ 00:24:35.396 "null", 00:24:35.396 "ffdhe2048", 00:24:35.396 "ffdhe3072", 00:24:35.396 "ffdhe4096", 00:24:35.396 "ffdhe6144", 00:24:35.396 "ffdhe8192" 00:24:35.396 ] 00:24:35.396 } 00:24:35.396 }, 00:24:35.396 { 00:24:35.396 "method": "nvmf_set_max_subsystems", 00:24:35.396 "params": { 00:24:35.396 "max_subsystems": 1024 00:24:35.396 } 00:24:35.396 }, 00:24:35.396 { 00:24:35.396 "method": "nvmf_set_crdt", 00:24:35.396 "params": { 00:24:35.396 "crdt1": 0, 00:24:35.396 "crdt2": 0, 00:24:35.396 "crdt3": 0 00:24:35.396 } 00:24:35.396 }, 00:24:35.396 { 00:24:35.396 "method": "nvmf_create_transport", 00:24:35.396 "params": { 00:24:35.396 "trtype": "TCP", 00:24:35.396 "max_queue_depth": 128, 00:24:35.396 "max_io_qpairs_per_ctrlr": 127, 00:24:35.396 "in_capsule_data_size": 4096, 00:24:35.396 "max_io_size": 131072, 00:24:35.396 "io_unit_size": 131072, 00:24:35.396 "max_aq_depth": 128, 00:24:35.396 "num_shared_buffers": 511, 00:24:35.396 "buf_cache_size": 4294967295, 00:24:35.396 "dif_insert_or_strip": false, 00:24:35.396 "zcopy": false, 00:24:35.396 "c2h_success": false, 00:24:35.396 "sock_priority": 0, 00:24:35.396 "abort_timeout_sec": 1, 00:24:35.396 "ack_timeout": 0, 00:24:35.396 "data_wr_pool_size": 0 00:24:35.396 } 00:24:35.396 }, 00:24:35.396 { 00:24:35.396 "method": "nvmf_create_subsystem", 00:24:35.396 "params": { 00:24:35.396 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:35.396 "allow_any_host": false, 00:24:35.396 "serial_number": "00000000000000000000", 00:24:35.396 "model_number": "SPDK bdev Controller", 00:24:35.396 "max_namespaces": 32, 00:24:35.396 "min_cntlid": 1, 00:24:35.396 "max_cntlid": 65519, 00:24:35.396 "ana_reporting": false 00:24:35.396 } 00:24:35.396 }, 00:24:35.396 { 00:24:35.396 "method": "nvmf_subsystem_add_host", 00:24:35.396 "params": { 00:24:35.396 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:35.396 "host": "nqn.2016-06.io.spdk:host1", 00:24:35.396 "psk": "key0" 00:24:35.396 } 00:24:35.396 }, 00:24:35.396 { 00:24:35.396 "method": "nvmf_subsystem_add_ns", 00:24:35.396 "params": { 00:24:35.396 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:35.396 "namespace": { 00:24:35.396 "nsid": 1, 00:24:35.396 "bdev_name": "malloc0", 00:24:35.396 "nguid": "8C9941ADA9E7493082FC693E38C17187", 00:24:35.396 "uuid": "8c9941ad-a9e7-4930-82fc-693e38c17187", 00:24:35.396 "no_auto_visible": false 00:24:35.396 } 00:24:35.396 } 00:24:35.396 }, 00:24:35.396 { 00:24:35.396 "method": "nvmf_subsystem_add_listener", 00:24:35.396 "params": { 00:24:35.396 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:35.396 "listen_address": { 00:24:35.396 "trtype": "TCP", 00:24:35.396 "adrfam": "IPv4", 00:24:35.396 "traddr": "10.0.0.2", 00:24:35.396 "trsvcid": "4420" 00:24:35.396 }, 00:24:35.396 "secure_channel": false, 00:24:35.396 "sock_impl": "ssl" 00:24:35.396 } 00:24:35.396 } 00:24:35.396 ] 00:24:35.396 } 00:24:35.396 ] 00:24:35.396 }' 00:24:35.396 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:24:35.396 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:35.396 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1643312 00:24:35.396 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:35.396 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1643312 00:24:35.396 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1643312 ']' 00:24:35.396 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:35.396 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:35.396 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:35.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:35.396 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:35.396 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:35.396 [2024-10-13 01:35:20.787016] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:24:35.396 [2024-10-13 01:35:20.787093] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:35.396 [2024-10-13 01:35:20.853139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.396 [2024-10-13 01:35:20.900965] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:35.396 [2024-10-13 01:35:20.901025] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:35.396 [2024-10-13 01:35:20.901049] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:35.396 [2024-10-13 01:35:20.901063] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:35.396 [2024-10-13 01:35:20.901074] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
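The target configuration above turns on TLS with essentially two entries: nvmf_subsystem_add_host binds nqn.2016-06.io.spdk:host1 to the PSK registered under the name "key0", and nvmf_subsystem_add_listener opens 10.0.0.2:4420 with "sock_impl": "ssl". The key name is resolved through the keyring subsystem, which points at a plain key file (the bdevperf config further down registers its copy as /tmp/tmp.Gscv29RW8P). As a rough sketch of how such a key file is prepared and registered out of band — the command shape is assumed from the echo/chmod steps fips.sh runs later in this log and from the keyring_file_add_key parameters shown in the config; the key value is the sample key reused by fips.sh, the path is a placeholder, and rpc.py argument order may differ between SPDK versions:

    # sketch only: write an NVMe TLS PSK in interchange format, restrict its
    # permissions, then register it with the keyring so subsystems can refer
    # to it by the name "key0"
    echo -n "NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:" > /tmp/psk.txt
    chmod 0600 /tmp/psk.txt
    scripts/rpc.py keyring_file_add_key key0 /tmp/psk.txt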
00:24:35.396 [2024-10-13 01:35:20.901796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:35.654 [2024-10-13 01:35:21.147745] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:35.654 [2024-10-13 01:35:21.179755] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:35.654 [2024-10-13 01:35:21.180055] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:35.654 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:35.654 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:35.655 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:35.655 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:35.655 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:35.913 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:35.913 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1643453 00:24:35.913 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1643453 /var/tmp/bdevperf.sock 00:24:35.913 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1643453 ']' 00:24:35.913 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:35.913 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:35.913 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:35.913 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:24:35.913 "subsystems": [ 00:24:35.913 { 00:24:35.913 "subsystem": "keyring", 00:24:35.913 "config": [ 00:24:35.913 { 00:24:35.913 "method": "keyring_file_add_key", 00:24:35.913 "params": { 00:24:35.913 "name": "key0", 00:24:35.913 "path": "/tmp/tmp.Gscv29RW8P" 00:24:35.913 } 00:24:35.913 } 00:24:35.913 ] 00:24:35.913 }, 00:24:35.913 { 00:24:35.913 "subsystem": "iobuf", 00:24:35.913 "config": [ 00:24:35.913 { 00:24:35.913 "method": "iobuf_set_options", 00:24:35.913 "params": { 00:24:35.913 "small_pool_count": 8192, 00:24:35.913 "large_pool_count": 1024, 00:24:35.913 "small_bufsize": 8192, 00:24:35.913 "large_bufsize": 135168 00:24:35.913 } 00:24:35.913 } 00:24:35.913 ] 00:24:35.913 }, 00:24:35.913 { 00:24:35.913 "subsystem": "sock", 00:24:35.913 "config": [ 00:24:35.913 { 00:24:35.913 "method": "sock_set_default_impl", 00:24:35.913 "params": { 00:24:35.913 "impl_name": "posix" 00:24:35.913 } 00:24:35.913 }, 00:24:35.913 { 00:24:35.913 "method": "sock_impl_set_options", 00:24:35.913 "params": { 00:24:35.913 "impl_name": "ssl", 00:24:35.913 "recv_buf_size": 4096, 00:24:35.913 "send_buf_size": 4096, 00:24:35.913 "enable_recv_pipe": true, 00:24:35.913 "enable_quickack": false, 00:24:35.913 "enable_placement_id": 0, 00:24:35.913 "enable_zerocopy_send_server": true, 00:24:35.913 "enable_zerocopy_send_client": false, 00:24:35.913 "zerocopy_threshold": 0, 00:24:35.913 "tls_version": 0, 00:24:35.913 "enable_ktls": false 00:24:35.913 } 
00:24:35.913 }, 00:24:35.913 { 00:24:35.913 "method": "sock_impl_set_options", 00:24:35.913 "params": { 00:24:35.913 "impl_name": "posix", 00:24:35.913 "recv_buf_size": 2097152, 00:24:35.913 "send_buf_size": 2097152, 00:24:35.913 "enable_recv_pipe": true, 00:24:35.913 "enable_quickack": false, 00:24:35.913 "enable_placement_id": 0, 00:24:35.913 "enable_zerocopy_send_server": true, 00:24:35.913 "enable_zerocopy_send_client": false, 00:24:35.913 "zerocopy_threshold": 0, 00:24:35.913 "tls_version": 0, 00:24:35.913 "enable_ktls": false 00:24:35.913 } 00:24:35.913 } 00:24:35.913 ] 00:24:35.913 }, 00:24:35.913 { 00:24:35.913 "subsystem": "vmd", 00:24:35.913 "config": [] 00:24:35.913 }, 00:24:35.913 { 00:24:35.913 "subsystem": "accel", 00:24:35.913 "config": [ 00:24:35.913 { 00:24:35.913 "method": "accel_set_options", 00:24:35.913 "params": { 00:24:35.913 "small_cache_size": 128, 00:24:35.913 "large_cache_size": 16, 00:24:35.913 "task_count": 2048, 00:24:35.913 "sequence_count": 2048, 00:24:35.913 "buf_count": 2048 00:24:35.913 } 00:24:35.913 } 00:24:35.913 ] 00:24:35.913 }, 00:24:35.913 { 00:24:35.913 "subsystem": "bdev", 00:24:35.913 "config": [ 00:24:35.913 { 00:24:35.913 "method": "bdev_set_options", 00:24:35.913 "params": { 00:24:35.913 "bdev_io_pool_size": 65535, 00:24:35.913 "bdev_io_cache_size": 256, 00:24:35.913 "bdev_auto_examine": true, 00:24:35.913 "iobuf_small_cache_size": 128, 00:24:35.913 "iobuf_large_cache_size": 16 00:24:35.913 } 00:24:35.913 }, 00:24:35.913 { 00:24:35.913 "method": "bdev_raid_set_options", 00:24:35.913 "params": { 00:24:35.913 "process_window_size_kb": 1024, 00:24:35.913 "process_max_bandwidth_mb_sec": 0 00:24:35.913 } 00:24:35.913 }, 00:24:35.913 { 00:24:35.913 "method": "bdev_iscsi_set_options", 00:24:35.913 "params": { 00:24:35.913 "timeout_sec": 30 00:24:35.913 } 00:24:35.913 }, 00:24:35.913 { 00:24:35.913 "method": "bdev_nvme_set_options", 00:24:35.913 "params": { 00:24:35.913 "action_on_timeout": "none", 00:24:35.913 "timeout_us": 0, 00:24:35.913 "timeout_admin_us": 0, 00:24:35.913 "keep_alive_timeout_ms": 10000, 00:24:35.913 "arbitration_burst": 0, 00:24:35.913 "low_priority_weight": 0, 00:24:35.913 "medium_priority_weight": 0, 00:24:35.913 "high_priority_weight": 0, 00:24:35.913 "nvme_adminq_poll_period_us": 10000, 00:24:35.913 "nvme_ioq_poll_period_us": 0, 00:24:35.913 "io_queue_requests": 512, 00:24:35.913 "delay_cmd_submit": true, 00:24:35.913 "transport_retry_count": 4, 00:24:35.913 "bdev_retry_count": 3, 00:24:35.913 "transport_ack_timeout": 0, 00:24:35.913 "ctrlr_loss_timeout_sec": 0, 00:24:35.913 "reconnect_delay_sec": 0, 00:24:35.913 "fast_io_fail_timeout_sec": 0, 00:24:35.913 "disable_auto_failback": false, 00:24:35.913 "generate_uuids": false, 00:24:35.913 "transport_tos": 0, 00:24:35.913 "nvme_error_stat": false, 00:24:35.913 "rdma_srq_size": 0, 00:24:35.913 "io_path_stat": false, 00:24:35.913 "allow_accel_sequence": false, 00:24:35.913 "rdma_max_cq_size": 0, 00:24:35.913 "rdma_cm_event_timeout_ms": 0, 00:24:35.913 "dhchap_digests": [ 00:24:35.913 "sha256", 00:24:35.913 "sha384", 00:24:35.914 "sha512" 00:24:35.914 ], 00:24:35.914 "dhchap_dhgroups": [ 00:24:35.914 "null", 00:24:35.914 "ffdhe2048", 00:24:35.914 "ffdhe3072", 00:24:35.914 "ffdhe4096", 00:24:35.914 "ffdhe6144", 00:24:35.914 "ffdhe8192" 00:24:35.914 ] 00:24:35.914 } 00:24:35.914 }, 00:24:35.914 { 00:24:35.914 "method": "bdev_nvme_attach_controller", 00:24:35.914 "params": { 00:24:35.914 "name": "nvme0", 00:24:35.914 "trtype": "TCP", 00:24:35.914 "adrfam": "IPv4", 00:24:35.914 
"traddr": "10.0.0.2", 00:24:35.914 "trsvcid": "4420", 00:24:35.914 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:35.914 "prchk_reftag": false, 00:24:35.914 "prchk_guard": false, 00:24:35.914 "ctrlr_loss_timeout_sec": 0, 00:24:35.914 "reconnect_delay_sec": 0, 00:24:35.914 "fast_io_fail_timeout_sec": 0, 00:24:35.914 "psk": "key0", 00:24:35.914 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:35.914 "hdgst": false, 00:24:35.914 "ddgst": false, 00:24:35.914 "multipath": "multipath" 00:24:35.914 } 00:24:35.914 }, 00:24:35.914 { 00:24:35.914 "method": "bdev_nvme_set_hotplug", 00:24:35.914 "params": { 00:24:35.914 "period_us": 100000, 00:24:35.914 "enable": false 00:24:35.914 } 00:24:35.914 }, 00:24:35.914 { 00:24:35.914 "method": "bdev_enable_histogram", 00:24:35.914 "params": { 00:24:35.914 "name": "nvme0n1", 00:24:35.914 "enable": true 00:24:35.914 } 00:24:35.914 }, 00:24:35.914 { 00:24:35.914 "method": "bdev_wait_for_examine" 00:24:35.914 } 00:24:35.914 ] 00:24:35.914 }, 00:24:35.914 { 00:24:35.914 "subsystem": "nbd", 00:24:35.914 "config": [] 00:24:35.914 } 00:24:35.914 ] 00:24:35.914 }' 00:24:35.914 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:35.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:35.914 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:35.914 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:35.914 [2024-10-13 01:35:21.281420] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:24:35.914 [2024-10-13 01:35:21.281520] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1643453 ] 00:24:35.914 [2024-10-13 01:35:21.344689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.914 [2024-10-13 01:35:21.394008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:36.172 [2024-10-13 01:35:21.575422] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:36.172 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:36.172 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:24:36.172 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:36.172 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:24:36.430 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.430 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:36.687 Running I/O for 1 seconds... 
00:24:37.620 2920.00 IOPS, 11.41 MiB/s 00:24:37.620 Latency(us) 00:24:37.620 [2024-10-12T23:35:23.198Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:37.620 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:37.620 Verification LBA range: start 0x0 length 0x2000 00:24:37.620 nvme0n1 : 1.03 2950.30 11.52 0.00 0.00 42739.64 6844.87 36894.34 00:24:37.620 [2024-10-12T23:35:23.199Z] =================================================================================================================== 00:24:37.621 [2024-10-12T23:35:23.199Z] Total : 2950.30 11.52 0.00 0.00 42739.64 6844.87 36894.34 00:24:37.621 { 00:24:37.621 "results": [ 00:24:37.621 { 00:24:37.621 "job": "nvme0n1", 00:24:37.621 "core_mask": "0x2", 00:24:37.621 "workload": "verify", 00:24:37.621 "status": "finished", 00:24:37.621 "verify_range": { 00:24:37.621 "start": 0, 00:24:37.621 "length": 8192 00:24:37.621 }, 00:24:37.621 "queue_depth": 128, 00:24:37.621 "io_size": 4096, 00:24:37.621 "runtime": 1.033454, 00:24:37.621 "iops": 2950.3006423120914, 00:24:37.621 "mibps": 11.524611884031607, 00:24:37.621 "io_failed": 0, 00:24:37.621 "io_timeout": 0, 00:24:37.621 "avg_latency_us": 42739.64117342663, 00:24:37.621 "min_latency_us": 6844.8711111111115, 00:24:37.621 "max_latency_us": 36894.34074074074 00:24:37.621 } 00:24:37.621 ], 00:24:37.621 "core_count": 1 00:24:37.621 } 00:24:37.621 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:24:37.621 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:24:37.621 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:37.621 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:24:37.621 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:24:37.621 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:24:37.621 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:37.621 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:24:37.621 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:24:37.621 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:24:37.621 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:37.621 nvmf_trace.0 00:24:37.879 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:24:37.879 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1643453 00:24:37.879 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1643453 ']' 00:24:37.879 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1643453 00:24:37.879 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:37.879 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:37.879 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1643453 
00:24:37.879 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:37.879 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:37.879 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1643453' 00:24:37.879 killing process with pid 1643453 00:24:37.879 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1643453 00:24:37.879 Received shutdown signal, test time was about 1.000000 seconds 00:24:37.879 00:24:37.879 Latency(us) 00:24:37.879 [2024-10-12T23:35:23.457Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:37.879 [2024-10-12T23:35:23.457Z] =================================================================================================================== 00:24:37.879 [2024-10-12T23:35:23.457Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:37.879 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1643453 00:24:37.879 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:37.879 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:37.879 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:24:37.879 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:37.879 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:24:37.879 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:37.879 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:37.879 rmmod nvme_tcp 00:24:38.137 rmmod nvme_fabrics 00:24:38.137 rmmod nvme_keyring 00:24:38.137 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:38.137 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:24:38.137 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:24:38.137 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@515 -- # '[' -n 1643312 ']' 00:24:38.137 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # killprocess 1643312 00:24:38.137 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1643312 ']' 00:24:38.137 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1643312 00:24:38.137 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:24:38.137 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:38.137 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1643312 00:24:38.137 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:38.137 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:38.137 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1643312' 00:24:38.137 killing process with pid 1643312 00:24:38.137 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1643312 00:24:38.137 01:35:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1643312 00:24:38.395 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:38.395 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:38.395 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:38.395 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:24:38.395 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-save 00:24:38.395 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:38.395 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-restore 00:24:38.395 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:38.395 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:38.395 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.395 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:38.395 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.299 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:40.299 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.hyfX8lPvXp /tmp/tmp.mvSeyxAdvl /tmp/tmp.Gscv29RW8P 00:24:40.299 00:24:40.299 real 1m21.765s 00:24:40.299 user 2m16.700s 00:24:40.299 sys 0m25.092s 00:24:40.299 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:40.299 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:40.299 ************************************ 00:24:40.299 END TEST nvmf_tls 00:24:40.299 ************************************ 00:24:40.299 01:35:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:40.299 01:35:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:40.299 01:35:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:40.299 01:35:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:40.299 ************************************ 00:24:40.299 START TEST nvmf_fips 00:24:40.299 ************************************ 00:24:40.299 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:40.558 * Looking for test storage... 
00:24:40.558 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:40.558 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:40.558 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:24:40.558 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:40.558 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:40.558 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:40.558 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:40.558 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:40.558 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:40.558 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:40.558 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:40.558 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:40.558 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:24:40.558 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:24:40.558 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:24:40.558 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:40.558 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:40.559 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:40.559 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:40.559 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:40.559 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:40.559 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:40.559 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:40.559 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:40.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.559 --rc genhtml_branch_coverage=1 00:24:40.559 --rc genhtml_function_coverage=1 00:24:40.559 --rc genhtml_legend=1 00:24:40.559 --rc geninfo_all_blocks=1 00:24:40.559 --rc geninfo_unexecuted_blocks=1 00:24:40.559 00:24:40.559 ' 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:40.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.559 --rc genhtml_branch_coverage=1 00:24:40.559 --rc genhtml_function_coverage=1 00:24:40.559 --rc genhtml_legend=1 00:24:40.559 --rc geninfo_all_blocks=1 00:24:40.559 --rc geninfo_unexecuted_blocks=1 00:24:40.559 00:24:40.559 ' 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:40.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.559 --rc genhtml_branch_coverage=1 00:24:40.559 --rc genhtml_function_coverage=1 00:24:40.559 --rc genhtml_legend=1 00:24:40.559 --rc geninfo_all_blocks=1 00:24:40.559 --rc geninfo_unexecuted_blocks=1 00:24:40.559 00:24:40.559 ' 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:40.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.559 --rc genhtml_branch_coverage=1 00:24:40.559 --rc genhtml_function_coverage=1 00:24:40.559 --rc genhtml_legend=1 00:24:40.559 --rc geninfo_all_blocks=1 00:24:40.559 --rc geninfo_unexecuted_blocks=1 00:24:40.559 00:24:40.559 ' 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:40.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:24:40.559 01:35:26 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:24:40.559 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:24:40.560 Error setting digest 00:24:40.560 40C20908397F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:24:40.560 40C20908397F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:40.560 
01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:24:40.560 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:42.461 01:35:27 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:42.461 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:42.461 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:42.461 01:35:27 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:42.461 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:42.461 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # is_hw=yes 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:42.461 01:35:27 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:42.461 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:42.461 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:42.461 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:42.720 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:42.720 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:42.720 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:42.720 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:42.720 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:42.720 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:42.720 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:42.720 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:42.720 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:42.720 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.335 ms 00:24:42.720 00:24:42.720 --- 10.0.0.2 ping statistics --- 00:24:42.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:42.720 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:24:42.720 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:42.720 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:42.720 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:24:42.720 00:24:42.720 --- 10.0.0.1 ping statistics --- 00:24:42.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:42.720 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:24:42.720 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:42.720 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # return 0 00:24:42.720 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:42.720 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:42.720 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:42.720 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:42.720 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:42.720 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:42.720 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:42.720 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:24:42.720 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:42.720 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:42.720 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:42.720 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # nvmfpid=1645692 00:24:42.720 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:42.720 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # waitforlisten 1645692 00:24:42.720 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1645692 ']' 00:24:42.720 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:42.720 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:42.720 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:42.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:42.720 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:42.720 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:42.720 [2024-10-13 01:35:28.226613] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
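For readers skimming the nvmf_tcp_init step above: the helper isolates the first detected port (cvl_0_0) in a private network namespace and leaves the second port (cvl_0_1) in the root namespace, so the initiator at 10.0.0.1 and the target at 10.0.0.2 reach each other over TCP port 4420. A condensed sketch of those commands, taken from the trace rather than from nvmf/common.sh itself (the real iptables rule also carries an SPDK_NVMF comment tag; interface names are the ones detected in this run):

  ip netns add cvl_0_0_ns_spdk                  # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                             # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> root ns

The two ping checks are exactly the ones whose output appears just above.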
00:24:42.720 [2024-10-13 01:35:28.226705] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:42.720 [2024-10-13 01:35:28.291058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.978 [2024-10-13 01:35:28.338222] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:42.978 [2024-10-13 01:35:28.338290] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:42.978 [2024-10-13 01:35:28.338304] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:42.978 [2024-10-13 01:35:28.338315] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:42.978 [2024-10-13 01:35:28.338325] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:42.978 [2024-10-13 01:35:28.338962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:42.978 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:42.978 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:24:42.978 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:42.978 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:42.978 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:42.978 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:42.978 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:24:42.978 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:42.978 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:24:42.978 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.4a7 00:24:42.978 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:42.978 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.4a7 00:24:42.978 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.4a7 00:24:42.978 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.4a7 00:24:42.978 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:43.236 [2024-10-13 01:35:28.791201] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:43.236 [2024-10-13 01:35:28.807219] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:43.236 [2024-10-13 01:35:28.807461] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:43.494 malloc0 00:24:43.494 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:43.494 01:35:28 
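Before the bdevperf initiator comes up, fips.sh stages the NVMe/TCP TLS pre-shared key on disk and hands the path to the target-side configuration helper (setup_nvmf_tgt_conf). A minimal sketch of that staging step, using the key string and temp path visible in the trace; the redirection of the echo into the file is implied by the surrounding commands rather than shown verbatim in the xtrace:

  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  key_path=$(mktemp -t spdk-psk.XXX)   # /tmp/spdk-psk.4a7 in this run
  echo -n "$key" > "$key_path"         # interchange-format PSK, no trailing newline
  chmod 0600 "$key_path"               # keep the PSK readable only by the owner

The target is then configured over scripts/rpc.py, which is why the trace shows the TCP transport being created and a TLS-capable listener opened on 10.0.0.2:4420 (with the "TLS support is considered experimental" notice).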
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1645718 00:24:43.494 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:43.494 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1645718 /var/tmp/bdevperf.sock 00:24:43.494 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1645718 ']' 00:24:43.494 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:43.494 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:43.494 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:43.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:43.494 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:43.494 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:43.494 [2024-10-13 01:35:28.941655] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:24:43.494 [2024-10-13 01:35:28.941739] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1645718 ] 00:24:43.494 [2024-10-13 01:35:29.001595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.494 [2024-10-13 01:35:29.048415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:43.752 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:43.752 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:24:43.752 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.4a7 00:24:44.009 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:44.267 [2024-10-13 01:35:29.666100] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:44.267 TLSTESTn1 00:24:44.267 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:44.524 Running I/O for 10 seconds... 
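On the initiator side, the same PSK is registered with the bdevperf application over its private RPC socket and a TLS-protected controller is attached before the verify workload starts. The invocations below are condensed from the trace (rpc.py and bdevperf.py stand for the scripts of the same name in the SPDK tree; the per-second IOPS samples that follow are the output of the final call):

  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.4a7
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests   # drive the 10 s verify run

The resulting bdev is the TLSTESTn1 namespace that the latency table below reports on.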
00:24:46.390 3436.00 IOPS, 13.42 MiB/s [2024-10-12T23:35:32.900Z] 3379.50 IOPS, 13.20 MiB/s [2024-10-12T23:35:34.272Z] 3375.00 IOPS, 13.18 MiB/s [2024-10-12T23:35:35.203Z] 3400.50 IOPS, 13.28 MiB/s [2024-10-12T23:35:36.135Z] 3369.40 IOPS, 13.16 MiB/s [2024-10-12T23:35:37.068Z] 3380.33 IOPS, 13.20 MiB/s [2024-10-12T23:35:38.000Z] 3405.71 IOPS, 13.30 MiB/s [2024-10-12T23:35:38.932Z] 3414.12 IOPS, 13.34 MiB/s [2024-10-12T23:35:40.305Z] 3430.89 IOPS, 13.40 MiB/s [2024-10-12T23:35:40.305Z] 3444.50 IOPS, 13.46 MiB/s 00:24:54.727 Latency(us) 00:24:54.727 [2024-10-12T23:35:40.305Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:54.727 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:54.727 Verification LBA range: start 0x0 length 0x2000 00:24:54.727 TLSTESTn1 : 10.02 3450.25 13.48 0.00 0.00 37038.03 7087.60 33010.73 00:24:54.727 [2024-10-12T23:35:40.305Z] =================================================================================================================== 00:24:54.727 [2024-10-12T23:35:40.305Z] Total : 3450.25 13.48 0.00 0.00 37038.03 7087.60 33010.73 00:24:54.727 { 00:24:54.727 "results": [ 00:24:54.727 { 00:24:54.727 "job": "TLSTESTn1", 00:24:54.727 "core_mask": "0x4", 00:24:54.727 "workload": "verify", 00:24:54.727 "status": "finished", 00:24:54.727 "verify_range": { 00:24:54.727 "start": 0, 00:24:54.727 "length": 8192 00:24:54.727 }, 00:24:54.727 "queue_depth": 128, 00:24:54.727 "io_size": 4096, 00:24:54.727 "runtime": 10.019853, 00:24:54.727 "iops": 3450.2502182417247, 00:24:54.727 "mibps": 13.477539915006737, 00:24:54.727 "io_failed": 0, 00:24:54.727 "io_timeout": 0, 00:24:54.727 "avg_latency_us": 37038.03005156323, 00:24:54.727 "min_latency_us": 7087.597037037037, 00:24:54.727 "max_latency_us": 33010.72592592592 00:24:54.727 } 00:24:54.727 ], 00:24:54.727 "core_count": 1 00:24:54.727 } 00:24:54.727 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:54.727 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:54.727 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:24:54.727 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:24:54.727 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:24:54.727 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:54.727 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:24:54.727 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:24:54.727 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:24:54.727 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:54.727 nvmf_trace.0 00:24:54.727 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:24:54.727 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1645718 00:24:54.727 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1645718 ']' 00:24:54.727 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@954 -- # kill -0 1645718 00:24:54.727 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:24:54.727 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:54.727 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1645718 00:24:54.727 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:54.727 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:54.727 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1645718' 00:24:54.727 killing process with pid 1645718 00:24:54.727 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1645718 00:24:54.727 Received shutdown signal, test time was about 10.000000 seconds 00:24:54.727 00:24:54.727 Latency(us) 00:24:54.727 [2024-10-12T23:35:40.305Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:54.727 [2024-10-12T23:35:40.305Z] =================================================================================================================== 00:24:54.727 [2024-10-12T23:35:40.305Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:54.727 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1645718 00:24:54.727 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:54.727 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:54.727 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:24:54.727 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:54.727 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:24:54.727 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:54.727 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:54.727 rmmod nvme_tcp 00:24:54.727 rmmod nvme_fabrics 00:24:54.727 rmmod nvme_keyring 00:24:54.727 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:54.727 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:24:54.727 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:24:54.727 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@515 -- # '[' -n 1645692 ']' 00:24:54.727 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # killprocess 1645692 00:24:54.727 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1645692 ']' 00:24:54.727 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 1645692 00:24:54.727 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:24:54.727 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:54.727 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1645692 00:24:54.990 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:54.990 01:35:40 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:54.990 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1645692' 00:24:54.990 killing process with pid 1645692 00:24:54.990 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1645692 00:24:54.990 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1645692 00:24:54.990 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:54.990 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:54.990 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:54.990 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:24:54.990 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-save 00:24:54.990 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:54.990 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-restore 00:24:54.990 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:54.990 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:54.990 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:54.990 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:54.990 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.541 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:57.541 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.4a7 00:24:57.541 00:24:57.541 real 0m16.731s 00:24:57.541 user 0m22.411s 00:24:57.541 sys 0m5.279s 00:24:57.541 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:57.541 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:57.541 ************************************ 00:24:57.541 END TEST nvmf_fips 00:24:57.541 ************************************ 00:24:57.541 01:35:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:57.541 01:35:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:57.541 01:35:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:57.541 01:35:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:57.541 ************************************ 00:24:57.541 START TEST nvmf_control_msg_list 00:24:57.541 ************************************ 00:24:57.541 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:57.541 * Looking for test storage... 
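A quick note on the teardown that closed out the FIPS test above, before the control_msg_list output continues: the cleanup amounts roughly to the following. This is a paraphrase of the killprocess/nvmftestfini/cleanup helpers rather than a verbatim excerpt, the iptables pipeline is inferred from the three commands shown in the trace, and the PIDs are specific to this run:

  kill 1645718 && wait 1645718                  # stop the bdevperf initiator (reactor_2)
  kill 1645692 && wait 1645692                  # stop the nvmf_tgt target (reactor_1)
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK-tagged rule
  ip netns delete cvl_0_0_ns_spdk               # roughly what remove_spdk_ns does here
  ip -4 addr flush cvl_0_1
  rm -f /tmp/spdk-psk.4a7                       # discard the temporary TLS PSK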
00:24:57.541 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:57.541 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:57.541 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:24:57.541 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:57.541 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:57.541 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:57.541 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:57.541 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:57.541 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:24:57.541 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:24:57.541 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:24:57.541 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:24:57.541 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:24:57.541 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:24:57.541 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:24:57.541 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:57.541 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:24:57.541 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:24:57.541 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:57.541 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:57.541 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:24:57.541 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:24:57.541 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:57.541 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:24:57.541 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:24:57.541 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:24:57.541 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:24:57.541 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:57.541 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:24:57.541 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:24:57.541 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:57.541 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:57.541 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:24:57.541 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:57.541 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:57.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.541 --rc genhtml_branch_coverage=1 00:24:57.541 --rc genhtml_function_coverage=1 00:24:57.541 --rc genhtml_legend=1 00:24:57.541 --rc geninfo_all_blocks=1 00:24:57.541 --rc geninfo_unexecuted_blocks=1 00:24:57.541 00:24:57.541 ' 00:24:57.541 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:57.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.541 --rc genhtml_branch_coverage=1 00:24:57.541 --rc genhtml_function_coverage=1 00:24:57.541 --rc genhtml_legend=1 00:24:57.541 --rc geninfo_all_blocks=1 00:24:57.541 --rc geninfo_unexecuted_blocks=1 00:24:57.541 00:24:57.542 ' 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:57.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.542 --rc genhtml_branch_coverage=1 00:24:57.542 --rc genhtml_function_coverage=1 00:24:57.542 --rc genhtml_legend=1 00:24:57.542 --rc geninfo_all_blocks=1 00:24:57.542 --rc geninfo_unexecuted_blocks=1 00:24:57.542 00:24:57.542 ' 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:57.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.542 --rc genhtml_branch_coverage=1 00:24:57.542 --rc genhtml_function_coverage=1 00:24:57.542 --rc genhtml_legend=1 00:24:57.542 --rc geninfo_all_blocks=1 00:24:57.542 --rc geninfo_unexecuted_blocks=1 00:24:57.542 00:24:57.542 ' 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:57.542 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:24:57.542 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:24:59.444 01:35:44 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:59.444 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:59.444 01:35:44 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:59.444 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:59.444 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:59.444 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # is_hw=yes 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:59.444 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:59.445 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:59.445 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:59.445 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:59.445 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:59.445 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:59.445 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:59.445 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:59.445 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:59.445 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:59.445 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:59.445 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:59.445 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:59.445 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:59.445 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:59.445 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:59.445 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:59.445 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:59.445 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:59.445 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:59.445 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:59.445 01:35:44 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:59.445 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:59.445 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:59.445 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:59.445 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:24:59.445 00:24:59.445 --- 10.0.0.2 ping statistics --- 00:24:59.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:59.445 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:24:59.445 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:59.445 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:59.445 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:24:59.445 00:24:59.445 --- 10.0.0.1 ping statistics --- 00:24:59.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:59.445 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:24:59.445 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:59.445 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@448 -- # return 0 00:24:59.445 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:59.445 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:59.445 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:59.445 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:59.445 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:59.445 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:59.445 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:59.445 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:24:59.445 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:59.445 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:59.445 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:59.445 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # nvmfpid=1648974 00:24:59.445 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:59.445 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # waitforlisten 1648974 00:24:59.445 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 1648974 ']' 00:24:59.445 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:59.445 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:59.445 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:59.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:59.445 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:59.445 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:59.445 [2024-10-13 01:35:44.982861] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:24:59.445 [2024-10-13 01:35:44.982957] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:59.711 [2024-10-13 01:35:45.051569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:59.711 [2024-10-13 01:35:45.101354] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:59.711 [2024-10-13 01:35:45.101420] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:59.711 [2024-10-13 01:35:45.101447] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:59.711 [2024-10-13 01:35:45.101460] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:59.711 [2024-10-13 01:35:45.101481] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
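As with the FIPS test, nvmfappstart launches the target inside the cvl_0_0_ns_spdk namespace and then blocks until its RPC socket is up. A rough sketch of what that amounts to for this test; the nvmf_tgt command line is taken from the trace, while the polling loop is a paraphrase of waitforlisten (rpc_get_methods is just a cheap RPC to probe with), not the exact helper code:

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  nvmfpid=$!
  until rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5    # wait for the target to open /var/tmp/spdk.sock
  done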
00:24:59.711 [2024-10-13 01:35:45.102137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:59.711 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:59.711 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:24:59.711 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:59.711 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:59.711 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:59.711 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:59.711 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:59.711 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:59.711 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:24:59.711 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.711 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:59.711 [2024-10-13 01:35:45.276477] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:59.711 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:59.711 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:24:59.711 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:59.711 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:00.033 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.033 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:00.033 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.033 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:00.033 Malloc0 00:25:00.033 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.033 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:00.033 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.033 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:00.033 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.033 01:35:45 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:00.033 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:00.033 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:00.033 [2024-10-13 01:35:45.316173] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:00.033 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:00.033 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1649115 00:25:00.033 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:00.033 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1649116 00:25:00.033 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:00.033 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1649117 00:25:00.033 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:00.033 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1649115 00:25:00.033 [2024-10-13 01:35:45.385159] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:00.033 [2024-10-13 01:35:45.385466] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:00.033 [2024-10-13 01:35:45.385741] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:00.967 Initializing NVMe Controllers 00:25:00.967 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:00.967 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:25:00.967 Initialization complete. Launching workers. 
00:25:00.967 ======================================================== 00:25:00.967 Latency(us) 00:25:00.967 Device Information : IOPS MiB/s Average min max 00:25:00.967 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 62.00 0.24 16144.34 228.65 41885.39 00:25:00.967 ======================================================== 00:25:00.967 Total : 62.00 0.24 16144.34 228.65 41885.39 00:25:00.967 00:25:00.967 Initializing NVMe Controllers 00:25:00.967 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:00.967 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:25:00.967 Initialization complete. Launching workers. 00:25:00.967 ======================================================== 00:25:00.967 Latency(us) 00:25:00.967 Device Information : IOPS MiB/s Average min max 00:25:00.967 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 26.00 0.10 39316.70 191.73 40961.63 00:25:00.967 ======================================================== 00:25:00.967 Total : 26.00 0.10 39316.70 191.73 40961.63 00:25:00.967 00:25:00.967 Initializing NVMe Controllers 00:25:00.967 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:00.967 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:25:00.967 Initialization complete. Launching workers. 00:25:00.967 ======================================================== 00:25:00.967 Latency(us) 00:25:00.967 Device Information : IOPS MiB/s Average min max 00:25:00.967 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40893.66 40756.85 40930.89 00:25:00.967 ======================================================== 00:25:00.967 Total : 25.00 0.10 40893.66 40756.85 40930.89 00:25:00.967 00:25:00.967 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1649116 00:25:00.967 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1649117 00:25:00.967 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:00.967 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:25:00.967 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:00.967 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:25:00.967 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:00.967 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:25:00.967 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:00.967 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:00.967 rmmod nvme_tcp 00:25:01.225 rmmod nvme_fabrics 00:25:01.225 rmmod nvme_keyring 00:25:01.225 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:01.225 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:25:01.225 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:25:01.225 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@515 
-- # '[' -n 1648974 ']' 00:25:01.225 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # killprocess 1648974 00:25:01.225 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 1648974 ']' 00:25:01.225 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 1648974 00:25:01.225 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:25:01.225 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:01.225 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1648974 00:25:01.225 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:01.225 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:01.225 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1648974' 00:25:01.225 killing process with pid 1648974 00:25:01.225 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 1648974 00:25:01.225 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@974 -- # wait 1648974 00:25:01.483 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:01.483 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:01.483 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:01.483 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:25:01.483 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-save 00:25:01.483 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:01.483 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-restore 00:25:01.483 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:01.483 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:01.483 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:01.483 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:01.483 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.387 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:03.387 00:25:03.387 real 0m6.249s 00:25:03.387 user 0m5.724s 00:25:03.387 sys 0m2.384s 00:25:03.387 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:03.387 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:03.387 ************************************ 00:25:03.388 END TEST nvmf_control_msg_list 00:25:03.388 ************************************ 
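For reference, the control_msg_list run traced above boils down to a short RPC sequence plus three single-queue perf clients. The sketch below is reconstructed from the xtrace output only; it assumes an nvmf_tgt is already running and reachable through scripts/rpc.py on the default /var/tmp/spdk.sock, and it reuses the subsystem name, block device size, listener address, and perf flags exactly as they appear in this run (they are not general defaults).

  # Hedged sketch of what target/control_msg_list.sh drove via rpc_cmd in the trace above.
  # Values (768-byte capsules, 1 control message, Malloc0 32 MiB x 512 B, 10.0.0.2:4420)
  # are copied from this run.
  scripts/rpc.py nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
  scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
  scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # Three 1-second, queue-depth-1 random-read clients pinned to cores 0x2, 0x4, 0x8,
  # mirroring the perf_pid1/2/3 + wait pattern in the trace.
  for mask in 0x2 0x4 0x8; do
      spdk_nvme_perf -c "$mask" -q 1 -o 4096 -w randread -t 1 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
  done
  wait

The point of the tiny transport limits (--in-capsule-data-size 768 --control-msg-num 1) is to force the target to queue control messages while several initiators connect at once, which is why the per-core latency tables above show large averages for such a small I/O count.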
00:25:03.388 01:35:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:03.388 01:35:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:03.388 01:35:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:03.388 01:35:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:03.388 ************************************ 00:25:03.388 START TEST nvmf_wait_for_buf 00:25:03.388 ************************************ 00:25:03.388 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:03.646 * Looking for test storage... 00:25:03.646 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:03.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.646 --rc genhtml_branch_coverage=1 00:25:03.646 --rc genhtml_function_coverage=1 00:25:03.646 --rc genhtml_legend=1 00:25:03.646 --rc geninfo_all_blocks=1 00:25:03.646 --rc geninfo_unexecuted_blocks=1 00:25:03.646 00:25:03.646 ' 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:03.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.646 --rc genhtml_branch_coverage=1 00:25:03.646 --rc genhtml_function_coverage=1 00:25:03.646 --rc genhtml_legend=1 00:25:03.646 --rc geninfo_all_blocks=1 00:25:03.646 --rc geninfo_unexecuted_blocks=1 00:25:03.646 00:25:03.646 ' 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:03.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.646 --rc genhtml_branch_coverage=1 00:25:03.646 --rc genhtml_function_coverage=1 00:25:03.646 --rc genhtml_legend=1 00:25:03.646 --rc geninfo_all_blocks=1 00:25:03.646 --rc geninfo_unexecuted_blocks=1 00:25:03.646 00:25:03.646 ' 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:03.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.646 --rc genhtml_branch_coverage=1 00:25:03.646 --rc genhtml_function_coverage=1 00:25:03.646 --rc genhtml_legend=1 00:25:03.646 --rc geninfo_all_blocks=1 00:25:03.646 --rc geninfo_unexecuted_blocks=1 00:25:03.646 00:25:03.646 ' 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:03.646 01:35:49 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:03.646 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:03.646 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:25:03.647 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@467 -- # 
'[' -z tcp ']' 00:25:03.647 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:03.647 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:03.647 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:03.647 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:03.647 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.647 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:03.647 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.647 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:03.647 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:03.647 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:03.647 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:05.547 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:05.547 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:05.547 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:05.547 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:05.547 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:05.547 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:05.547 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:05.547 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:25:05.547 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:05.547 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:25:05.547 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:25:05.547 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:25:05.547 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:25:05.547 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:25:05.547 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:05.547 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:05.547 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:05.547 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:05.547 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:05.547 
01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:05.547 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:05.547 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:05.547 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:05.547 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:05.547 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:05.547 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:05.547 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:05.547 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:05.547 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:05.547 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:05.548 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:05.548 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:05.548 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:05.548 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # is_hw=yes 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:05.548 01:35:51 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:05.548 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:05.805 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:05.806 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:05.806 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:05.806 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:05.806 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:05.806 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:05.806 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:05.806 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:05.806 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:05.806 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:25:05.806 00:25:05.806 --- 10.0.0.2 ping statistics --- 00:25:05.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.806 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:25:05.806 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:05.806 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:05.806 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:25:05.806 00:25:05.806 --- 10.0.0.1 ping statistics --- 00:25:05.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.806 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:25:05.806 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:05.806 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@448 -- # return 0 00:25:05.806 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:05.806 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:05.806 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:05.806 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:05.806 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:05.806 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:05.806 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:05.806 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:25:05.806 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:05.806 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:05.806 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:05.806 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # nvmfpid=1651194 00:25:05.806 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:05.806 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # waitforlisten 1651194 00:25:05.806 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 1651194 ']' 00:25:05.806 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:05.806 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:05.806 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:05.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:05.806 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:05.806 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:05.806 [2024-10-13 01:35:51.314558] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
00:25:05.806 [2024-10-13 01:35:51.314653] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:05.806 [2024-10-13 01:35:51.383385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.064 [2024-10-13 01:35:51.430480] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:06.064 [2024-10-13 01:35:51.430551] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:06.064 [2024-10-13 01:35:51.430577] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:06.064 [2024-10-13 01:35:51.430591] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:06.064 [2024-10-13 01:35:51.430603] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:06.064 [2024-10-13 01:35:51.431244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.064 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:06.064 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:25:06.064 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:06.064 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:06.064 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:06.064 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:06.064 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:06.064 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:06.064 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:25:06.064 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.064 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:06.064 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.064 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:25:06.064 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.064 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:06.064 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.064 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:25:06.064 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.064 01:35:51 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:06.323 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.323 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:06.323 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.323 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:06.323 Malloc0 00:25:06.323 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.323 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:25:06.323 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.323 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:06.323 [2024-10-13 01:35:51.688048] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:06.323 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.323 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:25:06.323 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.323 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:06.323 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.323 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:06.323 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.323 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:06.323 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.323 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:06.323 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.323 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:06.323 [2024-10-13 01:35:51.712275] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:06.323 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.323 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:06.323 [2024-10-13 01:35:51.784626] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:07.694 Initializing NVMe Controllers 00:25:07.694 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:07.694 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:25:07.694 Initialization complete. Launching workers. 00:25:07.694 ======================================================== 00:25:07.694 Latency(us) 00:25:07.694 Device Information : IOPS MiB/s Average min max 00:25:07.694 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 71.75 8.97 58298.87 30920.12 111731.69 00:25:07.694 ======================================================== 00:25:07.694 Total : 71.75 8.97 58298.87 30920.12 111731.69 00:25:07.694 00:25:07.694 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:25:07.694 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:25:07.694 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.694 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:07.694 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.951 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1126 00:25:07.951 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1126 -eq 0 ]] 00:25:07.951 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:07.951 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:25:07.951 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:07.951 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:25:07.951 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:07.951 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:25:07.951 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:07.951 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:07.951 rmmod nvme_tcp 00:25:07.951 rmmod nvme_fabrics 00:25:07.951 rmmod nvme_keyring 00:25:07.951 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:07.951 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:25:07.951 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:25:07.951 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@515 -- # '[' -n 1651194 ']' 00:25:07.951 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # killprocess 1651194 00:25:07.951 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 1651194 ']' 00:25:07.951 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 1651194 00:25:07.952 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@955 -- # uname 00:25:07.952 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:07.952 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1651194 00:25:07.952 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:07.952 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:07.952 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1651194' 00:25:07.952 killing process with pid 1651194 00:25:07.952 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 1651194 00:25:07.952 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 1651194 00:25:08.211 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:08.211 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:08.211 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:08.211 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:25:08.211 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-save 00:25:08.211 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:08.211 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-restore 00:25:08.211 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:08.211 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:08.211 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:08.211 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:08.211 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:10.111 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:10.111 00:25:10.111 real 0m6.667s 00:25:10.111 user 0m3.103s 00:25:10.111 sys 0m2.025s 00:25:10.111 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:10.111 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:10.111 ************************************ 00:25:10.111 END TEST nvmf_wait_for_buf 00:25:10.111 ************************************ 00:25:10.111 01:35:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:25:10.111 01:35:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:10.111 01:35:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:10.111 01:35:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:10.111 01:35:55 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:10.111 ************************************ 00:25:10.111 START TEST nvmf_fuzz 00:25:10.111 ************************************ 00:25:10.111 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:10.370 * Looking for test storage... 00:25:10.370 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:10.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.370 --rc genhtml_branch_coverage=1 00:25:10.370 --rc genhtml_function_coverage=1 00:25:10.370 --rc genhtml_legend=1 00:25:10.370 --rc geninfo_all_blocks=1 00:25:10.370 --rc geninfo_unexecuted_blocks=1 00:25:10.370 00:25:10.370 ' 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:10.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.370 --rc genhtml_branch_coverage=1 00:25:10.370 --rc genhtml_function_coverage=1 00:25:10.370 --rc genhtml_legend=1 00:25:10.370 --rc geninfo_all_blocks=1 00:25:10.370 --rc geninfo_unexecuted_blocks=1 00:25:10.370 00:25:10.370 ' 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:10.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.370 --rc genhtml_branch_coverage=1 00:25:10.370 --rc genhtml_function_coverage=1 00:25:10.370 --rc genhtml_legend=1 00:25:10.370 --rc geninfo_all_blocks=1 00:25:10.370 --rc geninfo_unexecuted_blocks=1 00:25:10.370 00:25:10.370 ' 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:10.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.370 --rc genhtml_branch_coverage=1 00:25:10.370 --rc genhtml_function_coverage=1 00:25:10.370 --rc genhtml_legend=1 00:25:10.370 --rc geninfo_all_blocks=1 00:25:10.370 --rc geninfo_unexecuted_blocks=1 00:25:10.370 00:25:10.370 ' 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:10.370 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:10.370 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@472 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:25:10.371 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:10.371 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:10.371 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:10.371 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:10.371 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:10.371 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:10.371 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:10.371 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:10.371 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:25:10.371 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:12.899 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:12.899 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:12.899 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:12.899 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # is_hw=yes 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:12.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:12.900 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:25:12.900 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:12.900 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:12.900 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:12.900 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:12.900 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:12.900 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:12.900 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:12.900 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:12.900 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:12.900 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:12.900 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:12.900 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:12.900 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:12.900 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:12.900 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:12.900 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:25:12.900 00:25:12.900 --- 10.0.0.2 ping statistics --- 00:25:12.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:12.900 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:25:12.900 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:12.900 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:12.900 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:25:12.900 00:25:12.900 --- 10.0.0.1 ping statistics --- 00:25:12.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:12.900 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:25:12.900 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:12.900 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@448 -- # return 0 00:25:12.900 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:12.900 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:12.900 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:12.900 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:12.900 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:12.900 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:12.900 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:12.900 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1653412 00:25:12.900 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:12.900 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:12.900 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1653412 00:25:12.900 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@831 -- # '[' -z 1653412 ']' 00:25:12.900 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:12.900 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:12.900 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:12.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
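The trace above (nvmf_tcp_init) builds the point-to-point TCP test topology before the fuzz target is configured: the target-side port cvl_0_0 is moved into its own network namespace, both ends get 10.0.0.0/24 addresses, an iptables rule admits NVMe/TCP traffic on port 4420, and reachability is verified with one ping in each direction. A minimal stand-alone sketch of that sequence, using only the namespace, interface names and addresses recorded in this log (they are specific to this test bed):

  # netns-based topology as set up by nvmf_tcp_init in this run
  ip netns add cvl_0_0_ns_spdk                              # target namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # move target NIC into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator-side address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                        # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> initiator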
00:25:12.900 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:12.900 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:12.900 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:12.900 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # return 0 00:25:12.900 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:12.900 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.900 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:12.900 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.900 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:25:12.900 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.900 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:12.900 Malloc0 00:25:12.900 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.900 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:12.900 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.900 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:12.900 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.900 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:12.900 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.900 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:12.900 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.900 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:12.900 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.900 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:12.900 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.900 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:25:12.900 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:44.962 Fuzzing completed. 
Shutting down the fuzz application 00:25:44.962 00:25:44.962 Dumping successful admin opcodes: 00:25:44.962 8, 9, 10, 24, 00:25:44.962 Dumping successful io opcodes: 00:25:44.962 0, 9, 00:25:44.962 NS: 0x2000008eff00 I/O qp, Total commands completed: 451873, total successful commands: 2628, random_seed: 1918627968 00:25:44.962 NS: 0x2000008eff00 admin qp, Total commands completed: 55568, total successful commands: 443, random_seed: 2983298560 00:25:44.962 01:36:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:44.962 Fuzzing completed. Shutting down the fuzz application 00:25:44.962 00:25:44.962 Dumping successful admin opcodes: 00:25:44.962 24, 00:25:44.962 Dumping successful io opcodes: 00:25:44.962 00:25:44.962 NS: 0x2000008eff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1840669457 00:25:44.962 NS: 0x2000008eff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1840775846 00:25:44.962 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:44.962 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.962 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:44.962 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.962 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:44.962 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:44.962 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:44.962 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:25:44.962 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:44.962 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:25:44.962 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:44.962 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:44.962 rmmod nvme_tcp 00:25:44.962 rmmod nvme_fabrics 00:25:44.962 rmmod nvme_keyring 00:25:44.962 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:44.962 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:25:44.962 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:25:44.962 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@515 -- # '[' -n 1653412 ']' 00:25:44.962 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # killprocess 1653412 00:25:44.962 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@950 -- # '[' -z 1653412 ']' 00:25:44.962 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # kill -0 1653412 00:25:44.962 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # uname 00:25:44.962 01:36:30 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:44.962 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1653412 00:25:44.962 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:44.962 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:44.962 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1653412' 00:25:44.962 killing process with pid 1653412 00:25:44.962 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@969 -- # kill 1653412 00:25:44.962 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@974 -- # wait 1653412 00:25:44.962 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:44.962 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:44.962 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:44.962 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:25:44.962 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@789 -- # iptables-save 00:25:44.962 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:44.962 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@789 -- # iptables-restore 00:25:44.962 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:44.962 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:44.962 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:44.962 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:44.962 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:46.863 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:46.863 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:46.863 00:25:46.863 real 0m36.732s 00:25:46.863 user 0m50.579s 00:25:46.863 sys 0m15.133s 00:25:46.863 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:46.863 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:46.863 ************************************ 00:25:46.863 END TEST nvmf_fuzz 00:25:46.863 ************************************ 00:25:46.863 01:36:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:46.863 01:36:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:46.863 01:36:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:46.863 01:36:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:47.122 
************************************ 00:25:47.122 START TEST nvmf_multiconnection 00:25:47.122 ************************************ 00:25:47.122 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:47.122 * Looking for test storage... 00:25:47.122 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:47.122 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:47.122 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1691 -- # lcov --version 00:25:47.122 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:47.122 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:47.122 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:47.122 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:47.122 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:47.122 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:25:47.122 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:25:47.122 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:25:47.122 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:25:47.122 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:25:47.122 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:25:47.122 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:25:47.122 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:47.122 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:25:47.122 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:25:47.122 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:47.122 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:47.122 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:25:47.122 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:25:47.122 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:47.122 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:25:47.122 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:25:47.122 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:25:47.122 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:25:47.122 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:47.122 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:25:47.122 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:25:47.122 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:47.122 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:47.122 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:25:47.122 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:47.122 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:47.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.122 --rc genhtml_branch_coverage=1 00:25:47.122 --rc genhtml_function_coverage=1 00:25:47.122 --rc genhtml_legend=1 00:25:47.122 --rc geninfo_all_blocks=1 00:25:47.122 --rc geninfo_unexecuted_blocks=1 00:25:47.122 00:25:47.122 ' 00:25:47.122 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:47.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.122 --rc genhtml_branch_coverage=1 00:25:47.122 --rc genhtml_function_coverage=1 00:25:47.122 --rc genhtml_legend=1 00:25:47.122 --rc geninfo_all_blocks=1 00:25:47.122 --rc geninfo_unexecuted_blocks=1 00:25:47.122 00:25:47.122 ' 00:25:47.122 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:47.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.122 --rc genhtml_branch_coverage=1 00:25:47.122 --rc genhtml_function_coverage=1 00:25:47.122 --rc genhtml_legend=1 00:25:47.122 --rc geninfo_all_blocks=1 00:25:47.123 --rc geninfo_unexecuted_blocks=1 00:25:47.123 00:25:47.123 ' 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:47.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.123 --rc genhtml_branch_coverage=1 00:25:47.123 --rc genhtml_function_coverage=1 00:25:47.123 --rc genhtml_legend=1 00:25:47.123 --rc geninfo_all_blocks=1 00:25:47.123 --rc geninfo_unexecuted_blocks=1 00:25:47.123 00:25:47.123 ' 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:47.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:25:47.123 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:49.653 01:36:34 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:49.653 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:49.653 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:49.653 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:49.653 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # is_hw=yes 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
cvl_0_0 up 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:49.653 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:49.654 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:49.654 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.389 ms 00:25:49.654 00:25:49.654 --- 10.0.0.2 ping statistics --- 00:25:49.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.654 rtt min/avg/max/mdev = 0.389/0.389/0.389/0.000 ms 00:25:49.654 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:49.654 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:49.654 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:25:49.654 00:25:49.654 --- 10.0.0.1 ping statistics --- 00:25:49.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.654 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:25:49.654 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:49.654 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@448 -- # return 0 00:25:49.654 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:49.654 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:49.654 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:49.654 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:49.654 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:49.654 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:49.654 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:49.654 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:49.654 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:49.654 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:49.654 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.654 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # nvmfpid=1659628 00:25:49.654 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:49.654 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # waitforlisten 1659628 00:25:49.654 01:36:34 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@831 -- # '[' -z 1659628 ']' 00:25:49.654 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:49.654 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:49.654 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:49.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:49.654 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:49.654 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.654 [2024-10-13 01:36:34.860824] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:25:49.654 [2024-10-13 01:36:34.860909] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:49.654 [2024-10-13 01:36:34.936121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:49.654 [2024-10-13 01:36:34.988645] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:49.654 [2024-10-13 01:36:34.988713] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:49.654 [2024-10-13 01:36:34.988738] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:49.654 [2024-10-13 01:36:34.988752] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:49.654 [2024-10-13 01:36:34.988764] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
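The trace above is nvmf_tcp_init splitting the two ice ports into a target side and an initiator side: cvl_0_0 is moved into a fresh network namespace and gets the target address (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), port 4420 is opened in iptables, reachability is checked with ping in both directions, and nvmf_tgt is then launched inside the namespace. Condensed from the commands echoed in the log, the setup is roughly the following (a minimal sketch, assuming root privileges, the cvl_0_0/cvl_0_1 netdev names and 10.0.0.x addresses shown above, and an SPDK tree built under build/):

  # target-side namespace; one port per side
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # let NVMe/TCP traffic (port 4420) in, then sanity-check both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

  # start the target inside the namespace (instance id, trace flags and core mask as in the log)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &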
00:25:49.654 [2024-10-13 01:36:34.990444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:49.654 [2024-10-13 01:36:34.990508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:49.654 [2024-10-13 01:36:34.990538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:49.654 [2024-10-13 01:36:34.990542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:49.654 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:49.654 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # return 0 00:25:49.654 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:49.654 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:49.654 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.654 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:49.654 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:49.654 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.654 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.654 [2024-10-13 01:36:35.139483] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:49.654 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.654 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:49.654 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:49.654 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:49.654 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.654 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.654 Malloc1 00:25:49.654 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.654 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:49.654 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.654 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.654 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.654 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:49.654 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.654 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
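Just before the first subsystem is created above, the TCP transport itself is instantiated over JSON-RPC. The rpc_cmd helper from autotest_common.sh is essentially a convenience wrapper around scripts/rpc.py talking to the target's /var/tmp/spdk.sock, so a stand-alone equivalent of that call is roughly (a sketch; the option string -t tcp -o -u 8192 is copied verbatim from the trace):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192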
00:25:49.654 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.654 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:49.654 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.654 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.654 [2024-10-13 01:36:35.211236] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:49.654 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.654 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:49.654 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:49.654 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.654 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.912 Malloc2 00:25:49.912 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.912 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:49.912 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.912 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.912 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.912 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.913 01:36:35 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.913 Malloc3 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.913 Malloc4 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.913 Malloc5 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.913 Malloc6 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.913 Malloc7 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 
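Subsystems cnode8 through cnode11 are provisioned below with exactly the same four RPCs already applied to cnode1 through cnode7 above. Collapsed back into the loop the script actually runs (for i in $(seq 1 $NVMF_SUBSYS), with NVMF_SUBSYS=11 here), the whole provisioning phase is roughly the following (a sketch using scripts/rpc.py in place of the test's rpc_cmd helper; bdev sizes, NQNs, serial numbers and the 10.0.0.2:4420 listener are taken from the trace):

  for i in $(seq 1 11); do
    # 64 MB malloc bdev with 512-byte blocks, named MallocN
    scripts/rpc.py bdev_malloc_create 64 512 -b "Malloc$i"
    # one subsystem per bdev, serial SPDKN, any host allowed (-a)
    scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done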
00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.913 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.172 Malloc8 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.172 Malloc9 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:50.172 01:36:35 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.172 Malloc10 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.172 Malloc11 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:50.172 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:50.737 01:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:50.737 01:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:50.737 01:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:50.737 01:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:50.737 01:36:36 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:53.262 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:53.262 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:53.262 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:25:53.262 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:53.262 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:53.262 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:53.262 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:53.262 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:25:53.520 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:53.520 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:53.520 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:53.520 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:53.520 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:55.417 01:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:55.417 01:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:55.417 01:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:25:55.417 01:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:55.417 01:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:55.417 01:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:55.417 01:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:55.417 01:36:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:25:56.349 01:36:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:56.349 01:36:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:56.349 01:36:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 
nvme_devices=0 00:25:56.349 01:36:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:56.349 01:36:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:58.303 01:36:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:58.303 01:36:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:58.303 01:36:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:25:58.303 01:36:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:58.303 01:36:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:58.303 01:36:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:58.303 01:36:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:58.303 01:36:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:25:58.868 01:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:58.868 01:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:58.868 01:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:58.868 01:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:58.868 01:36:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:00.764 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:00.764 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:00.764 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:26:00.764 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:00.764 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:00.764 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:00.764 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:00.764 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:26:01.697 01:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:26:01.697 01:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # 
local i=0 00:26:01.697 01:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:01.697 01:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:01.697 01:36:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:03.592 01:36:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:03.592 01:36:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:03.592 01:36:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:26:03.850 01:36:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:03.850 01:36:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:03.850 01:36:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:03.850 01:36:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:03.850 01:36:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:26:04.415 01:36:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:26:04.415 01:36:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:04.415 01:36:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:04.415 01:36:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:04.415 01:36:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:06.941 01:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:06.941 01:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:06.941 01:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:26:06.941 01:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:06.941 01:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:06.941 01:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:06.941 01:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:06.941 01:36:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:26:07.198 01:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@30 -- # waitforserial SPDK7 00:26:07.198 01:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:07.198 01:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:07.198 01:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:07.198 01:36:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:09.723 01:36:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:09.723 01:36:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:09.724 01:36:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:26:09.724 01:36:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:09.724 01:36:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:09.724 01:36:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:09.724 01:36:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:09.724 01:36:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:26:09.981 01:36:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:26:09.981 01:36:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:09.981 01:36:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:09.981 01:36:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:09.981 01:36:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:12.505 01:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:12.505 01:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:12.505 01:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:26:12.505 01:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:12.505 01:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:12.505 01:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:12.505 01:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:12.505 01:36:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:26:13.069 01:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:26:13.069 01:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:13.069 01:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:13.069 01:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:13.069 01:36:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:14.965 01:37:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:14.965 01:37:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:14.965 01:37:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:26:14.965 01:37:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:14.965 01:37:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:14.965 01:37:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:14.965 01:37:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:14.965 01:37:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:26:15.897 01:37:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:26:15.897 01:37:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:15.897 01:37:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:15.897 01:37:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:15.897 01:37:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:17.794 01:37:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:17.794 01:37:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:17.794 01:37:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:26:17.794 01:37:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:17.794 01:37:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:17.794 01:37:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:17.794 01:37:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:17.794 01:37:03 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:26:19.164 01:37:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:26:19.164 01:37:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:19.164 01:37:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:19.164 01:37:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:19.164 01:37:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:21.060 01:37:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:21.060 01:37:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:21.060 01:37:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:26:21.060 01:37:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:21.060 01:37:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:21.060 01:37:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:21.060 01:37:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:26:21.060 [global] 00:26:21.060 thread=1 00:26:21.060 invalidate=1 00:26:21.060 rw=read 00:26:21.060 time_based=1 00:26:21.060 runtime=10 00:26:21.060 ioengine=libaio 00:26:21.060 direct=1 00:26:21.060 bs=262144 00:26:21.060 iodepth=64 00:26:21.060 norandommap=1 00:26:21.060 numjobs=1 00:26:21.060 00:26:21.060 [job0] 00:26:21.060 filename=/dev/nvme0n1 00:26:21.060 [job1] 00:26:21.060 filename=/dev/nvme10n1 00:26:21.060 [job2] 00:26:21.060 filename=/dev/nvme1n1 00:26:21.060 [job3] 00:26:21.060 filename=/dev/nvme2n1 00:26:21.060 [job4] 00:26:21.060 filename=/dev/nvme3n1 00:26:21.060 [job5] 00:26:21.060 filename=/dev/nvme4n1 00:26:21.060 [job6] 00:26:21.060 filename=/dev/nvme5n1 00:26:21.060 [job7] 00:26:21.060 filename=/dev/nvme6n1 00:26:21.060 [job8] 00:26:21.060 filename=/dev/nvme7n1 00:26:21.060 [job9] 00:26:21.060 filename=/dev/nvme8n1 00:26:21.060 [job10] 00:26:21.060 filename=/dev/nvme9n1 00:26:21.060 Could not set queue depth (nvme0n1) 00:26:21.060 Could not set queue depth (nvme10n1) 00:26:21.060 Could not set queue depth (nvme1n1) 00:26:21.060 Could not set queue depth (nvme2n1) 00:26:21.060 Could not set queue depth (nvme3n1) 00:26:21.060 Could not set queue depth (nvme4n1) 00:26:21.060 Could not set queue depth (nvme5n1) 00:26:21.060 Could not set queue depth (nvme6n1) 00:26:21.060 Could not set queue depth (nvme7n1) 00:26:21.060 Could not set queue depth (nvme8n1) 00:26:21.060 Could not set queue depth (nvme9n1) 00:26:21.319 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:21.319 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:21.319 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:21.319 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:21.319 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:21.319 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:21.319 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:21.319 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:21.319 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:21.319 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:21.319 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:21.319 fio-3.35 00:26:21.319 Starting 11 threads 00:26:33.512 00:26:33.512 job0: (groupid=0, jobs=1): err= 0: pid=1663889: Sun Oct 13 01:37:17 2024 00:26:33.512 read: IOPS=169, BW=42.4MiB/s (44.4MB/s)(430MiB/10140msec) 00:26:33.512 slat (usec): min=12, max=254321, avg=5066.62, stdev=20271.31 00:26:33.512 clat (msec): min=15, max=921, avg=372.32, stdev=203.72 00:26:33.512 lat (msec): min=15, max=921, avg=377.38, stdev=207.53 00:26:33.512 clat percentiles (msec): 00:26:33.512 | 1.00th=[ 25], 5.00th=[ 69], 10.00th=[ 121], 20.00th=[ 182], 00:26:33.512 | 30.00th=[ 255], 40.00th=[ 288], 50.00th=[ 326], 60.00th=[ 422], 00:26:33.512 | 70.00th=[ 477], 80.00th=[ 535], 90.00th=[ 693], 95.00th=[ 760], 00:26:33.512 | 99.00th=[ 827], 99.50th=[ 885], 99.90th=[ 894], 99.95th=[ 919], 00:26:33.512 | 99.99th=[ 919] 00:26:33.512 bw ( KiB/s): min=14848, max=109568, per=4.99%, avg=42368.00, stdev=25315.40, samples=20 00:26:33.512 iops : min= 58, max= 428, avg=165.50, stdev=98.89, samples=20 00:26:33.512 lat (msec) : 20=0.76%, 50=1.69%, 100=3.73%, 250=23.40%, 500=46.22% 00:26:33.512 lat (msec) : 750=18.74%, 1000=5.47% 00:26:33.512 cpu : usr=0.15%, sys=0.61%, ctx=336, majf=0, minf=3721 00:26:33.512 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.9%, >=64=96.3% 00:26:33.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.512 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:33.512 issued rwts: total=1718,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.512 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:33.512 job1: (groupid=0, jobs=1): err= 0: pid=1663890: Sun Oct 13 01:37:17 2024 00:26:33.512 read: IOPS=191, BW=47.9MiB/s (50.2MB/s)(486MiB/10140msec) 00:26:33.512 slat (usec): min=9, max=405742, avg=3992.71, stdev=19738.60 00:26:33.512 clat (msec): min=34, max=1068, avg=329.95, stdev=177.37 00:26:33.512 lat (msec): min=35, max=1068, avg=333.94, stdev=179.84 00:26:33.512 clat percentiles (msec): 00:26:33.512 | 1.00th=[ 79], 5.00th=[ 144], 10.00th=[ 176], 20.00th=[ 194], 00:26:33.512 | 30.00th=[ 213], 40.00th=[ 230], 50.00th=[ 255], 60.00th=[ 305], 00:26:33.512 | 70.00th=[ 393], 80.00th=[ 477], 90.00th=[ 625], 95.00th=[ 684], 00:26:33.512 | 99.00th=[ 818], 99.50th=[ 827], 99.90th=[ 1003], 99.95th=[ 1070], 00:26:33.512 | 99.99th=[ 1070] 00:26:33.512 bw ( KiB/s): min=14848, 
max=83456, per=5.67%, avg=48081.15, stdev=20617.91, samples=20 00:26:33.512 iops : min= 58, max= 326, avg=187.80, stdev=80.54, samples=20 00:26:33.512 lat (msec) : 50=0.51%, 100=1.96%, 250=45.83%, 500=34.86%, 750=13.65% 00:26:33.512 lat (msec) : 1000=3.04%, 2000=0.15% 00:26:33.512 cpu : usr=0.07%, sys=0.51%, ctx=312, majf=0, minf=4097 00:26:33.512 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.8% 00:26:33.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.512 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:33.512 issued rwts: total=1942,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.512 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:33.512 job2: (groupid=0, jobs=1): err= 0: pid=1663891: Sun Oct 13 01:37:17 2024 00:26:33.512 read: IOPS=175, BW=43.9MiB/s (46.0MB/s)(445MiB/10143msec) 00:26:33.512 slat (usec): min=8, max=355477, avg=4811.22, stdev=20470.86 00:26:33.512 clat (msec): min=2, max=903, avg=359.61, stdev=225.63 00:26:33.512 lat (msec): min=2, max=903, avg=364.43, stdev=228.49 00:26:33.512 clat percentiles (msec): 00:26:33.512 | 1.00th=[ 27], 5.00th=[ 49], 10.00th=[ 89], 20.00th=[ 121], 00:26:33.512 | 30.00th=[ 167], 40.00th=[ 271], 50.00th=[ 376], 60.00th=[ 426], 00:26:33.512 | 70.00th=[ 502], 80.00th=[ 584], 90.00th=[ 693], 95.00th=[ 743], 00:26:33.512 | 99.00th=[ 818], 99.50th=[ 844], 99.90th=[ 902], 99.95th=[ 902], 00:26:33.512 | 99.99th=[ 902] 00:26:33.512 bw ( KiB/s): min=19456, max=131584, per=5.18%, avg=43929.60, stdev=30068.42, samples=20 00:26:33.512 iops : min= 76, max= 514, avg=171.60, stdev=117.45, samples=20 00:26:33.512 lat (msec) : 4=0.22%, 10=0.22%, 20=0.28%, 50=4.89%, 100=6.35% 00:26:33.512 lat (msec) : 250=26.52%, 500=31.12%, 750=26.35%, 1000=4.04% 00:26:33.512 cpu : usr=0.09%, sys=0.66%, ctx=272, majf=0, minf=4098 00:26:33.512 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.8%, >=64=96.5% 00:26:33.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.512 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:33.512 issued rwts: total=1780,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.512 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:33.512 job3: (groupid=0, jobs=1): err= 0: pid=1663892: Sun Oct 13 01:37:17 2024 00:26:33.512 read: IOPS=240, BW=60.2MiB/s (63.1MB/s)(604MiB/10045msec) 00:26:33.512 slat (usec): min=13, max=219406, avg=4044.23, stdev=15545.92 00:26:33.512 clat (msec): min=24, max=715, avg=261.74, stdev=173.77 00:26:33.512 lat (msec): min=24, max=743, avg=265.79, stdev=176.21 00:26:33.512 clat percentiles (msec): 00:26:33.512 | 1.00th=[ 53], 5.00th=[ 78], 10.00th=[ 86], 20.00th=[ 100], 00:26:33.512 | 30.00th=[ 116], 40.00th=[ 157], 50.00th=[ 211], 60.00th=[ 271], 00:26:33.512 | 70.00th=[ 363], 80.00th=[ 422], 90.00th=[ 527], 95.00th=[ 617], 00:26:33.512 | 99.00th=[ 684], 99.50th=[ 693], 99.90th=[ 718], 99.95th=[ 718], 00:26:33.512 | 99.99th=[ 718] 00:26:33.512 bw ( KiB/s): min=20992, max=171008, per=7.10%, avg=60262.40, stdev=40162.72, samples=20 00:26:33.512 iops : min= 82, max= 668, avg=235.40, stdev=156.89, samples=20 00:26:33.512 lat (msec) : 50=0.95%, 100=19.86%, 250=33.76%, 500=33.72%, 750=11.71% 00:26:33.512 cpu : usr=0.16%, sys=0.90%, ctx=325, majf=0, minf=4097 00:26:33.512 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 00:26:33.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.512 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:33.512 issued rwts: total=2417,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.512 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:33.512 job4: (groupid=0, jobs=1): err= 0: pid=1663893: Sun Oct 13 01:37:17 2024 00:26:33.512 read: IOPS=463, BW=116MiB/s (121MB/s)(1175MiB/10143msec) 00:26:33.512 slat (usec): min=13, max=178847, avg=2036.85, stdev=9906.13 00:26:33.512 clat (usec): min=1783, max=884761, avg=135981.73, stdev=173224.38 00:26:33.512 lat (usec): min=1822, max=884848, avg=138018.58, stdev=175859.13 00:26:33.512 clat percentiles (msec): 00:26:33.512 | 1.00th=[ 3], 5.00th=[ 25], 10.00th=[ 42], 20.00th=[ 47], 00:26:33.512 | 30.00th=[ 55], 40.00th=[ 59], 50.00th=[ 69], 60.00th=[ 81], 00:26:33.512 | 70.00th=[ 96], 80.00th=[ 148], 90.00th=[ 405], 95.00th=[ 592], 00:26:33.512 | 99.00th=[ 768], 99.50th=[ 793], 99.90th=[ 852], 99.95th=[ 860], 00:26:33.512 | 99.99th=[ 885] 00:26:33.512 bw ( KiB/s): min=15872, max=329728, per=13.98%, avg=118667.40, stdev=103572.51, samples=20 00:26:33.512 iops : min= 62, max= 1288, avg=463.50, stdev=404.58, samples=20 00:26:33.512 lat (msec) : 2=0.02%, 4=3.23%, 10=0.51%, 20=0.47%, 50=18.86% 00:26:33.512 lat (msec) : 100=48.03%, 250=14.64%, 500=7.28%, 750=5.09%, 1000=1.87% 00:26:33.512 cpu : usr=0.36%, sys=1.87%, ctx=998, majf=0, minf=4097 00:26:33.512 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:26:33.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.512 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:33.512 issued rwts: total=4699,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.512 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:33.512 job5: (groupid=0, jobs=1): err= 0: pid=1663894: Sun Oct 13 01:37:17 2024 00:26:33.512 read: IOPS=270, BW=67.5MiB/s (70.8MB/s)(679MiB/10061msec) 00:26:33.512 slat (usec): min=9, max=176204, avg=2861.06, stdev=12663.12 00:26:33.512 clat (msec): min=16, max=685, avg=233.99, stdev=133.31 00:26:33.512 lat (msec): min=16, max=713, avg=236.85, stdev=135.22 00:26:33.512 clat percentiles (msec): 00:26:33.512 | 1.00th=[ 32], 5.00th=[ 58], 10.00th=[ 75], 20.00th=[ 123], 00:26:33.512 | 30.00th=[ 150], 40.00th=[ 167], 50.00th=[ 203], 60.00th=[ 255], 00:26:33.512 | 70.00th=[ 313], 80.00th=[ 338], 90.00th=[ 397], 95.00th=[ 502], 00:26:33.512 | 99.00th=[ 617], 99.50th=[ 642], 99.90th=[ 684], 99.95th=[ 684], 00:26:33.512 | 99.99th=[ 684] 00:26:33.512 bw ( KiB/s): min=23040, max=173568, per=8.01%, avg=67942.40, stdev=38561.56, samples=20 00:26:33.512 iops : min= 90, max= 678, avg=265.40, stdev=150.63, samples=20 00:26:33.512 lat (msec) : 20=0.52%, 50=2.80%, 100=12.85%, 250=42.69%, 500=35.63% 00:26:33.512 lat (msec) : 750=5.52% 00:26:33.512 cpu : usr=0.09%, sys=0.76%, ctx=409, majf=0, minf=4098 00:26:33.512 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:26:33.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.512 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:33.512 issued rwts: total=2717,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.513 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:33.513 job6: (groupid=0, jobs=1): err= 0: pid=1663895: Sun Oct 13 01:37:17 2024 00:26:33.513 read: IOPS=242, BW=60.6MiB/s (63.5MB/s)(614MiB/10143msec) 00:26:33.513 slat (usec): min=9, max=197354, avg=2335.96, stdev=13306.44 00:26:33.513 clat (msec): min=34, max=850, 
avg=261.67, stdev=208.93 00:26:33.513 lat (msec): min=34, max=888, avg=264.00, stdev=211.36 00:26:33.513 clat percentiles (msec): 00:26:33.513 | 1.00th=[ 49], 5.00th=[ 55], 10.00th=[ 63], 20.00th=[ 73], 00:26:33.513 | 30.00th=[ 84], 40.00th=[ 129], 50.00th=[ 209], 60.00th=[ 284], 00:26:33.513 | 70.00th=[ 347], 80.00th=[ 451], 90.00th=[ 567], 95.00th=[ 718], 00:26:33.513 | 99.00th=[ 802], 99.50th=[ 818], 99.90th=[ 844], 99.95th=[ 852], 00:26:33.513 | 99.99th=[ 852] 00:26:33.513 bw ( KiB/s): min=20992, max=217088, per=7.22%, avg=61260.80, stdev=51750.88, samples=20 00:26:33.513 iops : min= 82, max= 848, avg=239.30, stdev=202.15, samples=20 00:26:33.513 lat (msec) : 50=1.67%, 100=34.11%, 250=19.98%, 500=29.10%, 750=11.48% 00:26:33.513 lat (msec) : 1000=3.66% 00:26:33.513 cpu : usr=0.17%, sys=0.64%, ctx=418, majf=0, minf=4097 00:26:33.513 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 00:26:33.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.513 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:33.513 issued rwts: total=2457,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.513 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:33.513 job7: (groupid=0, jobs=1): err= 0: pid=1663896: Sun Oct 13 01:37:17 2024 00:26:33.513 read: IOPS=341, BW=85.3MiB/s (89.4MB/s)(857MiB/10050msec) 00:26:33.513 slat (usec): min=9, max=342454, avg=1903.39, stdev=11797.61 00:26:33.513 clat (msec): min=2, max=845, avg=185.51, stdev=163.77 00:26:33.513 lat (msec): min=2, max=1061, avg=187.41, stdev=165.70 00:26:33.513 clat percentiles (msec): 00:26:33.513 | 1.00th=[ 4], 5.00th=[ 21], 10.00th=[ 46], 20.00th=[ 75], 00:26:33.513 | 30.00th=[ 91], 40.00th=[ 112], 50.00th=[ 123], 60.00th=[ 146], 00:26:33.513 | 70.00th=[ 184], 80.00th=[ 266], 90.00th=[ 464], 95.00th=[ 542], 00:26:33.513 | 99.00th=[ 735], 99.50th=[ 751], 99.90th=[ 835], 99.95th=[ 844], 00:26:33.513 | 99.99th=[ 844] 00:26:33.513 bw ( KiB/s): min=19456, max=197632, per=10.15%, avg=86169.60, stdev=50558.98, samples=20 00:26:33.513 iops : min= 76, max= 772, avg=336.60, stdev=197.50, samples=20 00:26:33.513 lat (msec) : 4=1.28%, 10=1.46%, 20=1.92%, 50=6.50%, 100=22.02% 00:26:33.513 lat (msec) : 250=45.93%, 500=13.56%, 750=6.85%, 1000=0.47% 00:26:33.513 cpu : usr=0.24%, sys=1.10%, ctx=892, majf=0, minf=4097 00:26:33.513 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:26:33.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.513 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:33.513 issued rwts: total=3429,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.513 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:33.513 job8: (groupid=0, jobs=1): err= 0: pid=1663897: Sun Oct 13 01:37:17 2024 00:26:33.513 read: IOPS=558, BW=140MiB/s (146MB/s)(1406MiB/10064msec) 00:26:33.513 slat (usec): min=12, max=135828, avg=1775.96, stdev=7676.33 00:26:33.513 clat (msec): min=23, max=499, avg=112.70, stdev=103.37 00:26:33.513 lat (msec): min=23, max=499, avg=114.48, stdev=104.90 00:26:33.513 clat percentiles (msec): 00:26:33.513 | 1.00th=[ 31], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 40], 00:26:33.513 | 30.00th=[ 42], 40.00th=[ 46], 50.00th=[ 55], 60.00th=[ 94], 00:26:33.513 | 70.00th=[ 113], 80.00th=[ 194], 90.00th=[ 300], 95.00th=[ 342], 00:26:33.513 | 99.00th=[ 401], 99.50th=[ 414], 99.90th=[ 498], 99.95th=[ 498], 00:26:33.513 | 99.99th=[ 502] 00:26:33.513 bw ( KiB/s): min=40448, 
max=413184, per=16.77%, avg=142284.80, stdev=118675.45, samples=20 00:26:33.513 iops : min= 158, max= 1614, avg=555.80, stdev=463.58, samples=20 00:26:33.513 lat (msec) : 50=47.53%, 100=15.69%, 250=22.11%, 500=14.67% 00:26:33.513 cpu : usr=0.41%, sys=2.10%, ctx=776, majf=0, minf=4097 00:26:33.513 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:26:33.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.513 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:33.513 issued rwts: total=5622,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.513 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:33.513 job9: (groupid=0, jobs=1): err= 0: pid=1663898: Sun Oct 13 01:37:17 2024 00:26:33.513 read: IOPS=259, BW=64.8MiB/s (68.0MB/s)(651MiB/10043msec) 00:26:33.513 slat (usec): min=13, max=185824, avg=3836.43, stdev=15881.81 00:26:33.513 clat (msec): min=29, max=872, avg=242.74, stdev=182.73 00:26:33.513 lat (msec): min=29, max=873, avg=246.57, stdev=185.46 00:26:33.513 clat percentiles (msec): 00:26:33.513 | 1.00th=[ 41], 5.00th=[ 60], 10.00th=[ 71], 20.00th=[ 87], 00:26:33.513 | 30.00th=[ 95], 40.00th=[ 110], 50.00th=[ 176], 60.00th=[ 249], 00:26:33.513 | 70.00th=[ 330], 80.00th=[ 409], 90.00th=[ 523], 95.00th=[ 584], 00:26:33.513 | 99.00th=[ 751], 99.50th=[ 793], 99.90th=[ 802], 99.95th=[ 877], 00:26:33.513 | 99.99th=[ 877] 00:26:33.513 bw ( KiB/s): min=20992, max=181760, per=7.67%, avg=65075.20, stdev=49352.27, samples=20 00:26:33.513 iops : min= 82, max= 710, avg=254.20, stdev=192.78, samples=20 00:26:33.513 lat (msec) : 50=2.34%, 100=30.94%, 250=26.87%, 500=27.56%, 750=11.25% 00:26:33.513 lat (msec) : 1000=1.04% 00:26:33.513 cpu : usr=0.09%, sys=1.03%, ctx=336, majf=0, minf=4098 00:26:33.513 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:26:33.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.513 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:33.513 issued rwts: total=2605,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.513 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:33.513 job10: (groupid=0, jobs=1): err= 0: pid=1663900: Sun Oct 13 01:37:17 2024 00:26:33.513 read: IOPS=417, BW=104MiB/s (110MB/s)(1060MiB/10142msec) 00:26:33.513 slat (usec): min=9, max=101085, avg=2218.43, stdev=8259.73 00:26:33.513 clat (msec): min=23, max=678, avg=150.77, stdev=132.42 00:26:33.513 lat (msec): min=23, max=678, avg=152.99, stdev=134.25 00:26:33.513 clat percentiles (msec): 00:26:33.513 | 1.00th=[ 31], 5.00th=[ 33], 10.00th=[ 37], 20.00th=[ 44], 00:26:33.513 | 30.00th=[ 49], 40.00th=[ 96], 50.00th=[ 113], 60.00th=[ 136], 00:26:33.513 | 70.00th=[ 176], 80.00th=[ 220], 90.00th=[ 338], 95.00th=[ 477], 00:26:33.513 | 99.00th=[ 592], 99.50th=[ 609], 99.90th=[ 667], 99.95th=[ 676], 00:26:33.513 | 99.99th=[ 676] 00:26:33.513 bw ( KiB/s): min=23040, max=392704, per=12.59%, avg=106880.00, stdev=92810.71, samples=20 00:26:33.513 iops : min= 90, max= 1534, avg=417.50, stdev=362.54, samples=20 00:26:33.513 lat (msec) : 50=30.25%, 100=12.08%, 250=42.26%, 500=11.94%, 750=3.47% 00:26:33.513 cpu : usr=0.24%, sys=1.41%, ctx=654, majf=0, minf=4097 00:26:33.513 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:26:33.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:33.513 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:33.513 issued rwts: 
total=4238,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:33.513 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:33.513 00:26:33.513 Run status group 0 (all jobs): 00:26:33.513 READ: bw=829MiB/s (869MB/s), 42.4MiB/s-140MiB/s (44.4MB/s-146MB/s), io=8406MiB (8814MB), run=10043-10143msec 00:26:33.513 00:26:33.513 Disk stats (read/write): 00:26:33.513 nvme0n1: ios=3290/0, merge=0/0, ticks=1188436/0, in_queue=1188436, util=97.41% 00:26:33.513 nvme10n1: ios=3762/0, merge=0/0, ticks=1205730/0, in_queue=1205730, util=97.60% 00:26:33.513 nvme1n1: ios=3427/0, merge=0/0, ticks=1213891/0, in_queue=1213891, util=97.86% 00:26:33.513 nvme2n1: ios=4646/0, merge=0/0, ticks=1242390/0, in_queue=1242390, util=97.97% 00:26:33.513 nvme3n1: ios=9270/0, merge=0/0, ticks=1210007/0, in_queue=1210007, util=98.05% 00:26:33.513 nvme4n1: ios=5281/0, merge=0/0, ticks=1241463/0, in_queue=1241463, util=98.36% 00:26:33.513 nvme5n1: ios=4787/0, merge=0/0, ticks=1219640/0, in_queue=1219640, util=98.50% 00:26:33.513 nvme6n1: ios=6689/0, merge=0/0, ticks=1240150/0, in_queue=1240150, util=98.59% 00:26:33.513 nvme7n1: ios=11063/0, merge=0/0, ticks=1239540/0, in_queue=1239540, util=98.98% 00:26:33.513 nvme8n1: ios=5027/0, merge=0/0, ticks=1243859/0, in_queue=1243859, util=99.14% 00:26:33.513 nvme9n1: ios=8308/0, merge=0/0, ticks=1232689/0, in_queue=1232689, util=99.26% 00:26:33.513 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:26:33.513 [global] 00:26:33.513 thread=1 00:26:33.513 invalidate=1 00:26:33.513 rw=randwrite 00:26:33.513 time_based=1 00:26:33.513 runtime=10 00:26:33.513 ioengine=libaio 00:26:33.513 direct=1 00:26:33.513 bs=262144 00:26:33.513 iodepth=64 00:26:33.513 norandommap=1 00:26:33.513 numjobs=1 00:26:33.513 00:26:33.513 [job0] 00:26:33.513 filename=/dev/nvme0n1 00:26:33.513 [job1] 00:26:33.513 filename=/dev/nvme10n1 00:26:33.513 [job2] 00:26:33.513 filename=/dev/nvme1n1 00:26:33.513 [job3] 00:26:33.513 filename=/dev/nvme2n1 00:26:33.513 [job4] 00:26:33.513 filename=/dev/nvme3n1 00:26:33.513 [job5] 00:26:33.513 filename=/dev/nvme4n1 00:26:33.513 [job6] 00:26:33.513 filename=/dev/nvme5n1 00:26:33.513 [job7] 00:26:33.513 filename=/dev/nvme6n1 00:26:33.513 [job8] 00:26:33.513 filename=/dev/nvme7n1 00:26:33.513 [job9] 00:26:33.513 filename=/dev/nvme8n1 00:26:33.513 [job10] 00:26:33.513 filename=/dev/nvme9n1 00:26:33.513 Could not set queue depth (nvme0n1) 00:26:33.513 Could not set queue depth (nvme10n1) 00:26:33.513 Could not set queue depth (nvme1n1) 00:26:33.513 Could not set queue depth (nvme2n1) 00:26:33.513 Could not set queue depth (nvme3n1) 00:26:33.513 Could not set queue depth (nvme4n1) 00:26:33.513 Could not set queue depth (nvme5n1) 00:26:33.513 Could not set queue depth (nvme6n1) 00:26:33.513 Could not set queue depth (nvme7n1) 00:26:33.513 Could not set queue depth (nvme8n1) 00:26:33.513 Could not set queue depth (nvme9n1) 00:26:33.513 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:33.513 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:33.513 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:33.513 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 
00:26:33.513 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:33.513 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:33.513 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:33.514 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:33.514 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:33.514 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:33.514 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:33.514 fio-3.35 00:26:33.514 Starting 11 threads 00:26:43.495 00:26:43.495 job0: (groupid=0, jobs=1): err= 0: pid=1664626: Sun Oct 13 01:37:28 2024 00:26:43.495 write: IOPS=355, BW=89.0MiB/s (93.3MB/s)(911MiB/10240msec); 0 zone resets 00:26:43.495 slat (usec): min=19, max=93951, avg=1967.13, stdev=5678.03 00:26:43.495 clat (usec): min=1373, max=582990, avg=177648.12, stdev=109963.62 00:26:43.495 lat (usec): min=1446, max=583019, avg=179615.25, stdev=111385.36 00:26:43.495 clat percentiles (msec): 00:26:43.495 | 1.00th=[ 5], 5.00th=[ 17], 10.00th=[ 27], 20.00th=[ 66], 00:26:43.495 | 30.00th=[ 110], 40.00th=[ 140], 50.00th=[ 182], 60.00th=[ 209], 00:26:43.495 | 70.00th=[ 236], 80.00th=[ 275], 90.00th=[ 317], 95.00th=[ 368], 00:26:43.495 | 99.00th=[ 447], 99.50th=[ 493], 99.90th=[ 567], 99.95th=[ 584], 00:26:43.495 | 99.99th=[ 584] 00:26:43.495 bw ( KiB/s): min=45056, max=185344, per=9.02%, avg=91705.70, stdev=42697.21, samples=20 00:26:43.495 iops : min= 176, max= 724, avg=358.20, stdev=166.80, samples=20 00:26:43.495 lat (msec) : 2=0.05%, 4=0.69%, 10=2.39%, 20=3.59%, 50=9.71% 00:26:43.495 lat (msec) : 100=10.84%, 250=46.39%, 500=25.84%, 750=0.49% 00:26:43.495 cpu : usr=1.05%, sys=1.33%, ctx=2073, majf=0, minf=1 00:26:43.495 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:26:43.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.495 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:43.495 issued rwts: total=0,3645,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.495 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:43.495 job1: (groupid=0, jobs=1): err= 0: pid=1664638: Sun Oct 13 01:37:28 2024 00:26:43.495 write: IOPS=474, BW=119MiB/s (124MB/s)(1214MiB/10235msec); 0 zone resets 00:26:43.495 slat (usec): min=21, max=89265, avg=1284.02, stdev=3846.80 00:26:43.495 clat (usec): min=720, max=567607, avg=133554.67, stdev=96038.05 00:26:43.495 lat (usec): min=747, max=571128, avg=134838.69, stdev=96744.91 00:26:43.495 clat percentiles (usec): 00:26:43.495 | 1.00th=[ 1942], 5.00th=[ 19530], 10.00th=[ 38536], 20.00th=[ 49546], 00:26:43.495 | 30.00th=[ 69731], 40.00th=[ 91751], 50.00th=[109577], 60.00th=[128451], 00:26:43.495 | 70.00th=[168821], 80.00th=[227541], 90.00th=[254804], 95.00th=[304088], 00:26:43.495 | 99.00th=[476054], 99.50th=[517997], 99.90th=[549454], 99.95th=[557843], 00:26:43.495 | 99.99th=[566232] 00:26:43.495 bw ( KiB/s): min=59392, max=229888, per=12.07%, avg=122682.50, stdev=58308.14, samples=20 00:26:43.495 iops : min= 232, max= 898, avg=479.20, stdev=227.79, samples=20 00:26:43.495 lat (usec) : 
750=0.04%, 1000=0.21% 00:26:43.495 lat (msec) : 2=0.76%, 4=0.19%, 10=0.74%, 20=3.32%, 50=14.81% 00:26:43.495 lat (msec) : 100=23.23%, 250=45.17%, 500=10.88%, 750=0.66% 00:26:43.495 cpu : usr=1.53%, sys=1.55%, ctx=2682, majf=0, minf=1 00:26:43.495 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:26:43.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.495 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:43.495 issued rwts: total=0,4855,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.495 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:43.495 job2: (groupid=0, jobs=1): err= 0: pid=1664639: Sun Oct 13 01:37:28 2024 00:26:43.495 write: IOPS=261, BW=65.4MiB/s (68.5MB/s)(670MiB/10244msec); 0 zone resets 00:26:43.495 slat (usec): min=24, max=65560, avg=3406.69, stdev=7525.77 00:26:43.495 clat (msec): min=2, max=600, avg=241.16, stdev=120.19 00:26:43.495 lat (msec): min=3, max=600, avg=244.56, stdev=121.47 00:26:43.495 clat percentiles (msec): 00:26:43.495 | 1.00th=[ 33], 5.00th=[ 102], 10.00th=[ 128], 20.00th=[ 150], 00:26:43.495 | 30.00th=[ 163], 40.00th=[ 182], 50.00th=[ 209], 60.00th=[ 239], 00:26:43.495 | 70.00th=[ 271], 80.00th=[ 338], 90.00th=[ 418], 95.00th=[ 502], 00:26:43.495 | 99.00th=[ 575], 99.50th=[ 584], 99.90th=[ 600], 99.95th=[ 600], 00:26:43.495 | 99.99th=[ 600] 00:26:43.495 bw ( KiB/s): min=25600, max=113152, per=6.59%, avg=66929.00, stdev=27948.17, samples=20 00:26:43.495 iops : min= 100, max= 442, avg=261.40, stdev=109.11, samples=20 00:26:43.495 lat (msec) : 4=0.07%, 10=0.15%, 20=0.19%, 50=1.49%, 100=2.95% 00:26:43.495 lat (msec) : 250=58.89%, 500=31.14%, 750=5.12% 00:26:43.495 cpu : usr=0.96%, sys=0.66%, ctx=874, majf=0, minf=1 00:26:43.495 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:26:43.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.495 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:43.495 issued rwts: total=0,2678,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.495 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:43.495 job3: (groupid=0, jobs=1): err= 0: pid=1664640: Sun Oct 13 01:37:28 2024 00:26:43.495 write: IOPS=354, BW=88.5MiB/s (92.8MB/s)(905MiB/10224msec); 0 zone resets 00:26:43.495 slat (usec): min=19, max=200152, avg=1676.39, stdev=7560.61 00:26:43.495 clat (msec): min=4, max=611, avg=178.91, stdev=124.76 00:26:43.495 lat (msec): min=4, max=611, avg=180.59, stdev=126.19 00:26:43.495 clat percentiles (msec): 00:26:43.495 | 1.00th=[ 9], 5.00th=[ 19], 10.00th=[ 33], 20.00th=[ 70], 00:26:43.495 | 30.00th=[ 106], 40.00th=[ 125], 50.00th=[ 144], 60.00th=[ 184], 00:26:43.495 | 70.00th=[ 230], 80.00th=[ 279], 90.00th=[ 372], 95.00th=[ 409], 00:26:43.495 | 99.00th=[ 531], 99.50th=[ 550], 99.90th=[ 592], 99.95th=[ 592], 00:26:43.495 | 99.99th=[ 609] 00:26:43.495 bw ( KiB/s): min=29696, max=239104, per=8.96%, avg=91074.65, stdev=48195.14, samples=20 00:26:43.495 iops : min= 116, max= 934, avg=355.75, stdev=188.25, samples=20 00:26:43.495 lat (msec) : 10=1.63%, 20=3.92%, 50=9.25%, 100=13.09%, 250=45.00% 00:26:43.495 lat (msec) : 500=25.36%, 750=1.74% 00:26:43.495 cpu : usr=1.15%, sys=1.32%, ctx=2246, majf=0, minf=1 00:26:43.495 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:26:43.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.495 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, 
>=64=0.0% 00:26:43.495 issued rwts: total=0,3620,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.495 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:43.495 job4: (groupid=0, jobs=1): err= 0: pid=1664641: Sun Oct 13 01:37:28 2024 00:26:43.495 write: IOPS=325, BW=81.4MiB/s (85.3MB/s)(830MiB/10196msec); 0 zone resets 00:26:43.495 slat (usec): min=24, max=54183, avg=2701.42, stdev=6614.29 00:26:43.495 clat (msec): min=4, max=586, avg=193.81, stdev=138.17 00:26:43.495 lat (msec): min=4, max=586, avg=196.51, stdev=139.97 00:26:43.495 clat percentiles (msec): 00:26:43.495 | 1.00th=[ 31], 5.00th=[ 42], 10.00th=[ 45], 20.00th=[ 54], 00:26:43.495 | 30.00th=[ 109], 40.00th=[ 140], 50.00th=[ 161], 60.00th=[ 188], 00:26:43.495 | 70.00th=[ 245], 80.00th=[ 317], 90.00th=[ 405], 95.00th=[ 464], 00:26:43.495 | 99.00th=[ 550], 99.50th=[ 567], 99.90th=[ 584], 99.95th=[ 584], 00:26:43.495 | 99.99th=[ 584] 00:26:43.495 bw ( KiB/s): min=26624, max=280576, per=8.20%, avg=83317.30, stdev=66682.52, samples=20 00:26:43.495 iops : min= 104, max= 1096, avg=325.45, stdev=260.48, samples=20 00:26:43.495 lat (msec) : 10=0.03%, 20=0.12%, 50=17.57%, 100=11.72%, 250=41.61% 00:26:43.495 lat (msec) : 500=25.58%, 750=3.37% 00:26:43.495 cpu : usr=0.95%, sys=1.01%, ctx=1215, majf=0, minf=2 00:26:43.495 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:26:43.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.495 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:43.495 issued rwts: total=0,3319,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.495 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:43.495 job5: (groupid=0, jobs=1): err= 0: pid=1664643: Sun Oct 13 01:37:28 2024 00:26:43.495 write: IOPS=618, BW=155MiB/s (162MB/s)(1553MiB/10044msec); 0 zone resets 00:26:43.495 slat (usec): min=17, max=43228, avg=1065.29, stdev=3236.77 00:26:43.495 clat (usec): min=896, max=593307, avg=102367.75, stdev=98697.98 00:26:43.495 lat (usec): min=953, max=602250, avg=103433.04, stdev=99389.28 00:26:43.495 clat percentiles (msec): 00:26:43.495 | 1.00th=[ 3], 5.00th=[ 20], 10.00th=[ 37], 20.00th=[ 44], 00:26:43.495 | 30.00th=[ 47], 40.00th=[ 49], 50.00th=[ 57], 60.00th=[ 78], 00:26:43.495 | 70.00th=[ 107], 80.00th=[ 161], 90.00th=[ 241], 95.00th=[ 300], 00:26:43.495 | 99.00th=[ 485], 99.50th=[ 514], 99.90th=[ 584], 99.95th=[ 584], 00:26:43.495 | 99.99th=[ 592] 00:26:43.495 bw ( KiB/s): min=36864, max=336896, per=15.49%, avg=157449.30, stdev=97087.91, samples=20 00:26:43.496 iops : min= 144, max= 1316, avg=615.00, stdev=379.27, samples=20 00:26:43.496 lat (usec) : 1000=0.06% 00:26:43.496 lat (msec) : 2=0.53%, 4=0.77%, 10=1.95%, 20=1.92%, 50=39.21% 00:26:43.496 lat (msec) : 100=23.16%, 250=23.43%, 500=8.18%, 750=0.79% 00:26:43.496 cpu : usr=1.95%, sys=2.00%, ctx=2931, majf=0, minf=1 00:26:43.496 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:26:43.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.496 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:43.496 issued rwts: total=0,6213,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.496 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:43.496 job6: (groupid=0, jobs=1): err= 0: pid=1664644: Sun Oct 13 01:37:28 2024 00:26:43.496 write: IOPS=261, BW=65.5MiB/s (68.7MB/s)(667MiB/10190msec); 0 zone resets 00:26:43.496 slat (usec): min=26, max=101419, avg=2788.29, stdev=7360.23 00:26:43.496 clat 
(usec): min=1557, max=625746, avg=241428.71, stdev=134716.41 00:26:43.496 lat (usec): min=1641, max=633138, avg=244217.00, stdev=136327.58 00:26:43.496 clat percentiles (msec): 00:26:43.496 | 1.00th=[ 4], 5.00th=[ 24], 10.00th=[ 58], 20.00th=[ 133], 00:26:43.496 | 30.00th=[ 165], 40.00th=[ 194], 50.00th=[ 228], 60.00th=[ 268], 00:26:43.496 | 70.00th=[ 305], 80.00th=[ 372], 90.00th=[ 435], 95.00th=[ 464], 00:26:43.496 | 99.00th=[ 584], 99.50th=[ 609], 99.90th=[ 625], 99.95th=[ 625], 00:26:43.496 | 99.99th=[ 625] 00:26:43.496 bw ( KiB/s): min=32768, max=125952, per=6.56%, avg=66703.95, stdev=26346.56, samples=20 00:26:43.496 iops : min= 128, max= 492, avg=260.55, stdev=102.90, samples=20 00:26:43.496 lat (msec) : 2=0.11%, 4=1.24%, 10=0.34%, 20=2.96%, 50=4.12% 00:26:43.496 lat (msec) : 100=6.41%, 250=39.57%, 500=42.26%, 750=3.00% 00:26:43.496 cpu : usr=0.87%, sys=0.95%, ctx=1393, majf=0, minf=1 00:26:43.496 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:26:43.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.496 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:43.496 issued rwts: total=0,2669,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.496 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:43.496 job7: (groupid=0, jobs=1): err= 0: pid=1664645: Sun Oct 13 01:37:28 2024 00:26:43.496 write: IOPS=297, BW=74.4MiB/s (78.0MB/s)(762MiB/10238msec); 0 zone resets 00:26:43.496 slat (usec): min=19, max=100239, avg=2233.31, stdev=6608.07 00:26:43.496 clat (usec): min=1083, max=647090, avg=212671.53, stdev=139283.12 00:26:43.496 lat (usec): min=1701, max=647174, avg=214904.84, stdev=140775.62 00:26:43.496 clat percentiles (msec): 00:26:43.496 | 1.00th=[ 5], 5.00th=[ 12], 10.00th=[ 21], 20.00th=[ 87], 00:26:43.496 | 30.00th=[ 153], 40.00th=[ 174], 50.00th=[ 192], 60.00th=[ 215], 00:26:43.496 | 70.00th=[ 264], 80.00th=[ 338], 90.00th=[ 418], 95.00th=[ 472], 00:26:43.496 | 99.00th=[ 558], 99.50th=[ 567], 99.90th=[ 575], 99.95th=[ 642], 00:26:43.496 | 99.99th=[ 651] 00:26:43.496 bw ( KiB/s): min=28672, max=194048, per=7.52%, avg=76364.80, stdev=39792.12, samples=20 00:26:43.496 iops : min= 112, max= 758, avg=298.30, stdev=155.44, samples=20 00:26:43.496 lat (msec) : 2=0.07%, 4=0.69%, 10=3.18%, 20=6.07%, 50=6.89% 00:26:43.496 lat (msec) : 100=4.73%, 250=45.39%, 500=29.70%, 750=3.28% 00:26:43.496 cpu : usr=1.12%, sys=0.89%, ctx=1800, majf=0, minf=1 00:26:43.496 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:26:43.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.496 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:43.496 issued rwts: total=0,3047,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.496 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:43.496 job8: (groupid=0, jobs=1): err= 0: pid=1664651: Sun Oct 13 01:37:28 2024 00:26:43.496 write: IOPS=371, BW=92.8MiB/s (97.3MB/s)(951MiB/10242msec); 0 zone resets 00:26:43.496 slat (usec): min=18, max=134178, avg=1599.05, stdev=5526.73 00:26:43.496 clat (usec): min=1141, max=520151, avg=170598.90, stdev=114055.51 00:26:43.496 lat (usec): min=1238, max=525243, avg=172197.95, stdev=115283.53 00:26:43.496 clat percentiles (msec): 00:26:43.496 | 1.00th=[ 8], 5.00th=[ 15], 10.00th=[ 30], 20.00th=[ 51], 00:26:43.496 | 30.00th=[ 85], 40.00th=[ 136], 50.00th=[ 176], 60.00th=[ 207], 00:26:43.496 | 70.00th=[ 234], 80.00th=[ 251], 90.00th=[ 292], 95.00th=[ 409], 
00:26:43.496 | 99.00th=[ 493], 99.50th=[ 502], 99.90th=[ 514], 99.95th=[ 514], 00:26:43.496 | 99.99th=[ 523] 00:26:43.496 bw ( KiB/s): min=46592, max=148992, per=9.42%, avg=95750.10, stdev=28509.77, samples=20 00:26:43.496 iops : min= 182, max= 582, avg=374.00, stdev=111.40, samples=20 00:26:43.496 lat (msec) : 2=0.05%, 4=0.24%, 10=2.21%, 20=4.65%, 50=12.78% 00:26:43.496 lat (msec) : 100=13.88%, 250=45.52%, 500=20.01%, 750=0.66% 00:26:43.496 cpu : usr=1.17%, sys=1.38%, ctx=2490, majf=0, minf=1 00:26:43.496 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:26:43.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.496 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:43.496 issued rwts: total=0,3803,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.496 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:43.496 job9: (groupid=0, jobs=1): err= 0: pid=1664655: Sun Oct 13 01:37:28 2024 00:26:43.496 write: IOPS=340, BW=85.0MiB/s (89.2MB/s)(867MiB/10197msec); 0 zone resets 00:26:43.496 slat (usec): min=16, max=136586, avg=2090.01, stdev=7308.12 00:26:43.496 clat (msec): min=3, max=611, avg=185.99, stdev=117.49 00:26:43.496 lat (msec): min=3, max=611, avg=188.08, stdev=119.08 00:26:43.496 clat percentiles (msec): 00:26:43.496 | 1.00th=[ 11], 5.00th=[ 34], 10.00th=[ 52], 20.00th=[ 70], 00:26:43.496 | 30.00th=[ 110], 40.00th=[ 131], 50.00th=[ 171], 60.00th=[ 224], 00:26:43.496 | 70.00th=[ 239], 80.00th=[ 257], 90.00th=[ 368], 95.00th=[ 422], 00:26:43.496 | 99.00th=[ 481], 99.50th=[ 535], 99.90th=[ 592], 99.95th=[ 600], 00:26:43.496 | 99.99th=[ 609] 00:26:43.496 bw ( KiB/s): min=34816, max=230400, per=8.58%, avg=87168.00, stdev=43988.70, samples=20 00:26:43.496 iops : min= 136, max= 900, avg=340.50, stdev=171.83, samples=20 00:26:43.496 lat (msec) : 4=0.03%, 10=0.81%, 20=1.87%, 50=6.31%, 100=19.43% 00:26:43.496 lat (msec) : 250=46.97%, 500=23.70%, 750=0.87% 00:26:43.496 cpu : usr=1.13%, sys=1.12%, ctx=1747, majf=0, minf=1 00:26:43.496 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:26:43.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.496 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:43.496 issued rwts: total=0,3468,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.496 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:43.496 job10: (groupid=0, jobs=1): err= 0: pid=1664656: Sun Oct 13 01:37:28 2024 00:26:43.496 write: IOPS=328, BW=82.0MiB/s (86.0MB/s)(836MiB/10197msec); 0 zone resets 00:26:43.496 slat (usec): min=24, max=43805, avg=2812.08, stdev=6537.21 00:26:43.496 clat (msec): min=16, max=541, avg=192.12, stdev=129.32 00:26:43.496 lat (msec): min=16, max=541, avg=194.93, stdev=131.15 00:26:43.496 clat percentiles (msec): 00:26:43.496 | 1.00th=[ 44], 5.00th=[ 46], 10.00th=[ 47], 20.00th=[ 49], 00:26:43.496 | 30.00th=[ 86], 40.00th=[ 138], 50.00th=[ 171], 60.00th=[ 207], 00:26:43.496 | 70.00th=[ 266], 80.00th=[ 313], 90.00th=[ 397], 95.00th=[ 426], 00:26:43.496 | 99.00th=[ 493], 99.50th=[ 506], 99.90th=[ 518], 99.95th=[ 542], 00:26:43.496 | 99.99th=[ 542] 00:26:43.496 bw ( KiB/s): min=34816, max=331264, per=8.27%, avg=84002.95, stdev=70202.19, samples=20 00:26:43.496 iops : min= 136, max= 1294, avg=328.10, stdev=274.22, samples=20 00:26:43.496 lat (msec) : 20=0.06%, 50=22.75%, 100=10.16%, 250=34.08%, 500=32.23% 00:26:43.496 lat (msec) : 750=0.72% 00:26:43.496 cpu : usr=1.08%, sys=1.03%, ctx=1010, 
majf=0, minf=1 00:26:43.496 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:26:43.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.496 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:43.496 issued rwts: total=0,3345,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.496 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:43.496 00:26:43.496 Run status group 0 (all jobs): 00:26:43.496 WRITE: bw=992MiB/s (1041MB/s), 65.4MiB/s-155MiB/s (68.5MB/s-162MB/s), io=9.93GiB (10.7GB), run=10044-10244msec 00:26:43.496 00:26:43.496 Disk stats (read/write): 00:26:43.496 nvme0n1: ios=47/7248, merge=0/0, ticks=2373/1238739, in_queue=1241112, util=99.96% 00:26:43.496 nvme10n1: ios=41/9665, merge=0/0, ticks=38/1247958, in_queue=1247996, util=97.55% 00:26:43.496 nvme1n1: ios=42/5309, merge=0/0, ticks=1350/1228865, in_queue=1230215, util=100.00% 00:26:43.496 nvme2n1: ios=47/7211, merge=0/0, ticks=1760/1223209, in_queue=1224969, util=100.00% 00:26:43.496 nvme3n1: ios=0/6623, merge=0/0, ticks=0/1238578, in_queue=1238578, util=97.89% 00:26:43.496 nvme4n1: ios=0/12203, merge=0/0, ticks=0/1225308, in_queue=1225308, util=98.14% 00:26:43.496 nvme5n1: ios=0/5331, merge=0/0, ticks=0/1248038, in_queue=1248038, util=98.33% 00:26:43.496 nvme6n1: ios=0/6045, merge=0/0, ticks=0/1241892, in_queue=1241892, util=98.38% 00:26:43.496 nvme7n1: ios=39/7561, merge=0/0, ticks=961/1245557, in_queue=1246518, util=100.00% 00:26:43.496 nvme8n1: ios=44/6922, merge=0/0, ticks=2271/1222859, in_queue=1225130, util=100.00% 00:26:43.496 nvme9n1: ios=40/6672, merge=0/0, ticks=2049/1236304, in_queue=1238353, util=100.00% 00:26:43.496 01:37:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:43.496 01:37:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:43.496 01:37:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:43.496 01:37:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:43.496 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:43.496 01:37:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:43.496 01:37:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:43.496 01:37:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:43.496 01:37:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:26:43.496 01:37:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:43.496 01:37:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:26:43.496 01:37:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:43.496 01:37:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:43.496 01:37:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.496 01:37:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:26:43.496 01:37:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.496 01:37:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:43.496 01:37:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:43.496 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:43.497 01:37:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:43.497 01:37:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:43.497 01:37:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:43.497 01:37:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:26:43.497 01:37:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:43.497 01:37:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:26:43.497 01:37:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:43.497 01:37:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:43.497 01:37:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.497 01:37:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:43.497 01:37:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.497 01:37:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:43.497 01:37:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:43.755 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:43.755 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:43.755 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:43.755 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:43.755 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:26:43.755 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:43.755 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:26:43.755 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:43.755 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:43.755 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:43.755 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:26:43.755 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:43.755 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:43.755 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:44.013 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:44.013 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:44.013 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:44.013 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:44.013 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:26:44.013 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:44.013 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:26:44.013 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:44.013 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:44.013 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.013 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:44.013 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.013 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:44.013 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:44.013 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:44.013 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:44.013 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:44.013 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:44.013 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:26:44.271 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:44.271 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:26:44.271 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:44.272 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:44.272 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.272 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:26:44.272 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.272 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:44.272 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:44.272 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:44.272 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:44.272 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:44.272 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:44.272 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:26:44.272 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:44.272 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:26:44.272 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:44.272 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:44.272 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.272 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:44.272 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.272 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:44.272 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:44.530 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:44.530 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:44.530 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:44.530 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:44.530 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:26:44.530 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:44.530 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:26:44.530 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:44.530 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:44.530 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.530 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:26:44.788 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.788 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:44.788 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:44.788 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:44.788 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:44.788 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:44.788 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:44.788 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:26:44.788 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:44.788 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:26:44.788 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:44.788 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:44.788 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.788 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:44.788 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.788 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:44.788 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:45.046 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:45.046 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:45.046 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:45.046 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:45.046 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:26:45.046 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:45.046 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:26:45.046 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:45.046 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:45.046 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.046 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:26:45.046 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.046 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:45.046 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:45.046 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:45.046 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:45.046 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:45.046 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:45.046 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:26:45.046 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:45.046 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:26:45.046 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:45.046 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:45.046 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.046 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:45.046 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.046 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:45.046 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:45.305 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:45.305 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:45.305 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:26:45.305 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:45.305 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:26:45.305 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:45.305 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:26:45.305 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:26:45.305 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:45.305 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.305 01:37:30 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:45.305 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.305 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:45.305 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:45.305 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:45.305 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:45.305 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:26:45.305 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:45.305 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:26:45.305 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:45.305 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:45.305 rmmod nvme_tcp 00:26:45.305 rmmod nvme_fabrics 00:26:45.305 rmmod nvme_keyring 00:26:45.305 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:45.305 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:26:45.305 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:26:45.305 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@515 -- # '[' -n 1659628 ']' 00:26:45.305 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # killprocess 1659628 00:26:45.305 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 1659628 ']' 00:26:45.305 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # kill -0 1659628 00:26:45.305 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # uname 00:26:45.305 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:45.305 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1659628 00:26:45.305 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:45.305 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:45.305 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1659628' 00:26:45.305 killing process with pid 1659628 00:26:45.305 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@969 -- # kill 1659628 00:26:45.305 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@974 -- # wait 1659628 00:26:45.871 01:37:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:45.871 01:37:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:45.871 01:37:31 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:45.871 01:37:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:26:45.871 01:37:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:45.871 01:37:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@789 -- # iptables-save 00:26:45.871 01:37:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@789 -- # iptables-restore 00:26:45.871 01:37:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:45.871 01:37:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:45.871 01:37:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:45.871 01:37:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:45.871 01:37:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:47.777 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:47.777 00:26:47.777 real 1m0.879s 00:26:47.777 user 3m32.372s 00:26:47.777 sys 0m17.618s 00:26:47.777 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:47.777 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:47.777 ************************************ 00:26:47.777 END TEST nvmf_multiconnection 00:26:47.777 ************************************ 00:26:48.036 01:37:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:48.036 01:37:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:48.036 01:37:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:48.036 01:37:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:48.036 ************************************ 00:26:48.036 START TEST nvmf_initiator_timeout 00:26:48.036 ************************************ 00:26:48.036 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:48.036 * Looking for test storage... 
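The multiconnection teardown traced above repeats one three-step pattern per subsystem: disconnect the initiator-side controller, wait for the namespace serial to vanish from lsblk, then delete the subsystem over RPC. A minimal standalone sketch of that pattern (NQN and serial copied from the log; rpc.py assumed to be SPDK's scripts/rpc.py on PATH talking to the default /var/tmp/spdk.sock) might look like:

  #!/usr/bin/env bash
  # Illustrative sketch of the disconnect/delete cycle seen in the trace above; not the test script itself.
  nqn="nqn.2016-06.io.spdk:cnode10"    # example NQN taken from the log
  serial="SPDK10"                      # matching namespace serial

  nvme disconnect -n "$nqn"            # drop the initiator-side controller

  # Poll until the serial no longer shows up in lsblk, mirroring waitforserial_disconnect.
  for _ in $(seq 1 15); do
      lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || break
      sleep 1
  done

  rpc.py nvmf_delete_subsystem "$nqn"  # remove the subsystem on the target side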
00:26:48.036 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:48.036 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:48.036 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1691 -- # lcov --version 00:26:48.036 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:48.036 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:48.036 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:48.036 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:48.036 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:48.036 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:26:48.036 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:26:48.036 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:26:48.036 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:26:48.036 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:26:48.036 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:26:48.036 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:26:48.036 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:48.036 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:26:48.036 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:26:48.036 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:48.036 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:48.036 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:26:48.036 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:26:48.036 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:48.036 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:26:48.036 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:26:48.036 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:26:48.036 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:26:48.036 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:48.036 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:26:48.036 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:26:48.036 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:48.036 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:48.036 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:26:48.036 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:48.036 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:48.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.036 --rc genhtml_branch_coverage=1 00:26:48.036 --rc genhtml_function_coverage=1 00:26:48.036 --rc genhtml_legend=1 00:26:48.036 --rc geninfo_all_blocks=1 00:26:48.036 --rc geninfo_unexecuted_blocks=1 00:26:48.036 00:26:48.036 ' 00:26:48.036 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:48.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.036 --rc genhtml_branch_coverage=1 00:26:48.036 --rc genhtml_function_coverage=1 00:26:48.036 --rc genhtml_legend=1 00:26:48.036 --rc geninfo_all_blocks=1 00:26:48.036 --rc geninfo_unexecuted_blocks=1 00:26:48.036 00:26:48.036 ' 00:26:48.036 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:48.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.036 --rc genhtml_branch_coverage=1 00:26:48.036 --rc genhtml_function_coverage=1 00:26:48.036 --rc genhtml_legend=1 00:26:48.036 --rc geninfo_all_blocks=1 00:26:48.036 --rc geninfo_unexecuted_blocks=1 00:26:48.036 00:26:48.036 ' 00:26:48.036 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:48.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:48.036 --rc genhtml_branch_coverage=1 00:26:48.036 --rc genhtml_function_coverage=1 00:26:48.036 --rc genhtml_legend=1 00:26:48.036 --rc geninfo_all_blocks=1 00:26:48.036 --rc geninfo_unexecuted_blocks=1 00:26:48.036 00:26:48.036 ' 00:26:48.037 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:48.037 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:48.037 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:48.037 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:48.037 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:48.037 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:48.037 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:48.037 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:48.037 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:48.037 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:48.037 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:48.037 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:48.037 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:48.037 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:48.037 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:48.037 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:48.037 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:48.037 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:48.037 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:48.037 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:26:48.037 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:48.037 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:48.037 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:48.037 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.037 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.037 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.037 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:48.037 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.037 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:26:48.037 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:48.037 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:48.037 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:48.037 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:48.037 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:48.037 01:37:33 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:48.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:48.037 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:48.037 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:48.037 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:48.037 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:48.037 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:48.037 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:48.037 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:48.037 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:48.037 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:48.037 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:48.037 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:48.037 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:48.037 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:48.037 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:48.037 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:48.037 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:48.037 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:26:48.037 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:50.569 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:50.569 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:26:50.569 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:50.569 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:50.569 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:50.569 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:50.569 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:50.569 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:26:50.569 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:50.569 01:37:35 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:26:50.569 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:26:50.569 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:26:50.569 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:26:50.569 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:26:50.569 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:26:50.569 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:50.569 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:50.569 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:50.569 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:50.569 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:50.569 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:50.569 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:50.569 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:50.569 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:50.569 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:50.569 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:50.569 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:50.569 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:50.569 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:50.569 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:50.569 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:50.569 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:50.569 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:50.569 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:50.569 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:50.569 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:50.569 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:50.569 01:37:35 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:50.569 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:50.569 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:50.569 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:50.569 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:50.569 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:50.569 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:50.569 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:50.569 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:50.569 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:50.570 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:50.570 01:37:35 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:50.570 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # is_hw=yes 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:50.570 01:37:35 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:50.570 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:50.570 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:26:50.570 00:26:50.570 --- 10.0.0.2 ping statistics --- 00:26:50.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.570 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:50.570 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:50.570 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:26:50.570 00:26:50.570 --- 10.0.0.1 ping statistics --- 00:26:50.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.570 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # return 0 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # nvmfpid=1667846 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@506 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # waitforlisten 1667846 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # '[' -z 1667846 ']' 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:50.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:50.570 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:50.570 [2024-10-13 01:37:35.762070] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:26:50.570 [2024-10-13 01:37:35.762158] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:50.570 [2024-10-13 01:37:35.830163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:50.570 [2024-10-13 01:37:35.882969] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:50.570 [2024-10-13 01:37:35.883024] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:50.570 [2024-10-13 01:37:35.883040] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:50.570 [2024-10-13 01:37:35.883053] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:50.570 [2024-10-13 01:37:35.883065] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
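The nvmf_tcp_init sequence above amounts to a short series of iproute2 calls that splits the NIC's two ports between the default namespace (initiator, cvl_0_1) and a dedicated namespace (target, cvl_0_0), before launching nvmf_tgt inside that namespace. A rough sketch of the same topology, with device names and addresses taken from the log (run as root; a hedged reconstruction, not the harness code):

  # Target/initiator split used by the test harness above.
  ip netns add cvl_0_0_ns_spdk                   # namespace that will own the target-side port
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk      # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator IP stays in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                             # sanity check: initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # The target application is then started inside the namespace, as in the log:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &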
00:26:50.570 [2024-10-13 01:37:35.884874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:50.570 [2024-10-13 01:37:35.884896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:50.570 [2024-10-13 01:37:35.884977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:50.570 [2024-10-13 01:37:35.884979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:50.570 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:50.570 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # return 0 00:26:50.570 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:50.570 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:50.570 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:50.570 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:50.570 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:50.570 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:50.570 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.570 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:50.570 Malloc0 00:26:50.570 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.571 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:50.571 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.571 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:50.571 Delay0 00:26:50.571 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.571 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:50.571 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.571 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:50.571 [2024-10-13 01:37:36.075975] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:50.571 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.571 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:50.571 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.571 01:37:36 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:50.571 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.571 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:50.571 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.571 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:50.571 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.571 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:50.571 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.571 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:50.571 [2024-10-13 01:37:36.104264] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:50.571 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.571 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:51.504 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:51.504 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:26:51.504 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:51.504 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:51.504 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:26:53.402 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:53.402 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:53.402 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:26:53.402 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:53.402 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:53.402 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:26:53.402 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=1668271 00:26:53.402 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 
1 -t write -r 60 -v 00:26:53.402 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:26:53.402 [global] 00:26:53.402 thread=1 00:26:53.402 invalidate=1 00:26:53.402 rw=write 00:26:53.402 time_based=1 00:26:53.402 runtime=60 00:26:53.402 ioengine=libaio 00:26:53.402 direct=1 00:26:53.402 bs=4096 00:26:53.402 iodepth=1 00:26:53.402 norandommap=0 00:26:53.402 numjobs=1 00:26:53.402 00:26:53.402 verify_dump=1 00:26:53.402 verify_backlog=512 00:26:53.402 verify_state_save=0 00:26:53.402 do_verify=1 00:26:53.402 verify=crc32c-intel 00:26:53.402 [job0] 00:26:53.402 filename=/dev/nvme0n1 00:26:53.402 Could not set queue depth (nvme0n1) 00:26:53.402 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:53.402 fio-3.35 00:26:53.402 Starting 1 thread 00:26:56.680 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:56.680 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.680 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:56.680 true 00:26:56.680 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.680 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:56.680 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.680 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:56.680 true 00:26:56.680 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.680 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:56.680 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.680 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:56.680 true 00:26:56.680 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.680 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:56.680 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.680 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:56.680 true 00:26:56.680 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.680 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:26:59.206 01:37:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:59.206 01:37:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.206 01:37:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout 
-- common/autotest_common.sh@10 -- # set +x 00:26:59.206 true 00:26:59.206 01:37:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.206 01:37:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:59.206 01:37:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.206 01:37:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:59.464 true 00:26:59.464 01:37:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.464 01:37:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:59.464 01:37:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.464 01:37:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:59.464 true 00:26:59.464 01:37:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.464 01:37:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:26:59.464 01:37:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.464 01:37:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:59.464 true 00:26:59.464 01:37:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.464 01:37:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:26:59.464 01:37:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 1668271 00:27:55.731 00:27:55.731 job0: (groupid=0, jobs=1): err= 0: pid=1668340: Sun Oct 13 01:38:39 2024 00:27:55.731 read: IOPS=92, BW=370KiB/s (379kB/s)(21.7MiB/60023msec) 00:27:55.731 slat (usec): min=4, max=16897, avg=15.54, stdev=226.69 00:27:55.731 clat (usec): min=202, max=41028k, avg=10561.27, stdev=550579.22 00:27:55.731 lat (usec): min=208, max=41028k, avg=10576.81, stdev=550579.41 00:27:55.731 clat percentiles (usec): 00:27:55.731 | 1.00th=[ 217], 5.00th=[ 227], 10.00th=[ 233], 00:27:55.731 | 20.00th=[ 241], 30.00th=[ 245], 40.00th=[ 253], 00:27:55.731 | 50.00th=[ 260], 60.00th=[ 269], 70.00th=[ 277], 00:27:55.731 | 80.00th=[ 297], 90.00th=[ 338], 95.00th=[ 41157], 00:27:55.731 | 99.00th=[ 41157], 99.50th=[ 41157], 99.90th=[ 41157], 00:27:55.731 | 99.95th=[ 43779], 99.99th=[17112761] 00:27:55.731 write: IOPS=93, BW=375KiB/s (384kB/s)(22.0MiB/60023msec); 0 zone resets 00:27:55.731 slat (nsec): min=6281, max=60336, avg=12499.84, stdev=6982.15 00:27:55.731 clat (usec): min=157, max=426, avg=206.84, stdev=31.77 00:27:55.731 lat (usec): min=170, max=452, avg=219.34, stdev=36.18 00:27:55.731 clat percentiles (usec): 00:27:55.731 | 1.00th=[ 169], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 184], 00:27:55.731 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 198], 60.00th=[ 204], 00:27:55.731 | 70.00th=[ 215], 80.00th=[ 227], 90.00th=[ 253], 95.00th=[ 265], 00:27:55.731 | 99.00th=[ 326], 99.50th=[ 359], 99.90th=[ 404], 99.95th=[ 424], 00:27:55.731 | 99.99th=[ 429] 
00:27:55.732 bw ( KiB/s): min= 4096, max= 8336, per=100.00%, avg=7509.33, stdev=1676.52, samples=6 00:27:55.732 iops : min= 1024, max= 2084, avg=1877.33, stdev=419.13, samples=6 00:27:55.732 lat (usec) : 250=63.40%, 500=33.02%, 1000=0.01% 00:27:55.732 lat (msec) : 2=0.01%, 50=3.55%, >=2000=0.01% 00:27:55.732 cpu : usr=0.17%, sys=0.30%, ctx=11187, majf=0, minf=1 00:27:55.732 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:55.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.732 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.732 issued rwts: total=5554,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.732 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:55.732 00:27:55.732 Run status group 0 (all jobs): 00:27:55.732 READ: bw=370KiB/s (379kB/s), 370KiB/s-370KiB/s (379kB/s-379kB/s), io=21.7MiB (22.7MB), run=60023-60023msec 00:27:55.732 WRITE: bw=375KiB/s (384kB/s), 375KiB/s-375KiB/s (384kB/s-384kB/s), io=22.0MiB (23.1MB), run=60023-60023msec 00:27:55.732 00:27:55.732 Disk stats (read/write): 00:27:55.732 nvme0n1: ios=5649/5632, merge=0/0, ticks=17514/1124, in_queue=18638, util=99.81% 00:27:55.732 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:55.732 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:55.732 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:55.732 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:27:55.732 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:55.732 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:55.732 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:55.732 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:55.732 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:27:55.732 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:55.732 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:55.732 nvmf hotplug test: fio successful as expected 00:27:55.732 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:55.732 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.732 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:55.732 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.732 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:55.732 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM 
EXIT 00:27:55.732 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:55.732 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:55.732 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:27:55.732 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:55.732 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:27:55.732 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:55.732 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:55.732 rmmod nvme_tcp 00:27:55.732 rmmod nvme_fabrics 00:27:55.732 rmmod nvme_keyring 00:27:55.732 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:55.732 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:27:55.732 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:27:55.732 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@515 -- # '[' -n 1667846 ']' 00:27:55.732 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # killprocess 1667846 00:27:55.732 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # '[' -z 1667846 ']' 00:27:55.732 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # kill -0 1667846 00:27:55.732 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # uname 00:27:55.732 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:55.732 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1667846 00:27:55.732 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:55.732 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:55.732 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1667846' 00:27:55.732 killing process with pid 1667846 00:27:55.732 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@969 -- # kill 1667846 00:27:55.732 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@974 -- # wait 1667846 00:27:55.732 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:55.732 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:55.732 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:55.732 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:27:55.732 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@789 -- # iptables-save 00:27:55.732 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:55.732 01:38:39 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@789 -- # iptables-restore 00:27:55.732 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:55.732 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:55.732 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:55.732 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:55.732 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:56.300 01:38:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:56.300 00:27:56.300 real 1m8.219s 00:27:56.300 user 4m10.986s 00:27:56.300 sys 0m6.583s 00:27:56.300 01:38:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:56.300 01:38:41 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:56.300 ************************************ 00:27:56.300 END TEST nvmf_initiator_timeout 00:27:56.300 ************************************ 00:27:56.300 01:38:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:27:56.300 01:38:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:27:56.300 01:38:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:27:56.300 01:38:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:27:56.300 01:38:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:58.202 01:38:43 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:58.202 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:58.202 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # 
[[ tcp == rdma ]] 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:58.202 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:58.202 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:58.202 ************************************ 00:27:58.202 START TEST nvmf_perf_adq 00:27:58.202 ************************************ 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:58.202 * Looking for test storage... 
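The discovery pass above maps each allow-listed PCI function to its kernel net device by globbing sysfs, which is how 0000:0a:00.0 and 0000:0a:00.1 become cvl_0_0 and cvl_0_1. Condensed from the logged commands (device names will differ per machine):

    for pci in 0000:0a:00.0 0000:0a:00.1; do
        for dev in /sys/bus/pci/devices/$pci/net/*; do   # one entry per netdev on that function
            echo "Found net devices under $pci: ${dev##*/}"
        done
    done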
00:27:58.202 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lcov --version 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:58.202 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:58.203 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:58.203 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:58.203 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:27:58.203 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:27:58.203 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:27:58.203 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:27:58.203 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:27:58.203 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:27:58.203 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:27:58.203 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:58.203 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:27:58.203 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:27:58.203 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:58.203 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:58.203 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:27:58.203 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:27:58.203 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:58.203 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:27:58.203 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:27:58.203 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:27:58.203 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:27:58.203 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:58.203 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:27:58.461 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:27:58.461 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:58.461 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:58.461 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:27:58.461 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:58.461 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:58.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:58.461 --rc genhtml_branch_coverage=1 00:27:58.462 --rc genhtml_function_coverage=1 00:27:58.462 --rc genhtml_legend=1 00:27:58.462 --rc geninfo_all_blocks=1 00:27:58.462 --rc geninfo_unexecuted_blocks=1 00:27:58.462 00:27:58.462 ' 00:27:58.462 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:58.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:58.462 --rc genhtml_branch_coverage=1 00:27:58.462 --rc genhtml_function_coverage=1 00:27:58.462 --rc genhtml_legend=1 00:27:58.462 --rc geninfo_all_blocks=1 00:27:58.462 --rc geninfo_unexecuted_blocks=1 00:27:58.462 00:27:58.462 ' 00:27:58.462 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:58.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:58.462 --rc genhtml_branch_coverage=1 00:27:58.462 --rc genhtml_function_coverage=1 00:27:58.462 --rc genhtml_legend=1 00:27:58.462 --rc geninfo_all_blocks=1 00:27:58.462 --rc geninfo_unexecuted_blocks=1 00:27:58.462 00:27:58.462 ' 00:27:58.462 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:58.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:58.462 --rc genhtml_branch_coverage=1 00:27:58.462 --rc genhtml_function_coverage=1 00:27:58.462 --rc genhtml_legend=1 00:27:58.462 --rc geninfo_all_blocks=1 00:27:58.462 --rc geninfo_unexecuted_blocks=1 00:27:58.462 00:27:58.462 ' 00:27:58.462 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:58.462 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 
00:27:58.462 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:58.462 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:58.462 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:58.462 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:58.462 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:58.462 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:58.462 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:58.462 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:58.462 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:58.462 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:58.462 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:58.462 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:58.462 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:58.462 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:58.462 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:58.462 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:58.462 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:58.462 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:27:58.462 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:58.462 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:58.462 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:58.462 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.462 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.462 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.462 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:27:58.462 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.462 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:27:58.462 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:58.462 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:58.462 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:58.462 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:58.462 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:58.462 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:58.462 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:58.462 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:58.462 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:58.462 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:58.462 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:27:58.462 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:58.462 01:38:43 
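common.sh, sourced at the top of perf_adq.sh, fixes the test defaults seen above (port 4420, a host NQN/ID generated with nvme gen-hostnqn, NET_TYPE=phy). For reference, an initiator-side connect using that identity against the subsystem this run creates later would look roughly like this (sketch, not part of the script):

    NVME_HOSTNQN=$(nvme gen-hostnqn)
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # the UUID suffix, matching the logged value
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"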
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:00.362 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:00.362 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:00.362 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:00.362 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:00.362 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:00.362 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:00.362 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:00.362 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:00.362 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:00.362 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:00.362 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:00.362 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:00.362 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:00.362 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:00.362 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:00.362 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:00.362 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:00.362 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:00.362 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:00.363 01:38:45 
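The arrays rebuilt here are plain PCI vendor:device allow-lists: Intel E810 parts (0x1592, 0x159b) and X722 (0x37d2) plus the Mellanox ConnectX IDs, with only the E810 entries surviving into pci_devs for this ADQ run. The same two functions can be confirmed by ID with a one-liner (not from the script):

    lspci -d 8086:159b    # should list 0a:00.0 and 0a:00.1 on this host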
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:00.363 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:00.363 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:00.363 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:00.363 01:38:45 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:00.363 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:00.363 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:00.929 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:03.483 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:08.757 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:28:08.757 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:08.757 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:08.757 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:08.757 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:08.757 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:08.757 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:08.757 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:08.757 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:08.757 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:08.757 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:28:08.757 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:08.757 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:08.757 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:08.757 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:08.757 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:08.757 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:08.757 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:08.757 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:08.757 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:08.757 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:08.757 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:08.757 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:08.757 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:08.758 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:08.758 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 
'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:08.758 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:08.758 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:08.758 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:08.758 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:28:08.758 00:28:08.758 --- 10.0.0.2 ping statistics --- 00:28:08.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:08.758 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:08.758 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:08.758 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:28:08.758 00:28:08.758 --- 10.0.0.1 ping statistics --- 00:28:08.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:08.758 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=1679981 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 1679981 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1679981 ']' 00:28:08.758 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:08.759 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:08.759 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:08.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:08.759 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:08.759 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:08.759 [2024-10-13 01:38:53.901437] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
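nvmf_tcp_init above splits the two E810 ports across a network namespace: cvl_0_0 moves into cvl_0_0_ns_spdk as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), TCP/4420 is opened with an SPDK_NVMF-tagged iptables rule, and both directions are ping-checked. Condensed from the commands recorded above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF      # the tag the teardown greps out later
    ping -c 1 10.0.0.2                                  # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns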
00:28:08.759 [2024-10-13 01:38:53.901540] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:08.759 [2024-10-13 01:38:53.971616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:08.759 [2024-10-13 01:38:54.023436] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:08.759 [2024-10-13 01:38:54.023513] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:08.759 [2024-10-13 01:38:54.023531] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:08.759 [2024-10-13 01:38:54.023544] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:08.759 [2024-10-13 01:38:54.023555] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:08.759 [2024-10-13 01:38:54.025200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:08.759 [2024-10-13 01:38:54.025229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:08.759 [2024-10-13 01:38:54.025284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:08.759 [2024-10-13 01:38:54.025288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:08.759 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:08.759 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:28:08.759 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:08.759 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:08.759 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:08.759 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:08.759 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:28:08.759 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:08.759 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:08.759 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.759 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:08.759 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.759 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:08.759 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:28:08.759 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.759 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:08.759 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.759 
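Because nvmf_tgt is launched with --wait-for-rpc inside the namespace, the socket options can be applied before the framework comes up; the RPC sequence captured here and on the following lines corresponds roughly to this rpc.py session (scripts/rpc.py in the SPDK tree, default socket /var/tmp/spdk.sock):

    rpc.py sock_impl_set_options -i posix --enable-placement-id 0 --enable-zerocopy-send-server
    rpc.py framework_start_init
    rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The spdk_nvme_perf initiator (queue depth 64, 4 KiB random reads, cores 0xF0) is then pointed at 10.0.0.2:4420 from the root namespace, as the next lines record.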
01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:08.759 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.759 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:08.759 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.759 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:28:08.759 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.759 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:08.759 [2024-10-13 01:38:54.327447] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:09.017 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.017 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:09.017 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.017 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:09.017 Malloc1 00:28:09.017 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.017 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:09.017 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.017 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:09.017 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.017 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:09.017 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.017 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:09.017 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.017 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:09.017 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.017 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:09.017 [2024-10-13 01:38:54.392057] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:09.017 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.017 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1680012 00:28:09.017 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:09.017 01:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:28:10.917 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:28:10.917 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.917 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:10.917 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.917 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:28:10.917 "tick_rate": 2700000000, 00:28:10.917 "poll_groups": [ 00:28:10.917 { 00:28:10.917 "name": "nvmf_tgt_poll_group_000", 00:28:10.917 "admin_qpairs": 1, 00:28:10.917 "io_qpairs": 1, 00:28:10.917 "current_admin_qpairs": 1, 00:28:10.917 "current_io_qpairs": 1, 00:28:10.917 "pending_bdev_io": 0, 00:28:10.917 "completed_nvme_io": 19621, 00:28:10.917 "transports": [ 00:28:10.917 { 00:28:10.917 "trtype": "TCP" 00:28:10.917 } 00:28:10.917 ] 00:28:10.917 }, 00:28:10.917 { 00:28:10.917 "name": "nvmf_tgt_poll_group_001", 00:28:10.917 "admin_qpairs": 0, 00:28:10.917 "io_qpairs": 1, 00:28:10.917 "current_admin_qpairs": 0, 00:28:10.917 "current_io_qpairs": 1, 00:28:10.917 "pending_bdev_io": 0, 00:28:10.917 "completed_nvme_io": 19471, 00:28:10.917 "transports": [ 00:28:10.917 { 00:28:10.917 "trtype": "TCP" 00:28:10.917 } 00:28:10.917 ] 00:28:10.917 }, 00:28:10.917 { 00:28:10.917 "name": "nvmf_tgt_poll_group_002", 00:28:10.917 "admin_qpairs": 0, 00:28:10.917 "io_qpairs": 1, 00:28:10.917 "current_admin_qpairs": 0, 00:28:10.917 "current_io_qpairs": 1, 00:28:10.917 "pending_bdev_io": 0, 00:28:10.917 "completed_nvme_io": 18407, 00:28:10.917 "transports": [ 00:28:10.917 { 00:28:10.917 "trtype": "TCP" 00:28:10.917 } 00:28:10.917 ] 00:28:10.917 }, 00:28:10.917 { 00:28:10.917 "name": "nvmf_tgt_poll_group_003", 00:28:10.917 "admin_qpairs": 0, 00:28:10.917 "io_qpairs": 1, 00:28:10.917 "current_admin_qpairs": 0, 00:28:10.917 "current_io_qpairs": 1, 00:28:10.917 "pending_bdev_io": 0, 00:28:10.917 "completed_nvme_io": 19804, 00:28:10.917 "transports": [ 00:28:10.917 { 00:28:10.917 "trtype": "TCP" 00:28:10.917 } 00:28:10.917 ] 00:28:10.917 } 00:28:10.917 ] 00:28:10.917 }' 00:28:10.917 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:28:10.917 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:28:10.917 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:28:10.917 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:28:10.917 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1680012 00:28:19.023 Initializing NVMe Controllers 00:28:19.023 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:19.023 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:19.024 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:19.024 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:19.024 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:19.024 
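The nvmf_get_stats dump above is the pass/fail check for this phase: with the target on -m 0xF and --sock-priority 0, the test expects the four initiator connections to land one per poll group while the 10-second randread run is in flight, which the count of 4 confirms. The check, reproduced:

    rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
        | wc -l        # must print 4, one evenly loaded poll group per core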
Initialization complete. Launching workers. 00:28:19.024 ======================================================== 00:28:19.024 Latency(us) 00:28:19.024 Device Information : IOPS MiB/s Average min max 00:28:19.024 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9731.40 38.01 6576.55 2535.75 11244.60 00:28:19.024 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10253.60 40.05 6243.21 1802.50 10615.54 00:28:19.024 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10257.20 40.07 6241.01 2037.80 10476.73 00:28:19.024 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10283.00 40.17 6224.40 2234.47 10916.55 00:28:19.024 ======================================================== 00:28:19.024 Total : 40525.20 158.30 6317.92 1802.50 11244.60 00:28:19.024 00:28:19.024 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:28:19.024 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:19.024 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:19.024 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:19.024 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:19.024 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:19.024 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:19.024 rmmod nvme_tcp 00:28:19.024 rmmod nvme_fabrics 00:28:19.024 rmmod nvme_keyring 00:28:19.282 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:19.282 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:19.282 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:19.282 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 1679981 ']' 00:28:19.282 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 1679981 00:28:19.282 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1679981 ']' 00:28:19.282 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1679981 00:28:19.282 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:28:19.282 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:19.282 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1679981 00:28:19.282 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:19.282 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:19.282 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1679981' 00:28:19.282 killing process with pid 1679981 00:28:19.282 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1679981 00:28:19.282 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1679981 00:28:19.540 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:19.540 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:19.540 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:19.540 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:19.540 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:28:19.540 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:19.540 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:28:19.540 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:19.540 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:19.540 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:19.540 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:19.540 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:21.441 01:39:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:21.441 01:39:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:28:21.441 01:39:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:21.441 01:39:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:22.376 01:39:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:24.903 01:39:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:30.169 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:28:30.169 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:30.169 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:30.170 01:39:15 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:30.170 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:30.170 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:30.170 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:30.170 01:39:15 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:30.170 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:30.170 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:30.170 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:28:30.170 00:28:30.170 --- 10.0.0.2 ping statistics --- 00:28:30.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:30.170 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:28:30.170 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:30.170 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:30.170 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:28:30.171 00:28:30.171 --- 10.0.0.1 ping statistics --- 00:28:30.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:30.171 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:28:30.171 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:30.171 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:28:30.171 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:30.171 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:30.171 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:30.171 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:30.171 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:30.171 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:30.171 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:30.171 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:28:30.171 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:28:30.171 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:28:30.171 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:28:30.171 net.core.busy_poll = 1 00:28:30.171 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 
00:28:30.171 net.core.busy_read = 1 00:28:30.171 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:28:30.171 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:28:30.171 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:28:30.171 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:28:30.171 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:28:30.171 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:30.171 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:30.171 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:30.171 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:30.171 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=1683380 00:28:30.171 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:30.171 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 1683380 00:28:30.171 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1683380 ']' 00:28:30.171 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:30.171 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:30.171 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:30.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:30.171 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:30.171 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:30.171 [2024-10-13 01:39:15.549384] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:28:30.171 [2024-10-13 01:39:15.549477] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:30.171 [2024-10-13 01:39:15.615119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:30.171 [2024-10-13 01:39:15.661244] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
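For reference, the adq_configure_driver calls traced above amount to the following host-side setup. This is a condensed sketch rather than an extra step of the run: the cvl_0_0 interface, the cvl_0_0_ns_spdk namespace, and the 10.0.0.2:4420 listener address are the ones this job uses, and every command is taken from the trace, only regrouped and annotated.
  # enable hardware TC offload on the E810 port and turn off packet-inspect optimization
  ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
  ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  # let sockets busy-poll instead of waiting for interrupts (set in the root namespace)
  sysctl -w net.core.busy_poll=1
  sysctl -w net.core.busy_read=1
  # split the queues into two traffic classes in channel mode: 2 queues at offset 0 for TC0, 2 at offset 2 for TC1
  ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
  # steer inbound NVMe/TCP traffic for 10.0.0.2:4420 into TC1 in hardware
  ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
The scripts/perf/nvmf/set_xps_rxqs helper is then run against cvl_0_0 to align transmit-queue selection (XPS) with the receive queues before the target is started inside the namespace with -m 0xF --wait-for-rpc.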
00:28:30.171 [2024-10-13 01:39:15.661297] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:30.171 [2024-10-13 01:39:15.661321] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:30.171 [2024-10-13 01:39:15.661331] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:30.171 [2024-10-13 01:39:15.661340] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:30.171 [2024-10-13 01:39:15.662748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:30.171 [2024-10-13 01:39:15.662782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:30.171 [2024-10-13 01:39:15.662840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:30.171 [2024-10-13 01:39:15.662843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:30.429 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:30.429 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:28:30.429 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:30.429 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:30.429 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:30.429 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:30.429 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:28:30.429 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:30.429 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:30.429 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.429 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:30.429 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.429 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:30.429 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:28:30.429 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.429 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:30.429 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.429 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:30.429 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.429 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:30.429 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.429 01:39:15 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:28:30.429 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.429 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:30.429 [2024-10-13 01:39:15.955384] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:30.429 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.429 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:30.429 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.429 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:30.429 Malloc1 00:28:30.429 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.429 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:30.429 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.429 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:30.429 01:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.429 01:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:30.429 01:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.429 01:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:30.688 01:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.688 01:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:30.688 01:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.688 01:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:30.688 [2024-10-13 01:39:16.017327] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:30.688 01:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.688 01:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1683411 00:28:30.688 01:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:28:30.688 01:39:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:32.585 01:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:28:32.585 01:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.585 01:39:18 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:32.585 01:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.585 01:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:28:32.585 "tick_rate": 2700000000, 00:28:32.585 "poll_groups": [ 00:28:32.585 { 00:28:32.585 "name": "nvmf_tgt_poll_group_000", 00:28:32.585 "admin_qpairs": 1, 00:28:32.585 "io_qpairs": 2, 00:28:32.585 "current_admin_qpairs": 1, 00:28:32.585 "current_io_qpairs": 2, 00:28:32.585 "pending_bdev_io": 0, 00:28:32.585 "completed_nvme_io": 25178, 00:28:32.585 "transports": [ 00:28:32.585 { 00:28:32.585 "trtype": "TCP" 00:28:32.585 } 00:28:32.585 ] 00:28:32.585 }, 00:28:32.585 { 00:28:32.585 "name": "nvmf_tgt_poll_group_001", 00:28:32.585 "admin_qpairs": 0, 00:28:32.585 "io_qpairs": 2, 00:28:32.585 "current_admin_qpairs": 0, 00:28:32.585 "current_io_qpairs": 2, 00:28:32.585 "pending_bdev_io": 0, 00:28:32.585 "completed_nvme_io": 25948, 00:28:32.585 "transports": [ 00:28:32.585 { 00:28:32.585 "trtype": "TCP" 00:28:32.585 } 00:28:32.585 ] 00:28:32.585 }, 00:28:32.585 { 00:28:32.585 "name": "nvmf_tgt_poll_group_002", 00:28:32.585 "admin_qpairs": 0, 00:28:32.585 "io_qpairs": 0, 00:28:32.585 "current_admin_qpairs": 0, 00:28:32.585 "current_io_qpairs": 0, 00:28:32.585 "pending_bdev_io": 0, 00:28:32.585 "completed_nvme_io": 0, 00:28:32.585 "transports": [ 00:28:32.585 { 00:28:32.585 "trtype": "TCP" 00:28:32.585 } 00:28:32.585 ] 00:28:32.585 }, 00:28:32.585 { 00:28:32.585 "name": "nvmf_tgt_poll_group_003", 00:28:32.585 "admin_qpairs": 0, 00:28:32.585 "io_qpairs": 0, 00:28:32.585 "current_admin_qpairs": 0, 00:28:32.585 "current_io_qpairs": 0, 00:28:32.585 "pending_bdev_io": 0, 00:28:32.585 "completed_nvme_io": 0, 00:28:32.585 "transports": [ 00:28:32.585 { 00:28:32.585 "trtype": "TCP" 00:28:32.585 } 00:28:32.585 ] 00:28:32.585 } 00:28:32.585 ] 00:28:32.585 }' 00:28:32.585 01:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:28:32.585 01:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:28:32.585 01:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:28:32.585 01:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:28:32.585 01:39:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1683411 00:28:40.690 Initializing NVMe Controllers 00:28:40.690 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:40.690 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:40.690 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:40.690 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:40.690 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:40.690 Initialization complete. Launching workers. 
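The target side of this second pass, traced above, is a short RPC sequence followed by a placement check once spdk_nvme_perf has connected. A rough equivalent driven through scripts/rpc.py is sketched below (the harness issues the same calls via its rpc_cmd wrapper; the failure message and exit are illustrative, and the expectation of two idle poll groups reflects this run's four reactors with two ADQ queues):
  rpc.py sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix
  rpc.py framework_start_init
  rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # with placement-id and --sock-priority set, the four perf connections should collapse onto the
  # poll groups that own the ADQ (TC1) queues, leaving at least two of the four poll groups idle
  idle=$(rpc.py nvmf_get_stats | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' | wc -l)
  [[ $idle -lt 2 ]] && echo 'ADQ placement check failed' && exit 1
That is what the stats dump above shows: poll groups 000 and 001 carry two I/O qpairs each while 002 and 003 stay at zero, so count=2 and the [[ 2 -lt 2 ]] guard does not fire.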
00:28:40.690 ======================================================== 00:28:40.690 Latency(us) 00:28:40.690 Device Information : IOPS MiB/s Average min max 00:28:40.690 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6707.50 26.20 9575.95 1832.71 54126.93 00:28:40.690 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6784.70 26.50 9434.53 1858.04 55937.91 00:28:40.690 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5754.40 22.48 11123.73 1725.42 55957.58 00:28:40.690 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7744.70 30.25 8274.45 1643.28 55100.44 00:28:40.690 ======================================================== 00:28:40.690 Total : 26991.29 105.43 9496.94 1643.28 55957.58 00:28:40.690 00:28:40.690 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:28:40.690 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:40.690 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:40.690 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:40.690 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:40.690 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:40.690 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:40.690 rmmod nvme_tcp 00:28:40.690 rmmod nvme_fabrics 00:28:40.690 rmmod nvme_keyring 00:28:40.690 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:40.690 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:40.690 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:40.690 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 1683380 ']' 00:28:40.690 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 1683380 00:28:40.690 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1683380 ']' 00:28:40.690 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1683380 00:28:40.690 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:28:40.690 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:40.690 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1683380 00:28:40.948 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:40.948 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:40.948 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1683380' 00:28:40.948 killing process with pid 1683380 00:28:40.948 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1683380 00:28:40.948 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1683380 00:28:40.948 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:40.948 
01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:40.948 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:40.948 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:40.948 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:28:40.948 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:40.948 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:28:40.948 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:40.948 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:40.948 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:40.948 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:40.948 01:39:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:44.325 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:44.325 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:28:44.325 00:28:44.325 real 0m45.943s 00:28:44.325 user 2m40.496s 00:28:44.325 sys 0m9.115s 00:28:44.325 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:44.325 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:44.325 ************************************ 00:28:44.325 END TEST nvmf_perf_adq 00:28:44.325 ************************************ 00:28:44.325 01:39:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:44.325 01:39:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:44.325 01:39:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:44.325 01:39:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:44.326 ************************************ 00:28:44.326 START TEST nvmf_shutdown 00:28:44.326 ************************************ 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:44.326 * Looking for test storage... 
00:28:44.326 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:44.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:44.326 --rc genhtml_branch_coverage=1 00:28:44.326 --rc genhtml_function_coverage=1 00:28:44.326 --rc genhtml_legend=1 00:28:44.326 --rc geninfo_all_blocks=1 00:28:44.326 --rc geninfo_unexecuted_blocks=1 00:28:44.326 00:28:44.326 ' 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:44.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:44.326 --rc genhtml_branch_coverage=1 00:28:44.326 --rc genhtml_function_coverage=1 00:28:44.326 --rc genhtml_legend=1 00:28:44.326 --rc geninfo_all_blocks=1 00:28:44.326 --rc geninfo_unexecuted_blocks=1 00:28:44.326 00:28:44.326 ' 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:44.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:44.326 --rc genhtml_branch_coverage=1 00:28:44.326 --rc genhtml_function_coverage=1 00:28:44.326 --rc genhtml_legend=1 00:28:44.326 --rc geninfo_all_blocks=1 00:28:44.326 --rc geninfo_unexecuted_blocks=1 00:28:44.326 00:28:44.326 ' 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:44.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:44.326 --rc genhtml_branch_coverage=1 00:28:44.326 --rc genhtml_function_coverage=1 00:28:44.326 --rc genhtml_legend=1 00:28:44.326 --rc geninfo_all_blocks=1 00:28:44.326 --rc geninfo_unexecuted_blocks=1 00:28:44.326 00:28:44.326 ' 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:44.326 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:44.326 01:39:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:44.326 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:44.327 ************************************ 00:28:44.327 START TEST nvmf_shutdown_tc1 00:28:44.327 ************************************ 00:28:44.327 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:28:44.327 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:28:44.327 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:44.327 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:44.327 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:44.327 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:44.327 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:44.327 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:44.327 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:44.327 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:44.327 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:44.327 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:44.327 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:44.327 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:44.327 01:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:46.857 01:39:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:46.857 01:39:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:46.857 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:46.857 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:46.857 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:46.857 01:39:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:46.857 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # is_hw=yes 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:46.857 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:46.858 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:46.858 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:46.858 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:28:46.858 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:46.858 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:46.858 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:46.858 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:46.858 01:39:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:46.858 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:46.858 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms 00:28:46.858 00:28:46.858 --- 10.0.0.2 ping statistics --- 00:28:46.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:46.858 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:46.858 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:46.858 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:28:46.858 00:28:46.858 --- 10.0.0.1 ping statistics --- 00:28:46.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:46.858 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # return 0 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # nvmfpid=1686710 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # waitforlisten 1686710 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1686710 ']' 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:46.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
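Note: the namespace plumbing traced above (nvmftestinit / nvmf_tcp_init in nvmf/common.sh) boils down to a handful of ip and iptables commands. A condensed sketch of what this run just did — the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses are specific to this machine, and the nvmf_tgt path is relative to the SPDK checkout:

    # Put the target-side port in its own namespace and address both ends
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP listener port toward the initiator interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # The target is then launched inside the namespace, as in the trace above
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E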
00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:46.858 [2024-10-13 01:39:32.140506] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:28:46.858 [2024-10-13 01:39:32.140599] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:46.858 [2024-10-13 01:39:32.205411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:46.858 [2024-10-13 01:39:32.252895] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:46.858 [2024-10-13 01:39:32.252947] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:46.858 [2024-10-13 01:39:32.252975] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:46.858 [2024-10-13 01:39:32.252986] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:46.858 [2024-10-13 01:39:32.252995] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:46.858 [2024-10-13 01:39:32.254590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:46.858 [2024-10-13 01:39:32.254654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:46.858 [2024-10-13 01:39:32.254702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:46.858 [2024-10-13 01:39:32.254706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:46.858 [2024-10-13 01:39:32.399029] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:46.858 01:39:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:46.858 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:47.116 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:47.116 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:47.116 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:47.116 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:47.116 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:47.116 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:47.116 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:47.116 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.116 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:47.116 Malloc1 
00:28:47.116 [2024-10-13 01:39:32.497174] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:47.116 Malloc2 00:28:47.116 Malloc3 00:28:47.116 Malloc4 00:28:47.116 Malloc5 00:28:47.374 Malloc6 00:28:47.374 Malloc7 00:28:47.374 Malloc8 00:28:47.374 Malloc9 00:28:47.374 Malloc10 00:28:47.374 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:47.374 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:47.374 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:47.374 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:47.632 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1686890 00:28:47.632 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1686890 /var/tmp/bdevperf.sock 00:28:47.632 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1686890 ']' 00:28:47.632 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:47.632 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:47.632 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:47.632 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:47.632 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:28:47.632 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:47.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
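Note: the bdev_svc/bdevperf process started here reads a JSON config produced by gen_nvmf_target_json (its expansion is printed a little further below): one bdev_nvme_attach_controller entry per subsystem, all pointing at 10.0.0.2:4420 over TCP. As a rough hand-run equivalent for a single subsystem — a sketch only; the socket path and the cnode1/host1 NQNs are taken from this run, and flag spellings may differ slightly between SPDK versions:

    # Attach subsystem 1 to the app listening on /var/tmp/bdevperf.sock
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme1 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1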
00:28:47.632 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:28:47.632 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:47.632 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:47.632 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:47.632 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:47.632 { 00:28:47.632 "params": { 00:28:47.632 "name": "Nvme$subsystem", 00:28:47.632 "trtype": "$TEST_TRANSPORT", 00:28:47.632 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.632 "adrfam": "ipv4", 00:28:47.632 "trsvcid": "$NVMF_PORT", 00:28:47.632 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.632 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.632 "hdgst": ${hdgst:-false}, 00:28:47.632 "ddgst": ${ddgst:-false} 00:28:47.632 }, 00:28:47.632 "method": "bdev_nvme_attach_controller" 00:28:47.632 } 00:28:47.632 EOF 00:28:47.632 )") 00:28:47.632 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:47.632 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:47.632 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:47.632 { 00:28:47.632 "params": { 00:28:47.632 "name": "Nvme$subsystem", 00:28:47.632 "trtype": "$TEST_TRANSPORT", 00:28:47.632 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.632 "adrfam": "ipv4", 00:28:47.632 "trsvcid": "$NVMF_PORT", 00:28:47.632 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.632 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.632 "hdgst": ${hdgst:-false}, 00:28:47.632 "ddgst": ${ddgst:-false} 00:28:47.632 }, 00:28:47.632 "method": "bdev_nvme_attach_controller" 00:28:47.632 } 00:28:47.632 EOF 00:28:47.632 )") 00:28:47.633 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:47.633 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:47.633 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:47.633 { 00:28:47.633 "params": { 00:28:47.633 "name": "Nvme$subsystem", 00:28:47.633 "trtype": "$TEST_TRANSPORT", 00:28:47.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.633 "adrfam": "ipv4", 00:28:47.633 "trsvcid": "$NVMF_PORT", 00:28:47.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.633 "hdgst": ${hdgst:-false}, 00:28:47.633 "ddgst": ${ddgst:-false} 00:28:47.633 }, 00:28:47.633 "method": "bdev_nvme_attach_controller" 00:28:47.633 } 00:28:47.633 EOF 00:28:47.633 )") 00:28:47.633 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:47.633 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:47.633 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:47.633 { 00:28:47.633 "params": { 00:28:47.633 "name": "Nvme$subsystem", 00:28:47.633 
"trtype": "$TEST_TRANSPORT", 00:28:47.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.633 "adrfam": "ipv4", 00:28:47.633 "trsvcid": "$NVMF_PORT", 00:28:47.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.633 "hdgst": ${hdgst:-false}, 00:28:47.633 "ddgst": ${ddgst:-false} 00:28:47.633 }, 00:28:47.633 "method": "bdev_nvme_attach_controller" 00:28:47.633 } 00:28:47.633 EOF 00:28:47.633 )") 00:28:47.633 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:47.633 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:47.633 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:47.633 { 00:28:47.633 "params": { 00:28:47.633 "name": "Nvme$subsystem", 00:28:47.633 "trtype": "$TEST_TRANSPORT", 00:28:47.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.633 "adrfam": "ipv4", 00:28:47.633 "trsvcid": "$NVMF_PORT", 00:28:47.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.633 "hdgst": ${hdgst:-false}, 00:28:47.633 "ddgst": ${ddgst:-false} 00:28:47.633 }, 00:28:47.633 "method": "bdev_nvme_attach_controller" 00:28:47.633 } 00:28:47.633 EOF 00:28:47.633 )") 00:28:47.633 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:47.633 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:47.633 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:47.633 { 00:28:47.633 "params": { 00:28:47.633 "name": "Nvme$subsystem", 00:28:47.633 "trtype": "$TEST_TRANSPORT", 00:28:47.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.633 "adrfam": "ipv4", 00:28:47.633 "trsvcid": "$NVMF_PORT", 00:28:47.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.633 "hdgst": ${hdgst:-false}, 00:28:47.633 "ddgst": ${ddgst:-false} 00:28:47.633 }, 00:28:47.633 "method": "bdev_nvme_attach_controller" 00:28:47.633 } 00:28:47.633 EOF 00:28:47.633 )") 00:28:47.633 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:47.633 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:47.633 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:47.633 { 00:28:47.633 "params": { 00:28:47.633 "name": "Nvme$subsystem", 00:28:47.633 "trtype": "$TEST_TRANSPORT", 00:28:47.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.633 "adrfam": "ipv4", 00:28:47.633 "trsvcid": "$NVMF_PORT", 00:28:47.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.633 "hdgst": ${hdgst:-false}, 00:28:47.633 "ddgst": ${ddgst:-false} 00:28:47.633 }, 00:28:47.633 "method": "bdev_nvme_attach_controller" 00:28:47.633 } 00:28:47.633 EOF 00:28:47.633 )") 00:28:47.633 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:47.633 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:47.633 01:39:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:47.633 { 00:28:47.633 "params": { 00:28:47.633 "name": "Nvme$subsystem", 00:28:47.633 "trtype": "$TEST_TRANSPORT", 00:28:47.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.633 "adrfam": "ipv4", 00:28:47.633 "trsvcid": "$NVMF_PORT", 00:28:47.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.633 "hdgst": ${hdgst:-false}, 00:28:47.633 "ddgst": ${ddgst:-false} 00:28:47.633 }, 00:28:47.633 "method": "bdev_nvme_attach_controller" 00:28:47.633 } 00:28:47.633 EOF 00:28:47.633 )") 00:28:47.633 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:47.633 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:47.633 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:47.633 { 00:28:47.633 "params": { 00:28:47.633 "name": "Nvme$subsystem", 00:28:47.633 "trtype": "$TEST_TRANSPORT", 00:28:47.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.633 "adrfam": "ipv4", 00:28:47.633 "trsvcid": "$NVMF_PORT", 00:28:47.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.633 "hdgst": ${hdgst:-false}, 00:28:47.633 "ddgst": ${ddgst:-false} 00:28:47.633 }, 00:28:47.633 "method": "bdev_nvme_attach_controller" 00:28:47.633 } 00:28:47.633 EOF 00:28:47.633 )") 00:28:47.633 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:47.633 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:47.633 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:47.633 { 00:28:47.633 "params": { 00:28:47.633 "name": "Nvme$subsystem", 00:28:47.633 "trtype": "$TEST_TRANSPORT", 00:28:47.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:47.633 "adrfam": "ipv4", 00:28:47.633 "trsvcid": "$NVMF_PORT", 00:28:47.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:47.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:47.633 "hdgst": ${hdgst:-false}, 00:28:47.633 "ddgst": ${ddgst:-false} 00:28:47.633 }, 00:28:47.633 "method": "bdev_nvme_attach_controller" 00:28:47.633 } 00:28:47.633 EOF 00:28:47.633 )") 00:28:47.633 01:39:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:47.633 01:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 
00:28:47.633 01:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:28:47.633 01:39:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:28:47.633 "params": { 00:28:47.633 "name": "Nvme1", 00:28:47.633 "trtype": "tcp", 00:28:47.633 "traddr": "10.0.0.2", 00:28:47.633 "adrfam": "ipv4", 00:28:47.633 "trsvcid": "4420", 00:28:47.633 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:47.633 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:47.633 "hdgst": false, 00:28:47.633 "ddgst": false 00:28:47.633 }, 00:28:47.633 "method": "bdev_nvme_attach_controller" 00:28:47.633 },{ 00:28:47.633 "params": { 00:28:47.633 "name": "Nvme2", 00:28:47.633 "trtype": "tcp", 00:28:47.633 "traddr": "10.0.0.2", 00:28:47.633 "adrfam": "ipv4", 00:28:47.633 "trsvcid": "4420", 00:28:47.633 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:47.633 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:47.633 "hdgst": false, 00:28:47.633 "ddgst": false 00:28:47.633 }, 00:28:47.633 "method": "bdev_nvme_attach_controller" 00:28:47.633 },{ 00:28:47.633 "params": { 00:28:47.633 "name": "Nvme3", 00:28:47.633 "trtype": "tcp", 00:28:47.633 "traddr": "10.0.0.2", 00:28:47.633 "adrfam": "ipv4", 00:28:47.633 "trsvcid": "4420", 00:28:47.633 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:47.633 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:47.633 "hdgst": false, 00:28:47.633 "ddgst": false 00:28:47.633 }, 00:28:47.633 "method": "bdev_nvme_attach_controller" 00:28:47.633 },{ 00:28:47.633 "params": { 00:28:47.633 "name": "Nvme4", 00:28:47.633 "trtype": "tcp", 00:28:47.633 "traddr": "10.0.0.2", 00:28:47.633 "adrfam": "ipv4", 00:28:47.633 "trsvcid": "4420", 00:28:47.633 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:47.633 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:47.633 "hdgst": false, 00:28:47.633 "ddgst": false 00:28:47.633 }, 00:28:47.633 "method": "bdev_nvme_attach_controller" 00:28:47.633 },{ 00:28:47.633 "params": { 00:28:47.633 "name": "Nvme5", 00:28:47.633 "trtype": "tcp", 00:28:47.633 "traddr": "10.0.0.2", 00:28:47.633 "adrfam": "ipv4", 00:28:47.633 "trsvcid": "4420", 00:28:47.633 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:47.633 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:47.633 "hdgst": false, 00:28:47.633 "ddgst": false 00:28:47.633 }, 00:28:47.633 "method": "bdev_nvme_attach_controller" 00:28:47.633 },{ 00:28:47.633 "params": { 00:28:47.633 "name": "Nvme6", 00:28:47.633 "trtype": "tcp", 00:28:47.633 "traddr": "10.0.0.2", 00:28:47.633 "adrfam": "ipv4", 00:28:47.633 "trsvcid": "4420", 00:28:47.633 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:47.633 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:47.633 "hdgst": false, 00:28:47.633 "ddgst": false 00:28:47.633 }, 00:28:47.634 "method": "bdev_nvme_attach_controller" 00:28:47.634 },{ 00:28:47.634 "params": { 00:28:47.634 "name": "Nvme7", 00:28:47.634 "trtype": "tcp", 00:28:47.634 "traddr": "10.0.0.2", 00:28:47.634 "adrfam": "ipv4", 00:28:47.634 "trsvcid": "4420", 00:28:47.634 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:47.634 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:47.634 "hdgst": false, 00:28:47.634 "ddgst": false 00:28:47.634 }, 00:28:47.634 "method": "bdev_nvme_attach_controller" 00:28:47.634 },{ 00:28:47.634 "params": { 00:28:47.634 "name": "Nvme8", 00:28:47.634 "trtype": "tcp", 00:28:47.634 "traddr": "10.0.0.2", 00:28:47.634 "adrfam": "ipv4", 00:28:47.634 "trsvcid": "4420", 00:28:47.634 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:47.634 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:28:47.634 "hdgst": false, 00:28:47.634 "ddgst": false 00:28:47.634 }, 00:28:47.634 "method": "bdev_nvme_attach_controller" 00:28:47.634 },{ 00:28:47.634 "params": { 00:28:47.634 "name": "Nvme9", 00:28:47.634 "trtype": "tcp", 00:28:47.634 "traddr": "10.0.0.2", 00:28:47.634 "adrfam": "ipv4", 00:28:47.634 "trsvcid": "4420", 00:28:47.634 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:47.634 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:47.634 "hdgst": false, 00:28:47.634 "ddgst": false 00:28:47.634 }, 00:28:47.634 "method": "bdev_nvme_attach_controller" 00:28:47.634 },{ 00:28:47.634 "params": { 00:28:47.634 "name": "Nvme10", 00:28:47.634 "trtype": "tcp", 00:28:47.634 "traddr": "10.0.0.2", 00:28:47.634 "adrfam": "ipv4", 00:28:47.634 "trsvcid": "4420", 00:28:47.634 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:47.634 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:47.634 "hdgst": false, 00:28:47.634 "ddgst": false 00:28:47.634 }, 00:28:47.634 "method": "bdev_nvme_attach_controller" 00:28:47.634 }' 00:28:47.634 [2024-10-13 01:39:33.014763] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:28:47.634 [2024-10-13 01:39:33.014893] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:47.634 [2024-10-13 01:39:33.078507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.634 [2024-10-13 01:39:33.128088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:49.528 01:39:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:49.528 01:39:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:28:49.528 01:39:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:49.528 01:39:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.528 01:39:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:49.528 01:39:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.528 01:39:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1686890 00:28:49.528 01:39:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:28:49.528 01:39:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:28:50.901 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1686890 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:28:50.901 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1686710 00:28:50.901 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:50.901 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 
1 2 3 4 5 6 7 8 9 10 00:28:50.901 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:28:50.901 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:28:50.901 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:50.901 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:50.901 { 00:28:50.901 "params": { 00:28:50.901 "name": "Nvme$subsystem", 00:28:50.901 "trtype": "$TEST_TRANSPORT", 00:28:50.901 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.901 "adrfam": "ipv4", 00:28:50.901 "trsvcid": "$NVMF_PORT", 00:28:50.901 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.901 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.901 "hdgst": ${hdgst:-false}, 00:28:50.901 "ddgst": ${ddgst:-false} 00:28:50.901 }, 00:28:50.901 "method": "bdev_nvme_attach_controller" 00:28:50.901 } 00:28:50.901 EOF 00:28:50.901 )") 00:28:50.901 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:50.901 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:50.901 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:50.901 { 00:28:50.901 "params": { 00:28:50.901 "name": "Nvme$subsystem", 00:28:50.901 "trtype": "$TEST_TRANSPORT", 00:28:50.901 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.901 "adrfam": "ipv4", 00:28:50.901 "trsvcid": "$NVMF_PORT", 00:28:50.901 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.901 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.901 "hdgst": ${hdgst:-false}, 00:28:50.901 "ddgst": ${ddgst:-false} 00:28:50.901 }, 00:28:50.901 "method": "bdev_nvme_attach_controller" 00:28:50.901 } 00:28:50.901 EOF 00:28:50.901 )") 00:28:50.901 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:50.901 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:50.901 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:50.901 { 00:28:50.901 "params": { 00:28:50.901 "name": "Nvme$subsystem", 00:28:50.901 "trtype": "$TEST_TRANSPORT", 00:28:50.901 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.901 "adrfam": "ipv4", 00:28:50.901 "trsvcid": "$NVMF_PORT", 00:28:50.901 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.901 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.901 "hdgst": ${hdgst:-false}, 00:28:50.901 "ddgst": ${ddgst:-false} 00:28:50.901 }, 00:28:50.901 "method": "bdev_nvme_attach_controller" 00:28:50.901 } 00:28:50.901 EOF 00:28:50.901 )") 00:28:50.901 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:50.901 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:50.901 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:50.901 { 00:28:50.901 "params": { 00:28:50.901 "name": "Nvme$subsystem", 00:28:50.901 "trtype": "$TEST_TRANSPORT", 00:28:50.901 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.901 "adrfam": "ipv4", 00:28:50.901 
"trsvcid": "$NVMF_PORT", 00:28:50.901 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.901 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.901 "hdgst": ${hdgst:-false}, 00:28:50.901 "ddgst": ${ddgst:-false} 00:28:50.901 }, 00:28:50.901 "method": "bdev_nvme_attach_controller" 00:28:50.901 } 00:28:50.901 EOF 00:28:50.901 )") 00:28:50.901 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:50.901 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:50.901 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:50.902 { 00:28:50.902 "params": { 00:28:50.902 "name": "Nvme$subsystem", 00:28:50.902 "trtype": "$TEST_TRANSPORT", 00:28:50.902 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.902 "adrfam": "ipv4", 00:28:50.902 "trsvcid": "$NVMF_PORT", 00:28:50.902 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.902 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.902 "hdgst": ${hdgst:-false}, 00:28:50.902 "ddgst": ${ddgst:-false} 00:28:50.902 }, 00:28:50.902 "method": "bdev_nvme_attach_controller" 00:28:50.902 } 00:28:50.902 EOF 00:28:50.902 )") 00:28:50.902 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:50.902 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:50.902 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:50.902 { 00:28:50.902 "params": { 00:28:50.902 "name": "Nvme$subsystem", 00:28:50.902 "trtype": "$TEST_TRANSPORT", 00:28:50.902 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.902 "adrfam": "ipv4", 00:28:50.902 "trsvcid": "$NVMF_PORT", 00:28:50.902 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.902 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.902 "hdgst": ${hdgst:-false}, 00:28:50.902 "ddgst": ${ddgst:-false} 00:28:50.902 }, 00:28:50.902 "method": "bdev_nvme_attach_controller" 00:28:50.902 } 00:28:50.902 EOF 00:28:50.902 )") 00:28:50.902 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:50.902 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:50.902 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:50.902 { 00:28:50.902 "params": { 00:28:50.902 "name": "Nvme$subsystem", 00:28:50.902 "trtype": "$TEST_TRANSPORT", 00:28:50.902 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.902 "adrfam": "ipv4", 00:28:50.902 "trsvcid": "$NVMF_PORT", 00:28:50.902 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.902 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.902 "hdgst": ${hdgst:-false}, 00:28:50.902 "ddgst": ${ddgst:-false} 00:28:50.902 }, 00:28:50.902 "method": "bdev_nvme_attach_controller" 00:28:50.902 } 00:28:50.902 EOF 00:28:50.902 )") 00:28:50.902 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:50.902 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:50.902 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:50.902 { 00:28:50.902 
"params": { 00:28:50.902 "name": "Nvme$subsystem", 00:28:50.902 "trtype": "$TEST_TRANSPORT", 00:28:50.902 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.902 "adrfam": "ipv4", 00:28:50.902 "trsvcid": "$NVMF_PORT", 00:28:50.902 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.902 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.902 "hdgst": ${hdgst:-false}, 00:28:50.902 "ddgst": ${ddgst:-false} 00:28:50.902 }, 00:28:50.902 "method": "bdev_nvme_attach_controller" 00:28:50.902 } 00:28:50.902 EOF 00:28:50.902 )") 00:28:50.902 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:50.902 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:50.902 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:50.902 { 00:28:50.902 "params": { 00:28:50.902 "name": "Nvme$subsystem", 00:28:50.902 "trtype": "$TEST_TRANSPORT", 00:28:50.902 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.902 "adrfam": "ipv4", 00:28:50.902 "trsvcid": "$NVMF_PORT", 00:28:50.902 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.902 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.902 "hdgst": ${hdgst:-false}, 00:28:50.902 "ddgst": ${ddgst:-false} 00:28:50.902 }, 00:28:50.902 "method": "bdev_nvme_attach_controller" 00:28:50.902 } 00:28:50.902 EOF 00:28:50.902 )") 00:28:50.902 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:50.902 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:50.902 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:50.902 { 00:28:50.902 "params": { 00:28:50.902 "name": "Nvme$subsystem", 00:28:50.902 "trtype": "$TEST_TRANSPORT", 00:28:50.902 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.902 "adrfam": "ipv4", 00:28:50.902 "trsvcid": "$NVMF_PORT", 00:28:50.902 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.902 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.902 "hdgst": ${hdgst:-false}, 00:28:50.902 "ddgst": ${ddgst:-false} 00:28:50.902 }, 00:28:50.902 "method": "bdev_nvme_attach_controller" 00:28:50.902 } 00:28:50.902 EOF 00:28:50.902 )") 00:28:50.902 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:28:50.902 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 
00:28:50.902 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:28:50.902 01:39:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:28:50.902 "params": { 00:28:50.902 "name": "Nvme1", 00:28:50.902 "trtype": "tcp", 00:28:50.902 "traddr": "10.0.0.2", 00:28:50.902 "adrfam": "ipv4", 00:28:50.902 "trsvcid": "4420", 00:28:50.902 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:50.902 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:50.902 "hdgst": false, 00:28:50.902 "ddgst": false 00:28:50.902 }, 00:28:50.902 "method": "bdev_nvme_attach_controller" 00:28:50.902 },{ 00:28:50.902 "params": { 00:28:50.902 "name": "Nvme2", 00:28:50.902 "trtype": "tcp", 00:28:50.902 "traddr": "10.0.0.2", 00:28:50.902 "adrfam": "ipv4", 00:28:50.902 "trsvcid": "4420", 00:28:50.902 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:50.902 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:50.902 "hdgst": false, 00:28:50.902 "ddgst": false 00:28:50.902 }, 00:28:50.902 "method": "bdev_nvme_attach_controller" 00:28:50.902 },{ 00:28:50.902 "params": { 00:28:50.902 "name": "Nvme3", 00:28:50.902 "trtype": "tcp", 00:28:50.902 "traddr": "10.0.0.2", 00:28:50.902 "adrfam": "ipv4", 00:28:50.902 "trsvcid": "4420", 00:28:50.902 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:50.902 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:50.902 "hdgst": false, 00:28:50.902 "ddgst": false 00:28:50.902 }, 00:28:50.902 "method": "bdev_nvme_attach_controller" 00:28:50.902 },{ 00:28:50.902 "params": { 00:28:50.902 "name": "Nvme4", 00:28:50.902 "trtype": "tcp", 00:28:50.902 "traddr": "10.0.0.2", 00:28:50.902 "adrfam": "ipv4", 00:28:50.902 "trsvcid": "4420", 00:28:50.902 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:50.902 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:50.902 "hdgst": false, 00:28:50.902 "ddgst": false 00:28:50.902 }, 00:28:50.902 "method": "bdev_nvme_attach_controller" 00:28:50.902 },{ 00:28:50.902 "params": { 00:28:50.902 "name": "Nvme5", 00:28:50.902 "trtype": "tcp", 00:28:50.902 "traddr": "10.0.0.2", 00:28:50.902 "adrfam": "ipv4", 00:28:50.902 "trsvcid": "4420", 00:28:50.902 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:50.902 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:50.902 "hdgst": false, 00:28:50.902 "ddgst": false 00:28:50.902 }, 00:28:50.902 "method": "bdev_nvme_attach_controller" 00:28:50.902 },{ 00:28:50.902 "params": { 00:28:50.902 "name": "Nvme6", 00:28:50.902 "trtype": "tcp", 00:28:50.902 "traddr": "10.0.0.2", 00:28:50.902 "adrfam": "ipv4", 00:28:50.902 "trsvcid": "4420", 00:28:50.902 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:50.902 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:50.902 "hdgst": false, 00:28:50.903 "ddgst": false 00:28:50.903 }, 00:28:50.903 "method": "bdev_nvme_attach_controller" 00:28:50.903 },{ 00:28:50.903 "params": { 00:28:50.903 "name": "Nvme7", 00:28:50.903 "trtype": "tcp", 00:28:50.903 "traddr": "10.0.0.2", 00:28:50.903 "adrfam": "ipv4", 00:28:50.903 "trsvcid": "4420", 00:28:50.903 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:50.903 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:50.903 "hdgst": false, 00:28:50.903 "ddgst": false 00:28:50.903 }, 00:28:50.903 "method": "bdev_nvme_attach_controller" 00:28:50.903 },{ 00:28:50.903 "params": { 00:28:50.903 "name": "Nvme8", 00:28:50.903 "trtype": "tcp", 00:28:50.903 "traddr": "10.0.0.2", 00:28:50.903 "adrfam": "ipv4", 00:28:50.903 "trsvcid": "4420", 00:28:50.903 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:50.903 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:28:50.903 "hdgst": false, 00:28:50.903 "ddgst": false 00:28:50.903 }, 00:28:50.903 "method": "bdev_nvme_attach_controller" 00:28:50.903 },{ 00:28:50.903 "params": { 00:28:50.903 "name": "Nvme9", 00:28:50.903 "trtype": "tcp", 00:28:50.903 "traddr": "10.0.0.2", 00:28:50.903 "adrfam": "ipv4", 00:28:50.903 "trsvcid": "4420", 00:28:50.903 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:50.903 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:50.903 "hdgst": false, 00:28:50.903 "ddgst": false 00:28:50.903 }, 00:28:50.903 "method": "bdev_nvme_attach_controller" 00:28:50.903 },{ 00:28:50.903 "params": { 00:28:50.903 "name": "Nvme10", 00:28:50.903 "trtype": "tcp", 00:28:50.903 "traddr": "10.0.0.2", 00:28:50.903 "adrfam": "ipv4", 00:28:50.903 "trsvcid": "4420", 00:28:50.903 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:50.903 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:50.903 "hdgst": false, 00:28:50.903 "ddgst": false 00:28:50.903 }, 00:28:50.903 "method": "bdev_nvme_attach_controller" 00:28:50.903 }' 00:28:50.903 [2024-10-13 01:39:36.090044] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:28:50.903 [2024-10-13 01:39:36.090119] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1687305 ] 00:28:50.903 [2024-10-13 01:39:36.152892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:50.903 [2024-10-13 01:39:36.199937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:52.275 Running I/O for 1 seconds... 00:28:53.207 1728.00 IOPS, 108.00 MiB/s 00:28:53.207 Latency(us) 00:28:53.207 [2024-10-12T23:39:38.785Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:53.207 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:53.207 Verification LBA range: start 0x0 length 0x400 00:28:53.207 Nvme1n1 : 1.11 230.96 14.44 0.00 0.00 274112.28 17961.72 273406.48 00:28:53.207 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:53.207 Verification LBA range: start 0x0 length 0x400 00:28:53.207 Nvme2n1 : 1.02 187.94 11.75 0.00 0.00 330809.39 21554.06 274959.93 00:28:53.207 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:53.207 Verification LBA range: start 0x0 length 0x400 00:28:53.207 Nvme3n1 : 1.11 230.24 14.39 0.00 0.00 266016.62 20097.71 250104.79 00:28:53.207 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:53.207 Verification LBA range: start 0x0 length 0x400 00:28:53.207 Nvme4n1 : 1.12 228.75 14.30 0.00 0.00 262999.42 19903.53 234570.33 00:28:53.207 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:53.207 Verification LBA range: start 0x0 length 0x400 00:28:53.207 Nvme5n1 : 1.14 225.24 14.08 0.00 0.00 262987.09 21262.79 259425.47 00:28:53.207 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:53.207 Verification LBA range: start 0x0 length 0x400 00:28:53.207 Nvme6n1 : 1.13 226.77 14.17 0.00 0.00 256649.86 21942.42 265639.25 00:28:53.207 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:53.207 Verification LBA range: start 0x0 length 0x400 00:28:53.207 Nvme7n1 : 1.12 227.98 14.25 0.00 0.00 250607.31 34564.17 245444.46 00:28:53.207 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:53.207 
Verification LBA range: start 0x0 length 0x400 00:28:53.207 Nvme8n1 : 1.17 274.32 17.15 0.00 0.00 204845.66 19223.89 251658.24 00:28:53.207 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:53.207 Verification LBA range: start 0x0 length 0x400 00:28:53.207 Nvme9n1 : 1.17 222.44 13.90 0.00 0.00 248873.32 801.00 290494.39 00:28:53.207 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:53.207 Verification LBA range: start 0x0 length 0x400 00:28:53.207 Nvme10n1 : 1.18 270.32 16.90 0.00 0.00 202015.74 5995.33 268746.15 00:28:53.207 [2024-10-12T23:39:38.785Z] =================================================================================================================== 00:28:53.207 [2024-10-12T23:39:38.785Z] Total : 2324.97 145.31 0.00 0.00 251597.71 801.00 290494.39 00:28:53.465 01:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:28:53.465 01:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:53.465 01:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:53.465 01:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:53.465 01:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:53.465 01:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:53.465 01:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:28:53.465 01:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:53.465 01:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:28:53.465 01:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:53.465 01:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:53.465 rmmod nvme_tcp 00:28:53.465 rmmod nvme_fabrics 00:28:53.465 rmmod nvme_keyring 00:28:53.465 01:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:53.465 01:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:28:53.465 01:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:28:53.465 01:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@515 -- # '[' -n 1686710 ']' 00:28:53.465 01:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # killprocess 1686710 00:28:53.465 01:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 1686710 ']' 00:28:53.465 01:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 1686710 00:28:53.465 01:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:28:53.465 01:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:53.465 01:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1686710 00:28:53.465 01:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:53.465 01:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:53.465 01:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1686710' 00:28:53.465 killing process with pid 1686710 00:28:53.465 01:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 1686710 00:28:53.465 01:39:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 1686710 00:28:54.031 01:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:54.031 01:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:54.031 01:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:54.031 01:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:28:54.031 01:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-save 00:28:54.031 01:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:54.031 01:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-restore 00:28:54.031 01:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:54.031 01:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:54.031 01:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:54.031 01:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:54.031 01:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:55.933 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:55.933 00:28:55.933 real 0m11.676s 00:28:55.933 user 0m33.432s 00:28:55.933 sys 0m3.208s 00:28:55.934 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:55.934 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:55.934 ************************************ 00:28:55.934 END TEST nvmf_shutdown_tc1 00:28:55.934 ************************************ 00:28:56.192 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:56.192 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:56.192 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:28:56.192 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:56.192 ************************************ 00:28:56.192 START TEST nvmf_shutdown_tc2 00:28:56.192 ************************************ 00:28:56.192 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:28:56.192 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:28:56.192 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:56.192 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:56.192 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:56.192 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:56.192 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:56.192 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:56.192 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:56.192 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:56.192 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:56.192 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:56.192 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:56.192 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:56.192 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:28:56.193 01:39:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:56.193 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:56.193 01:39:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:56.193 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:56.193 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:56.193 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # is_hw=yes 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:56.193 01:39:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:56.193 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:56.193 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:28:56.193 00:28:56.193 --- 10.0.0.2 ping statistics --- 00:28:56.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:56.193 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:28:56.193 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:56.193 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:56.194 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:28:56.194 00:28:56.194 --- 10.0.0.1 ping statistics --- 00:28:56.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:56.194 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:28:56.194 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:56.194 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # return 0 00:28:56.194 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:56.194 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:56.194 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:56.194 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:56.194 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:56.194 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:56.194 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:56.194 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:56.194 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:56.194 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:56.194 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:56.194 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # nvmfpid=1688049 00:28:56.194 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:56.194 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # waitforlisten 1688049 00:28:56.194 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1688049 ']' 00:28:56.194 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:56.194 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:56.194 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:56.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
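The nvmf_tcp_init trace above is easier to follow when collapsed into plain commands: the target port is moved into its own network namespace, the two ports get back-to-back addresses, and connectivity is checked with the pings whose output appears here. A sketch using the interface names and addresses from this run (cvl_0_0/cvl_0_1, 10.0.0.1/10.0.0.2); other rigs will use different names.

    # Sketch of the target-side networking performed by nvmf_tcp_init above.
    # Assumes the two ice ports already show up as cvl_0_0 and cvl_0_1.
    ip netns add cvl_0_0_ns_spdk                        # target lives in its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator-side address
    ip netns exec cvl_0_0_ns_spdk \
        ip addr add 10.0.0.2/24 dev cvl_0_0             # target-side address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator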
00:28:56.194 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:56.194 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:56.451 [2024-10-13 01:39:41.810989] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:28:56.451 [2024-10-13 01:39:41.811073] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:56.452 [2024-10-13 01:39:41.881908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:56.452 [2024-10-13 01:39:41.932917] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:56.452 [2024-10-13 01:39:41.932975] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:56.452 [2024-10-13 01:39:41.932999] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:56.452 [2024-10-13 01:39:41.933013] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:56.452 [2024-10-13 01:39:41.933024] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:56.452 [2024-10-13 01:39:41.934663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:56.452 [2024-10-13 01:39:41.934689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:56.452 [2024-10-13 01:39:41.934761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:56.452 [2024-10-13 01:39:41.934765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:56.709 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:56.709 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:28:56.709 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:56.709 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:56.709 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:56.709 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:56.709 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:56.709 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.710 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:56.710 [2024-10-13 01:39:42.097418] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:56.710 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:56.710 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:56.710 01:39:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:56.710 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:56.710 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:56.710 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:56.710 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:56.710 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:56.710 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:56.710 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:56.710 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:56.710 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:56.710 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:56.710 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:56.710 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:56.710 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:56.710 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:56.710 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:56.710 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:56.710 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:56.710 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:56.710 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:56.710 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:56.710 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:56.710 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:56.710 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:56.710 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:56.710 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:56.710 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:56.710 Malloc1 
00:28:56.710 [2024-10-13 01:39:42.187575] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:56.710 Malloc2 00:28:56.710 Malloc3 00:28:56.968 Malloc4 00:28:56.968 Malloc5 00:28:56.968 Malloc6 00:28:56.968 Malloc7 00:28:56.968 Malloc8 00:28:57.226 Malloc9 00:28:57.226 Malloc10 00:28:57.226 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.226 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:57.226 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:57.226 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:57.226 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1688132 00:28:57.226 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1688132 /var/tmp/bdevperf.sock 00:28:57.226 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1688132 ']' 00:28:57.226 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:57.226 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:57.226 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:57.226 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:57.226 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # config=() 00:28:57.226 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:57.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
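The create_subsystems loop traced above (the repeated `for i in "${num_subsystems[@]}"` / `cat` lines) only shows ten entries being written to rpcs.txt; the Malloc1..Malloc10 lines and the listener notice are the result of replaying that file against the running nvmf_tgt. One iteration presumably expands to something like the sketch below; the RPC names are the standard scripts/rpc.py ones, but the malloc size, block size and serial number are illustrative and not taken from this log.

    # Hypothetical expansion of one loop iteration appended to rpcs.txt (i=1).
    i=1
    cat >> rpcs.txt <<EOF
    bdev_malloc_create -b Malloc$i 64 512
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    EOF
    # The accumulated file is then sent to the target RPC socket in one pass, which is
    # what produces the Malloc1..Malloc10 output and the "NVMe/TCP Target Listening on
    # 10.0.0.2 port 4420" notice seen above.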
00:28:57.226 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # local subsystem config 00:28:57.226 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:57.226 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:57.226 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:57.226 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:57.226 { 00:28:57.226 "params": { 00:28:57.226 "name": "Nvme$subsystem", 00:28:57.226 "trtype": "$TEST_TRANSPORT", 00:28:57.226 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:57.226 "adrfam": "ipv4", 00:28:57.226 "trsvcid": "$NVMF_PORT", 00:28:57.226 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:57.226 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:57.226 "hdgst": ${hdgst:-false}, 00:28:57.226 "ddgst": ${ddgst:-false} 00:28:57.226 }, 00:28:57.226 "method": "bdev_nvme_attach_controller" 00:28:57.226 } 00:28:57.226 EOF 00:28:57.226 )") 00:28:57.226 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:28:57.226 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:57.226 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:57.227 { 00:28:57.227 "params": { 00:28:57.227 "name": "Nvme$subsystem", 00:28:57.227 "trtype": "$TEST_TRANSPORT", 00:28:57.227 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:57.227 "adrfam": "ipv4", 00:28:57.227 "trsvcid": "$NVMF_PORT", 00:28:57.227 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:57.227 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:57.227 "hdgst": ${hdgst:-false}, 00:28:57.227 "ddgst": ${ddgst:-false} 00:28:57.227 }, 00:28:57.227 "method": "bdev_nvme_attach_controller" 00:28:57.227 } 00:28:57.227 EOF 00:28:57.227 )") 00:28:57.227 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:28:57.227 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:57.227 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:57.227 { 00:28:57.227 "params": { 00:28:57.227 "name": "Nvme$subsystem", 00:28:57.227 "trtype": "$TEST_TRANSPORT", 00:28:57.227 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:57.227 "adrfam": "ipv4", 00:28:57.227 "trsvcid": "$NVMF_PORT", 00:28:57.227 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:57.227 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:57.227 "hdgst": ${hdgst:-false}, 00:28:57.227 "ddgst": ${ddgst:-false} 00:28:57.227 }, 00:28:57.227 "method": "bdev_nvme_attach_controller" 00:28:57.227 } 00:28:57.227 EOF 00:28:57.227 )") 00:28:57.227 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:28:57.227 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:57.227 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:57.227 { 00:28:57.227 "params": { 00:28:57.227 "name": "Nvme$subsystem", 00:28:57.227 
"trtype": "$TEST_TRANSPORT", 00:28:57.227 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:57.227 "adrfam": "ipv4", 00:28:57.227 "trsvcid": "$NVMF_PORT", 00:28:57.227 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:57.227 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:57.227 "hdgst": ${hdgst:-false}, 00:28:57.227 "ddgst": ${ddgst:-false} 00:28:57.227 }, 00:28:57.227 "method": "bdev_nvme_attach_controller" 00:28:57.227 } 00:28:57.227 EOF 00:28:57.227 )") 00:28:57.227 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:28:57.227 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:57.227 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:57.227 { 00:28:57.227 "params": { 00:28:57.227 "name": "Nvme$subsystem", 00:28:57.227 "trtype": "$TEST_TRANSPORT", 00:28:57.227 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:57.227 "adrfam": "ipv4", 00:28:57.227 "trsvcid": "$NVMF_PORT", 00:28:57.227 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:57.227 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:57.227 "hdgst": ${hdgst:-false}, 00:28:57.227 "ddgst": ${ddgst:-false} 00:28:57.227 }, 00:28:57.227 "method": "bdev_nvme_attach_controller" 00:28:57.227 } 00:28:57.227 EOF 00:28:57.227 )") 00:28:57.227 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:28:57.227 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:57.227 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:57.227 { 00:28:57.227 "params": { 00:28:57.227 "name": "Nvme$subsystem", 00:28:57.227 "trtype": "$TEST_TRANSPORT", 00:28:57.227 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:57.227 "adrfam": "ipv4", 00:28:57.227 "trsvcid": "$NVMF_PORT", 00:28:57.227 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:57.227 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:57.227 "hdgst": ${hdgst:-false}, 00:28:57.227 "ddgst": ${ddgst:-false} 00:28:57.227 }, 00:28:57.227 "method": "bdev_nvme_attach_controller" 00:28:57.227 } 00:28:57.227 EOF 00:28:57.227 )") 00:28:57.227 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:28:57.227 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:57.227 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:57.227 { 00:28:57.227 "params": { 00:28:57.227 "name": "Nvme$subsystem", 00:28:57.227 "trtype": "$TEST_TRANSPORT", 00:28:57.227 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:57.227 "adrfam": "ipv4", 00:28:57.227 "trsvcid": "$NVMF_PORT", 00:28:57.227 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:57.227 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:57.227 "hdgst": ${hdgst:-false}, 00:28:57.227 "ddgst": ${ddgst:-false} 00:28:57.227 }, 00:28:57.227 "method": "bdev_nvme_attach_controller" 00:28:57.227 } 00:28:57.227 EOF 00:28:57.227 )") 00:28:57.227 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:28:57.227 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:57.227 01:39:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:57.227 { 00:28:57.227 "params": { 00:28:57.227 "name": "Nvme$subsystem", 00:28:57.227 "trtype": "$TEST_TRANSPORT", 00:28:57.227 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:57.227 "adrfam": "ipv4", 00:28:57.227 "trsvcid": "$NVMF_PORT", 00:28:57.227 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:57.227 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:57.227 "hdgst": ${hdgst:-false}, 00:28:57.227 "ddgst": ${ddgst:-false} 00:28:57.227 }, 00:28:57.227 "method": "bdev_nvme_attach_controller" 00:28:57.227 } 00:28:57.227 EOF 00:28:57.227 )") 00:28:57.227 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:28:57.227 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:57.227 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:57.227 { 00:28:57.227 "params": { 00:28:57.227 "name": "Nvme$subsystem", 00:28:57.227 "trtype": "$TEST_TRANSPORT", 00:28:57.227 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:57.227 "adrfam": "ipv4", 00:28:57.227 "trsvcid": "$NVMF_PORT", 00:28:57.227 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:57.227 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:57.227 "hdgst": ${hdgst:-false}, 00:28:57.227 "ddgst": ${ddgst:-false} 00:28:57.227 }, 00:28:57.227 "method": "bdev_nvme_attach_controller" 00:28:57.227 } 00:28:57.227 EOF 00:28:57.227 )") 00:28:57.227 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:28:57.227 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:57.227 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:57.227 { 00:28:57.227 "params": { 00:28:57.227 "name": "Nvme$subsystem", 00:28:57.227 "trtype": "$TEST_TRANSPORT", 00:28:57.227 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:57.227 "adrfam": "ipv4", 00:28:57.227 "trsvcid": "$NVMF_PORT", 00:28:57.227 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:57.227 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:57.227 "hdgst": ${hdgst:-false}, 00:28:57.227 "ddgst": ${ddgst:-false} 00:28:57.227 }, 00:28:57.227 "method": "bdev_nvme_attach_controller" 00:28:57.227 } 00:28:57.227 EOF 00:28:57.227 )") 00:28:57.227 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:28:57.227 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # jq . 
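Each pass through the heredoc above contributes one bdev_nvme_attach_controller fragment to config[], and the jq call that follows stitches them into the config bdevperf reads via --json /dev/fd/63. The outer wrapper is not visible in this trace; the sketch below assumes the usual SPDK JSON-config shape, shows only Nvme1, and copies the inner params from this run.

    # Sketch: wrapper shape assumed, inner params taken from the trace above.
    cat <<'EOF' > /tmp/bdevperf_nvme.json
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }
    EOF
    # bdevperf is then launched against such a config with the same options as this run:
    # build/examples/bdevperf --json /tmp/bdevperf_nvme.json -q 64 -o 65536 -w verify -t 10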
00:28:57.227 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@583 -- # IFS=, 00:28:57.227 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:28:57.227 "params": { 00:28:57.227 "name": "Nvme1", 00:28:57.227 "trtype": "tcp", 00:28:57.227 "traddr": "10.0.0.2", 00:28:57.227 "adrfam": "ipv4", 00:28:57.227 "trsvcid": "4420", 00:28:57.227 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:57.227 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:57.227 "hdgst": false, 00:28:57.227 "ddgst": false 00:28:57.227 }, 00:28:57.227 "method": "bdev_nvme_attach_controller" 00:28:57.227 },{ 00:28:57.227 "params": { 00:28:57.227 "name": "Nvme2", 00:28:57.227 "trtype": "tcp", 00:28:57.227 "traddr": "10.0.0.2", 00:28:57.227 "adrfam": "ipv4", 00:28:57.227 "trsvcid": "4420", 00:28:57.227 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:57.227 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:57.227 "hdgst": false, 00:28:57.227 "ddgst": false 00:28:57.227 }, 00:28:57.227 "method": "bdev_nvme_attach_controller" 00:28:57.227 },{ 00:28:57.227 "params": { 00:28:57.227 "name": "Nvme3", 00:28:57.227 "trtype": "tcp", 00:28:57.227 "traddr": "10.0.0.2", 00:28:57.227 "adrfam": "ipv4", 00:28:57.227 "trsvcid": "4420", 00:28:57.227 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:57.227 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:57.227 "hdgst": false, 00:28:57.227 "ddgst": false 00:28:57.227 }, 00:28:57.227 "method": "bdev_nvme_attach_controller" 00:28:57.227 },{ 00:28:57.227 "params": { 00:28:57.227 "name": "Nvme4", 00:28:57.227 "trtype": "tcp", 00:28:57.227 "traddr": "10.0.0.2", 00:28:57.227 "adrfam": "ipv4", 00:28:57.228 "trsvcid": "4420", 00:28:57.228 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:57.228 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:57.228 "hdgst": false, 00:28:57.228 "ddgst": false 00:28:57.228 }, 00:28:57.228 "method": "bdev_nvme_attach_controller" 00:28:57.228 },{ 00:28:57.228 "params": { 00:28:57.228 "name": "Nvme5", 00:28:57.228 "trtype": "tcp", 00:28:57.228 "traddr": "10.0.0.2", 00:28:57.228 "adrfam": "ipv4", 00:28:57.228 "trsvcid": "4420", 00:28:57.228 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:57.228 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:57.228 "hdgst": false, 00:28:57.228 "ddgst": false 00:28:57.228 }, 00:28:57.228 "method": "bdev_nvme_attach_controller" 00:28:57.228 },{ 00:28:57.228 "params": { 00:28:57.228 "name": "Nvme6", 00:28:57.228 "trtype": "tcp", 00:28:57.228 "traddr": "10.0.0.2", 00:28:57.228 "adrfam": "ipv4", 00:28:57.228 "trsvcid": "4420", 00:28:57.228 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:57.228 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:57.228 "hdgst": false, 00:28:57.228 "ddgst": false 00:28:57.228 }, 00:28:57.228 "method": "bdev_nvme_attach_controller" 00:28:57.228 },{ 00:28:57.228 "params": { 00:28:57.228 "name": "Nvme7", 00:28:57.228 "trtype": "tcp", 00:28:57.228 "traddr": "10.0.0.2", 00:28:57.228 "adrfam": "ipv4", 00:28:57.228 "trsvcid": "4420", 00:28:57.228 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:57.228 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:57.228 "hdgst": false, 00:28:57.228 "ddgst": false 00:28:57.228 }, 00:28:57.228 "method": "bdev_nvme_attach_controller" 00:28:57.228 },{ 00:28:57.228 "params": { 00:28:57.228 "name": "Nvme8", 00:28:57.228 "trtype": "tcp", 00:28:57.228 "traddr": "10.0.0.2", 00:28:57.228 "adrfam": "ipv4", 00:28:57.228 "trsvcid": "4420", 00:28:57.228 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:57.228 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:28:57.228 "hdgst": false, 00:28:57.228 "ddgst": false 00:28:57.228 }, 00:28:57.228 "method": "bdev_nvme_attach_controller" 00:28:57.228 },{ 00:28:57.228 "params": { 00:28:57.228 "name": "Nvme9", 00:28:57.228 "trtype": "tcp", 00:28:57.228 "traddr": "10.0.0.2", 00:28:57.228 "adrfam": "ipv4", 00:28:57.228 "trsvcid": "4420", 00:28:57.228 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:57.228 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:57.228 "hdgst": false, 00:28:57.228 "ddgst": false 00:28:57.228 }, 00:28:57.228 "method": "bdev_nvme_attach_controller" 00:28:57.228 },{ 00:28:57.228 "params": { 00:28:57.228 "name": "Nvme10", 00:28:57.228 "trtype": "tcp", 00:28:57.228 "traddr": "10.0.0.2", 00:28:57.228 "adrfam": "ipv4", 00:28:57.228 "trsvcid": "4420", 00:28:57.228 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:57.228 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:57.228 "hdgst": false, 00:28:57.228 "ddgst": false 00:28:57.228 }, 00:28:57.228 "method": "bdev_nvme_attach_controller" 00:28:57.228 }' 00:28:57.228 [2024-10-13 01:39:42.690101] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:28:57.228 [2024-10-13 01:39:42.690192] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1688132 ] 00:28:57.228 [2024-10-13 01:39:42.753382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:57.228 [2024-10-13 01:39:42.800860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:59.126 Running I/O for 10 seconds... 00:28:59.384 01:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:59.384 01:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:28:59.384 01:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:59.384 01:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.384 01:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:59.384 01:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.384 01:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:59.384 01:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:59.384 01:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:59.384 01:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:28:59.384 01:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:28:59.384 01:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:59.384 01:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:59.384 01:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:59.384 01:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:59.384 01:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.384 01:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:59.384 01:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.384 01:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=12 00:28:59.384 01:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 12 -ge 100 ']' 00:28:59.384 01:39:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:59.642 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:59.642 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:59.642 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:59.642 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:59.642 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.642 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:59.642 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.642 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:28:59.642 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:28:59.642 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:28:59.642 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:28:59.642 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:28:59.642 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1688132 00:28:59.642 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1688132 ']' 00:28:59.642 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1688132 00:28:59.642 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:28:59.642 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:59.642 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1688132 00:28:59.642 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:59.642 01:39:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:59.642 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1688132' 00:28:59.642 killing process with pid 1688132 00:28:59.642 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1688132 00:28:59.642 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1688132 00:28:59.642 Received shutdown signal, test time was about 0.769050 seconds 00:28:59.642 00:28:59.642 Latency(us) 00:28:59.642 [2024-10-12T23:39:45.220Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:59.642 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:59.642 Verification LBA range: start 0x0 length 0x400 00:28:59.642 Nvme1n1 : 0.75 255.91 15.99 0.00 0.00 246491.21 19515.16 246997.90 00:28:59.642 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:59.642 Verification LBA range: start 0x0 length 0x400 00:28:59.642 Nvme2n1 : 0.77 249.94 15.62 0.00 0.00 246212.33 20874.43 262532.36 00:28:59.642 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:59.642 Verification LBA range: start 0x0 length 0x400 00:28:59.642 Nvme3n1 : 0.76 253.40 15.84 0.00 0.00 236750.32 36505.98 262532.36 00:28:59.642 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:59.642 Verification LBA range: start 0x0 length 0x400 00:28:59.642 Nvme4n1 : 0.74 259.97 16.25 0.00 0.00 223965.49 18252.99 248551.35 00:28:59.642 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:59.642 Verification LBA range: start 0x0 length 0x400 00:28:59.642 Nvme5n1 : 0.77 250.91 15.68 0.00 0.00 226888.82 21456.97 242337.56 00:28:59.642 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:59.642 Verification LBA range: start 0x0 length 0x400 00:28:59.642 Nvme6n1 : 0.74 258.12 16.13 0.00 0.00 213123.73 20000.62 233016.89 00:28:59.642 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:59.642 Verification LBA range: start 0x0 length 0x400 00:28:59.642 Nvme7n1 : 0.72 177.53 11.10 0.00 0.00 300255.00 22524.97 256318.58 00:28:59.642 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:59.642 Verification LBA range: start 0x0 length 0x400 00:28:59.642 Nvme8n1 : 0.76 259.17 16.20 0.00 0.00 200486.87 4587.52 248551.35 00:28:59.642 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:59.642 Verification LBA range: start 0x0 length 0x400 00:28:59.642 Nvme9n1 : 0.73 175.43 10.96 0.00 0.00 286905.84 22233.69 281173.71 00:28:59.642 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:59.642 Verification LBA range: start 0x0 length 0x400 00:28:59.642 Nvme10n1 : 0.72 176.55 11.03 0.00 0.00 275950.55 42719.76 262532.36 00:28:59.642 [2024-10-12T23:39:45.220Z] =================================================================================================================== 00:28:59.642 [2024-10-12T23:39:45.220Z] Total : 2316.93 144.81 0.00 0.00 240919.27 4587.52 281173.71 00:28:59.900 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:29:01.272 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@115 -- # kill -0 1688049 00:29:01.272 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:29:01.272 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:01.272 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:01.272 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:01.272 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:01.272 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:01.272 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:29:01.272 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:01.272 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:29:01.272 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:01.272 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:01.272 rmmod nvme_tcp 00:29:01.272 rmmod nvme_fabrics 00:29:01.272 rmmod nvme_keyring 00:29:01.272 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:01.272 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:29:01.272 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:29:01.272 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@515 -- # '[' -n 1688049 ']' 00:29:01.272 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # killprocess 1688049 00:29:01.272 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1688049 ']' 00:29:01.272 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1688049 00:29:01.272 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:29:01.272 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:01.272 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1688049 00:29:01.272 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:01.272 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:01.272 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1688049' 00:29:01.272 killing process with pid 1688049 00:29:01.272 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@969 -- # kill 1688049 00:29:01.272 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1688049 00:29:01.531 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:01.531 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:01.531 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:01.531 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:29:01.531 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-save 00:29:01.531 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:01.531 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-restore 00:29:01.531 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:01.531 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:01.531 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:01.531 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:01.531 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:04.064 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:04.064 00:29:04.064 real 0m7.498s 00:29:04.064 user 0m22.562s 00:29:04.064 sys 0m1.410s 00:29:04.064 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:04.064 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:04.064 ************************************ 00:29:04.064 END TEST nvmf_shutdown_tc2 00:29:04.064 ************************************ 00:29:04.064 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:29:04.064 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:04.064 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:04.064 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:04.064 ************************************ 00:29:04.064 START TEST nvmf_shutdown_tc3 00:29:04.064 ************************************ 00:29:04.064 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:29:04.064 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:29:04.064 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:04.064 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:04.064 01:39:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:04.064 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:04.064 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:04.064 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:04.064 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:04.064 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:04.064 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:04.064 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:04.064 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:04.064 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:04.064 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:04.065 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:04.065 01:39:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:04.065 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:04.065 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:04.065 01:39:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:04.065 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # is_hw=yes 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:04.065 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:04.066 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:04.066 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:04.066 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:04.066 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:04.066 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:04.066 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:04.066 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:04.066 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:04.066 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:04.066 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:04.066 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:04.066 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:04.066 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:04.066 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:04.066 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:04.066 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:04.066 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:04.066 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:04.066 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:04.066 01:39:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:04.066 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:04.066 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:04.066 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:04.066 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:04.066 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:04.066 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:29:04.066 00:29:04.066 --- 10.0.0.2 ping statistics --- 00:29:04.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:04.066 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:29:04.066 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:04.066 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:04.066 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:29:04.066 00:29:04.066 --- 10.0.0.1 ping statistics --- 00:29:04.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:04.066 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:29:04.066 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:04.066 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # return 0 00:29:04.066 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:04.066 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:04.066 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:04.066 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:04.066 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:04.066 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:04.066 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:04.066 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:04.066 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:04.066 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:04.066 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:04.066 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # nvmfpid=1689040 00:29:04.066 01:39:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:04.066 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # waitforlisten 1689040 00:29:04.066 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1689040 ']' 00:29:04.066 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:04.066 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:04.066 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:04.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:04.066 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:04.066 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:04.066 [2024-10-13 01:39:49.316520] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:29:04.066 [2024-10-13 01:39:49.316612] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:04.066 [2024-10-13 01:39:49.384360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:04.066 [2024-10-13 01:39:49.435157] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:04.066 [2024-10-13 01:39:49.435212] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:04.066 [2024-10-13 01:39:49.435237] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:04.066 [2024-10-13 01:39:49.435250] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:04.066 [2024-10-13 01:39:49.435261] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
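As an illustration (not output from this run), the start-and-wait step the trace above performs can be sketched in shell as follows, reusing only the paths and flags visible in this log; the variable names are hypothetical stand-ins, not the autotest helpers themselves:

    #!/usr/bin/env bash
    # Sketch only: launch the NVMe-oF target inside the test namespace and block
    # until its RPC server has finished initializing.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # tree used by this job
    NS=cvl_0_0_ns_spdk                                           # target-side namespace from the trace

    ip netns exec "$NS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!

    # framework_wait_init returns once the app is up and answering on /var/tmp/spdk.sock
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock framework_wait_init
    echo "nvmf_tgt ready, pid $nvmfpid"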
00:29:04.066 [2024-10-13 01:39:49.436965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:04.066 [2024-10-13 01:39:49.437072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:04.066 [2024-10-13 01:39:49.437128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:04.066 [2024-10-13 01:39:49.437130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:04.066 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:04.066 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:29:04.066 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:04.066 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:04.067 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:04.067 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:04.067 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:04.067 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.067 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:04.067 [2024-10-13 01:39:49.570994] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:04.067 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.067 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:04.067 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:04.067 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:04.067 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:04.067 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:04.067 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.067 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:04.067 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.067 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:04.067 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.067 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:04.067 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:29:04.067 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:04.067 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.067 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:04.067 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.067 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:04.067 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.067 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:04.067 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.067 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:04.067 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.067 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:04.067 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:04.067 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:04.067 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:04.067 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.067 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:04.067 Malloc1 00:29:04.325 [2024-10-13 01:39:49.654975] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:04.325 Malloc2 00:29:04.325 Malloc3 00:29:04.325 Malloc4 00:29:04.325 Malloc5 00:29:04.325 Malloc6 00:29:04.583 Malloc7 00:29:04.583 Malloc8 00:29:04.583 Malloc9 00:29:04.583 Malloc10 00:29:04.583 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.583 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:04.583 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:04.583 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:04.583 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1689215 00:29:04.583 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1689215 /var/tmp/bdevperf.sock 00:29:04.583 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1689215 ']' 00:29:04.583 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:04.583 01:39:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:04.583 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:04.583 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:04.583 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # config=() 00:29:04.583 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:04.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:04.583 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # local subsystem config 00:29:04.583 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:04.583 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:04.583 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:04.583 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:04.583 { 00:29:04.583 "params": { 00:29:04.583 "name": "Nvme$subsystem", 00:29:04.583 "trtype": "$TEST_TRANSPORT", 00:29:04.583 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.583 "adrfam": "ipv4", 00:29:04.583 "trsvcid": "$NVMF_PORT", 00:29:04.583 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.584 "hdgst": ${hdgst:-false}, 00:29:04.584 "ddgst": ${ddgst:-false} 00:29:04.584 }, 00:29:04.584 "method": "bdev_nvme_attach_controller" 00:29:04.584 } 00:29:04.584 EOF 00:29:04.584 )") 00:29:04.584 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:29:04.584 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:04.584 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:04.584 { 00:29:04.584 "params": { 00:29:04.584 "name": "Nvme$subsystem", 00:29:04.584 "trtype": "$TEST_TRANSPORT", 00:29:04.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.584 "adrfam": "ipv4", 00:29:04.584 "trsvcid": "$NVMF_PORT", 00:29:04.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.584 "hdgst": ${hdgst:-false}, 00:29:04.584 "ddgst": ${ddgst:-false} 00:29:04.584 }, 00:29:04.584 "method": "bdev_nvme_attach_controller" 00:29:04.584 } 00:29:04.584 EOF 00:29:04.584 )") 00:29:04.584 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:29:04.584 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:04.584 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:04.584 { 00:29:04.584 "params": { 00:29:04.584 
"name": "Nvme$subsystem", 00:29:04.584 "trtype": "$TEST_TRANSPORT", 00:29:04.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.584 "adrfam": "ipv4", 00:29:04.584 "trsvcid": "$NVMF_PORT", 00:29:04.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.584 "hdgst": ${hdgst:-false}, 00:29:04.584 "ddgst": ${ddgst:-false} 00:29:04.584 }, 00:29:04.584 "method": "bdev_nvme_attach_controller" 00:29:04.584 } 00:29:04.584 EOF 00:29:04.584 )") 00:29:04.584 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:29:04.584 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:04.584 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:04.584 { 00:29:04.584 "params": { 00:29:04.584 "name": "Nvme$subsystem", 00:29:04.584 "trtype": "$TEST_TRANSPORT", 00:29:04.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.584 "adrfam": "ipv4", 00:29:04.584 "trsvcid": "$NVMF_PORT", 00:29:04.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.584 "hdgst": ${hdgst:-false}, 00:29:04.584 "ddgst": ${ddgst:-false} 00:29:04.584 }, 00:29:04.584 "method": "bdev_nvme_attach_controller" 00:29:04.584 } 00:29:04.584 EOF 00:29:04.584 )") 00:29:04.584 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:29:04.584 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:04.584 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:04.584 { 00:29:04.584 "params": { 00:29:04.584 "name": "Nvme$subsystem", 00:29:04.584 "trtype": "$TEST_TRANSPORT", 00:29:04.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.584 "adrfam": "ipv4", 00:29:04.584 "trsvcid": "$NVMF_PORT", 00:29:04.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.584 "hdgst": ${hdgst:-false}, 00:29:04.584 "ddgst": ${ddgst:-false} 00:29:04.584 }, 00:29:04.584 "method": "bdev_nvme_attach_controller" 00:29:04.584 } 00:29:04.584 EOF 00:29:04.584 )") 00:29:04.584 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:29:04.584 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:04.584 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:04.584 { 00:29:04.584 "params": { 00:29:04.584 "name": "Nvme$subsystem", 00:29:04.584 "trtype": "$TEST_TRANSPORT", 00:29:04.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.584 "adrfam": "ipv4", 00:29:04.584 "trsvcid": "$NVMF_PORT", 00:29:04.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.584 "hdgst": ${hdgst:-false}, 00:29:04.584 "ddgst": ${ddgst:-false} 00:29:04.584 }, 00:29:04.584 "method": "bdev_nvme_attach_controller" 00:29:04.584 } 00:29:04.584 EOF 00:29:04.584 )") 00:29:04.584 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:29:04.584 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in 
"${@:-1}" 00:29:04.584 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:04.584 { 00:29:04.584 "params": { 00:29:04.584 "name": "Nvme$subsystem", 00:29:04.584 "trtype": "$TEST_TRANSPORT", 00:29:04.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.584 "adrfam": "ipv4", 00:29:04.584 "trsvcid": "$NVMF_PORT", 00:29:04.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.584 "hdgst": ${hdgst:-false}, 00:29:04.584 "ddgst": ${ddgst:-false} 00:29:04.584 }, 00:29:04.584 "method": "bdev_nvme_attach_controller" 00:29:04.584 } 00:29:04.584 EOF 00:29:04.584 )") 00:29:04.584 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:29:04.584 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:04.584 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:04.584 { 00:29:04.584 "params": { 00:29:04.584 "name": "Nvme$subsystem", 00:29:04.584 "trtype": "$TEST_TRANSPORT", 00:29:04.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.584 "adrfam": "ipv4", 00:29:04.584 "trsvcid": "$NVMF_PORT", 00:29:04.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.584 "hdgst": ${hdgst:-false}, 00:29:04.584 "ddgst": ${ddgst:-false} 00:29:04.584 }, 00:29:04.584 "method": "bdev_nvme_attach_controller" 00:29:04.584 } 00:29:04.584 EOF 00:29:04.584 )") 00:29:04.584 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:29:04.584 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:04.584 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:04.584 { 00:29:04.584 "params": { 00:29:04.584 "name": "Nvme$subsystem", 00:29:04.584 "trtype": "$TEST_TRANSPORT", 00:29:04.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.584 "adrfam": "ipv4", 00:29:04.584 "trsvcid": "$NVMF_PORT", 00:29:04.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.584 "hdgst": ${hdgst:-false}, 00:29:04.584 "ddgst": ${ddgst:-false} 00:29:04.584 }, 00:29:04.584 "method": "bdev_nvme_attach_controller" 00:29:04.584 } 00:29:04.584 EOF 00:29:04.584 )") 00:29:04.584 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:29:04.584 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:04.584 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:04.584 { 00:29:04.584 "params": { 00:29:04.584 "name": "Nvme$subsystem", 00:29:04.584 "trtype": "$TEST_TRANSPORT", 00:29:04.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:04.584 "adrfam": "ipv4", 00:29:04.584 "trsvcid": "$NVMF_PORT", 00:29:04.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:04.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:04.584 "hdgst": ${hdgst:-false}, 00:29:04.584 "ddgst": ${ddgst:-false} 00:29:04.584 }, 00:29:04.584 "method": "bdev_nvme_attach_controller" 00:29:04.584 } 00:29:04.584 EOF 00:29:04.584 )") 00:29:04.584 01:39:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:29:04.584 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # jq . 00:29:04.584 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@583 -- # IFS=, 00:29:04.584 01:39:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:29:04.584 "params": { 00:29:04.584 "name": "Nvme1", 00:29:04.584 "trtype": "tcp", 00:29:04.584 "traddr": "10.0.0.2", 00:29:04.584 "adrfam": "ipv4", 00:29:04.584 "trsvcid": "4420", 00:29:04.584 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:04.584 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:04.584 "hdgst": false, 00:29:04.584 "ddgst": false 00:29:04.584 }, 00:29:04.584 "method": "bdev_nvme_attach_controller" 00:29:04.584 },{ 00:29:04.584 "params": { 00:29:04.584 "name": "Nvme2", 00:29:04.584 "trtype": "tcp", 00:29:04.584 "traddr": "10.0.0.2", 00:29:04.584 "adrfam": "ipv4", 00:29:04.584 "trsvcid": "4420", 00:29:04.584 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:04.584 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:04.584 "hdgst": false, 00:29:04.584 "ddgst": false 00:29:04.584 }, 00:29:04.584 "method": "bdev_nvme_attach_controller" 00:29:04.584 },{ 00:29:04.584 "params": { 00:29:04.584 "name": "Nvme3", 00:29:04.584 "trtype": "tcp", 00:29:04.584 "traddr": "10.0.0.2", 00:29:04.584 "adrfam": "ipv4", 00:29:04.584 "trsvcid": "4420", 00:29:04.584 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:04.584 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:04.584 "hdgst": false, 00:29:04.584 "ddgst": false 00:29:04.584 }, 00:29:04.584 "method": "bdev_nvme_attach_controller" 00:29:04.584 },{ 00:29:04.584 "params": { 00:29:04.584 "name": "Nvme4", 00:29:04.584 "trtype": "tcp", 00:29:04.584 "traddr": "10.0.0.2", 00:29:04.584 "adrfam": "ipv4", 00:29:04.584 "trsvcid": "4420", 00:29:04.584 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:04.585 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:04.585 "hdgst": false, 00:29:04.585 "ddgst": false 00:29:04.585 }, 00:29:04.585 "method": "bdev_nvme_attach_controller" 00:29:04.585 },{ 00:29:04.585 "params": { 00:29:04.585 "name": "Nvme5", 00:29:04.585 "trtype": "tcp", 00:29:04.585 "traddr": "10.0.0.2", 00:29:04.585 "adrfam": "ipv4", 00:29:04.585 "trsvcid": "4420", 00:29:04.585 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:04.585 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:04.585 "hdgst": false, 00:29:04.585 "ddgst": false 00:29:04.585 }, 00:29:04.585 "method": "bdev_nvme_attach_controller" 00:29:04.585 },{ 00:29:04.585 "params": { 00:29:04.585 "name": "Nvme6", 00:29:04.585 "trtype": "tcp", 00:29:04.585 "traddr": "10.0.0.2", 00:29:04.585 "adrfam": "ipv4", 00:29:04.585 "trsvcid": "4420", 00:29:04.585 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:04.585 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:04.585 "hdgst": false, 00:29:04.585 "ddgst": false 00:29:04.585 }, 00:29:04.585 "method": "bdev_nvme_attach_controller" 00:29:04.585 },{ 00:29:04.585 "params": { 00:29:04.585 "name": "Nvme7", 00:29:04.585 "trtype": "tcp", 00:29:04.585 "traddr": "10.0.0.2", 00:29:04.585 "adrfam": "ipv4", 00:29:04.585 "trsvcid": "4420", 00:29:04.585 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:04.585 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:04.585 "hdgst": false, 00:29:04.585 "ddgst": false 00:29:04.585 }, 00:29:04.585 "method": "bdev_nvme_attach_controller" 00:29:04.585 },{ 00:29:04.585 "params": { 00:29:04.585 "name": "Nvme8", 00:29:04.585 "trtype": "tcp", 
00:29:04.585 "traddr": "10.0.0.2", 00:29:04.585 "adrfam": "ipv4", 00:29:04.585 "trsvcid": "4420", 00:29:04.585 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:04.585 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:04.585 "hdgst": false, 00:29:04.585 "ddgst": false 00:29:04.585 }, 00:29:04.585 "method": "bdev_nvme_attach_controller" 00:29:04.585 },{ 00:29:04.585 "params": { 00:29:04.585 "name": "Nvme9", 00:29:04.585 "trtype": "tcp", 00:29:04.585 "traddr": "10.0.0.2", 00:29:04.585 "adrfam": "ipv4", 00:29:04.585 "trsvcid": "4420", 00:29:04.585 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:04.585 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:04.585 "hdgst": false, 00:29:04.585 "ddgst": false 00:29:04.585 }, 00:29:04.585 "method": "bdev_nvme_attach_controller" 00:29:04.585 },{ 00:29:04.585 "params": { 00:29:04.585 "name": "Nvme10", 00:29:04.585 "trtype": "tcp", 00:29:04.585 "traddr": "10.0.0.2", 00:29:04.585 "adrfam": "ipv4", 00:29:04.585 "trsvcid": "4420", 00:29:04.585 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:04.585 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:04.585 "hdgst": false, 00:29:04.585 "ddgst": false 00:29:04.585 }, 00:29:04.585 "method": "bdev_nvme_attach_controller" 00:29:04.585 }' 00:29:04.585 [2024-10-13 01:39:50.150699] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:29:04.585 [2024-10-13 01:39:50.150796] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1689215 ] 00:29:04.842 [2024-10-13 01:39:50.215561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:04.842 [2024-10-13 01:39:50.263234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:06.213 Running I/O for 10 seconds... 
00:29:06.778 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:06.778 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:29:06.778 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:06.778 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.778 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:06.778 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.778 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:06.778 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:06.778 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:06.778 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:06.778 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:29:06.778 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:29:06.778 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:06.778 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:06.778 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:06.778 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:06.778 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.778 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:06.778 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.778 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:29:06.778 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:29:06.778 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:07.044 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:07.044 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:07.044 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:07.044 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:07.044 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.044 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:07.044 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.044 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:29:07.044 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:29:07.044 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:29:07.044 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:29:07.044 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:29:07.044 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1689040 00:29:07.044 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 1689040 ']' 00:29:07.044 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 1689040 00:29:07.044 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:29:07.044 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:07.044 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1689040 00:29:07.044 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:07.044 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:07.044 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1689040' 00:29:07.044 killing process with pid 1689040 00:29:07.044 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 1689040 00:29:07.044 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 1689040 00:29:07.044 [2024-10-13 01:39:52.576960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23448d0 is same with the state(6) to be set 00:29:07.044 [2024-10-13 01:39:52.577107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23448d0 is same with the state(6) to be set 00:29:07.044 [2024-10-13 01:39:52.577124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23448d0 is same with the state(6) to be set 00:29:07.044 [2024-10-13 01:39:52.577136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23448d0 is same with the state(6) to be set 00:29:07.044 [2024-10-13 01:39:52.577147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23448d0 is same with the state(6) to be set 00:29:07.044 [2024-10-13 01:39:52.577167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
00:29:07.044 [2024-10-13 01:39:52.576960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23448d0 is same with the state(6) to be set
00:29:07.045 [2024-10-13 01:39:52.579475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2347460 is same with the state(6) to be set
00:29:07.046 [2024-10-13 01:39:52.581785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2344da0 is same with the state(6) to be set
00:29:07.047 [2024-10-13 01:39:52.584308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345270 is same with the state(6) to be set
00:29:07.047 [2024-10-13 01:39:52.586429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345760 is same with the state(6) to be set
00:29:07.048 [2024-10-13 01:39:52.588086] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2345c30 is same with the state(6) to be set
00:29:07.049 [2024-10-13 01:39:52.589863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346120 is same with the state(6) to be set
[... the same recv-state error repeats several dozen more times for each of the seven tqpairs above between 01:39:52.576 and 01:39:52.591; the duplicate lines are omitted ...]
state(6) to be set 00:29:07.049 [2024-10-13 01:39:52.590495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346120 is same with the state(6) to be set 00:29:07.049 [2024-10-13 01:39:52.590509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346120 is same with the state(6) to be set 00:29:07.049 [2024-10-13 01:39:52.590522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346120 is same with the state(6) to be set 00:29:07.049 [2024-10-13 01:39:52.590536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346120 is same with the state(6) to be set 00:29:07.049 [2024-10-13 01:39:52.590549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346120 is same with the state(6) to be set 00:29:07.049 [2024-10-13 01:39:52.590562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346120 is same with the state(6) to be set 00:29:07.049 [2024-10-13 01:39:52.590574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346120 is same with the state(6) to be set 00:29:07.049 [2024-10-13 01:39:52.590586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346120 is same with the state(6) to be set 00:29:07.049 [2024-10-13 01:39:52.590598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346120 is same with the state(6) to be set 00:29:07.049 [2024-10-13 01:39:52.590611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346120 is same with the state(6) to be set 00:29:07.049 [2024-10-13 01:39:52.590623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346120 is same with the state(6) to be set 00:29:07.049 [2024-10-13 01:39:52.590640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346120 is same with the state(6) to be set 00:29:07.049 [2024-10-13 01:39:52.590653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346120 is same with the state(6) to be set 00:29:07.049 [2024-10-13 01:39:52.590666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346120 is same with the state(6) to be set 00:29:07.049 [2024-10-13 01:39:52.590678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346120 is same with the state(6) to be set 00:29:07.049 [2024-10-13 01:39:52.590690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346120 is same with the state(6) to be set 00:29:07.049 [2024-10-13 01:39:52.591963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:07.049 [2024-10-13 01:39:52.592010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.049 [2024-10-13 01:39:52.592028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:07.049 [2024-10-13 01:39:52.592042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.049 [2024-10-13 01:39:52.592058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:07.049 [2024-10-13 01:39:52.592072] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.049 [2024-10-13 01:39:52.592086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:07.050 [2024-10-13 01:39:52.592099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.050 [2024-10-13 01:39:52.592112] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x59afb0 is same with the state(6) to be set 00:29:07.050 [2024-10-13 01:39:52.592169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:07.050 [2024-10-13 01:39:52.592208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.050 [2024-10-13 01:39:52.592228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:07.050 [2024-10-13 01:39:52.592242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.050 [2024-10-13 01:39:52.592256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:07.050 [2024-10-13 01:39:52.592269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.050 [2024-10-13 01:39:52.592283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:07.050 [2024-10-13 01:39:52.592296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.050 [2024-10-13 01:39:52.592309] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4ebf50 is same with the state(6) to be set 00:29:07.050 [2024-10-13 01:39:52.592383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:07.050 [2024-10-13 01:39:52.592405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.050 [2024-10-13 01:39:52.592420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:07.050 [2024-10-13 01:39:52.592440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.050 [2024-10-13 01:39:52.592455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:07.050 [2024-10-13 01:39:52.592477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.050 [2024-10-13 01:39:52.592494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:07.050 [2024-10-13 01:39:52.592507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.050 [2024-10-13 01:39:52.592519] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0fe50 is same with the state(6) to be set 00:29:07.050 [2024-10-13 01:39:52.592588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:07.050 [2024-10-13 01:39:52.592609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.050 [2024-10-13 01:39:52.592624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:07.050 [2024-10-13 01:39:52.592637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.050 [2024-10-13 01:39:52.592651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:07.050 [2024-10-13 01:39:52.592665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.050 [2024-10-13 01:39:52.592680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:07.050 [2024-10-13 01:39:52.592693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.050 [2024-10-13 01:39:52.592706] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x59bfa0 is same with the state(6) to be set 00:29:07.050 [2024-10-13 01:39:52.592751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:07.050 [2024-10-13 01:39:52.592771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.050 [2024-10-13 01:39:52.592786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:07.050 [2024-10-13 01:39:52.592799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.050 [2024-10-13 01:39:52.592814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:07.050 [2024-10-13 01:39:52.592827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.050 [2024-10-13 01:39:52.592850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:07.050 [2024-10-13 01:39:52.592863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.050 [2024-10-13 01:39:52.592879] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dd570 is same with the state(6) to be set 00:29:07.050 [2024-10-13 01:39:52.592927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:07.050 [2024-10-13 01:39:52.592953] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.050 [2024-10-13 01:39:52.592969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:07.050 [2024-10-13 01:39:52.592983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.050 [2024-10-13 01:39:52.592997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:07.050 [2024-10-13 01:39:52.593011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.050 [2024-10-13 01:39:52.593025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:07.050 [2024-10-13 01:39:52.593039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.050 [2024-10-13 01:39:52.593051] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5a2510 is same with the state(6) to be set 00:29:07.050 [2024-10-13 01:39:52.593096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:07.050 [2024-10-13 01:39:52.593119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.050 [2024-10-13 01:39:52.593134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:07.050 [2024-10-13 01:39:52.593148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.050 [2024-10-13 01:39:52.593162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:07.050 [2024-10-13 01:39:52.593177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.050 [2024-10-13 01:39:52.593191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:07.050 [2024-10-13 01:39:52.593207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.050 [2024-10-13 01:39:52.593221] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5aa140 is same with the state(6) to be set 00:29:07.050 [2024-10-13 01:39:52.599353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.050 [2024-10-13 01:39:52.599396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.050 [2024-10-13 01:39:52.599427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.050 [2024-10-13 01:39:52.599444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.050
[2024-10-13 01:39:52.599461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.050
[2024-10-13 01:39:52.599485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.050
[2024-10-13 01:39:52.599502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.050
[2024-10-13 01:39:52.599528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.050
[2024-10-13 01:39:52.599543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.050
[2024-10-13 01:39:52.599569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.050
[2024-10-13 01:39:52.599586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.050
[2024-10-13 01:39:52.599600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.050
[2024-10-13 01:39:52.599615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.050
[2024-10-13 01:39:52.599629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.050
[2024-10-13 01:39:52.599644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.050
[2024-10-13 01:39:52.599657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.050
[2024-10-13 01:39:52.599673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.050
[2024-10-13 01:39:52.599686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.050
[2024-10-13 01:39:52.599701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.050
[2024-10-13 01:39:52.599694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.050
[2024-10-13 01:39:52.599715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.050
[2024-10-13 01:39:52.599729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.050
[2024-10-13 01:39:52.599731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.050
[2024-10-13 01:39:52.599746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.050
[2024-10-13 01:39:52.599748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.050
[2024-10-13 01:39:52.599761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.050
[2024-10-13 01:39:52.599765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.050
[2024-10-13 01:39:52.599774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.050
[2024-10-13 01:39:52.599789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.051
[2024-10-13 01:39:52.599797] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.051
[2024-10-13 01:39:52.599805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.051
[2024-10-13 01:39:52.599810] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.051
[2024-10-13 01:39:52.599819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.051
[2024-10-13 01:39:52.599822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.051
[2024-10-13 01:39:52.599835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.051
[2024-10-13 01:39:52.599841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.051
[2024-10-13 01:39:52.599850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.051
[2024-10-13 01:39:52.599854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.051
[2024-10-13 01:39:52.599866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.051
[2024-10-13 01:39:52.599867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.051
[2024-10-13 01:39:52.599880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.051
[2024-10-13 01:39:52.599883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.051
[2024-10-13 01:39:52.599892] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.051
[2024-10-13 01:39:52.599899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.051
[2024-10-13 01:39:52.599904] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.051
[2024-10-13 01:39:52.599913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.051
[2024-10-13 01:39:52.599917] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.051
[2024-10-13 01:39:52.599929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.051
[2024-10-13 01:39:52.599930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.051
[2024-10-13 01:39:52.599943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.051
[2024-10-13 01:39:52.599946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.051
[2024-10-13 01:39:52.599955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.051
[2024-10-13 01:39:52.599962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.051
[2024-10-13 01:39:52.599968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.051
[2024-10-13 01:39:52.599976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.051
[2024-10-13 01:39:52.599980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.051
[2024-10-13 01:39:52.599992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.051
[2024-10-13 01:39:52.599992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.051
[2024-10-13 01:39:52.600006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.051
[2024-10-13 01:39:52.600008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.051
[2024-10-13 01:39:52.600019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.051
[2024-10-13 01:39:52.600032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.051
[2024-10-13 01:39:52.600035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.051
[2024-10-13 01:39:52.600044] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.051
[2024-10-13 01:39:52.600050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.051
[2024-10-13 01:39:52.600057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.051
[2024-10-13 01:39:52.600066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.051
[2024-10-13 01:39:52.600070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.051
[2024-10-13 01:39:52.600080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.051
[2024-10-13 01:39:52.600082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.051
[2024-10-13 01:39:52.600096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.051
[2024-10-13 01:39:52.600098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.051
[2024-10-13 01:39:52.600108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.051
[2024-10-13 01:39:52.600112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.051
[2024-10-13 01:39:52.600120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.051
[2024-10-13 01:39:52.600128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.051
[2024-10-13 01:39:52.600133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.051
[2024-10-13 01:39:52.600142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.051
[2024-10-13 01:39:52.600146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.051
[2024-10-13 01:39:52.600158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.051
[2024-10-13 01:39:52.600165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.051
[2024-10-13 01:39:52.600173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.051
[2024-10-13 01:39:52.600179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.051
[2024-10-13 01:39:52.600188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.051
[2024-10-13 01:39:52.600192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.051
[2024-10-13 01:39:52.600202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.051
[2024-10-13 01:39:52.600204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.051
[2024-10-13 01:39:52.600221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.051
[2024-10-13 01:39:52.600222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.051
[2024-10-13 01:39:52.600232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.051
[2024-10-13 01:39:52.600237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.051
[2024-10-13 01:39:52.600245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.051
[2024-10-13 01:39:52.600252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.051
[2024-10-13 01:39:52.600257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.051
[2024-10-13 01:39:52.600266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.051
[2024-10-13 01:39:52.600270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.051
[2024-10-13 01:39:52.600282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.051
[2024-10-13 01:39:52.600282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.051
[2024-10-13 01:39:52.600295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.051
[2024-10-13 01:39:52.600298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.051
[2024-10-13 01:39:52.600308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.051
[2024-10-13 01:39:52.600314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.051
[2024-10-13 01:39:52.600320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.051
[2024-10-13 01:39:52.600328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.051
[2024-10-13 01:39:52.600332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.051
[2024-10-13 01:39:52.600344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.051
[2024-10-13 01:39:52.600345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.051
[2024-10-13 01:39:52.600359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.051
[2024-10-13 01:39:52.600359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.051
[2024-10-13 01:39:52.600373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.051
[2024-10-13 01:39:52.600377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.051
[2024-10-13 01:39:52.600387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.051
[2024-10-13 01:39:52.600391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.051
[2024-10-13 01:39:52.600401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.051
[2024-10-13 01:39:52.600411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.051
[2024-10-13 01:39:52.600414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.052
[2024-10-13 01:39:52.600427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.052
[2024-10-13 01:39:52.600428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.052
[2024-10-13 01:39:52.600443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.052
[2024-10-13 01:39:52.600446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.052
[2024-10-13 01:39:52.600456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.052
[2024-10-13 01:39:52.600460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.052
[2024-10-13 01:39:52.600476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.052
[2024-10-13 01:39:52.600485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.052
[2024-10-13 01:39:52.600491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.052
[2024-10-13 01:39:52.600500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.052
[2024-10-13 01:39:52.600505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.052
[2024-10-13 01:39:52.600524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.052
[2024-10-13 01:39:52.600527] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.052 [2024-10-13 01:39:52.600536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.052 [2024-10-13 01:39:52.600541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.052 [2024-10-13 01:39:52.600549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.052 [2024-10-13 01:39:52.600557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.052 [2024-10-13 01:39:52.600561] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346ac0 is same with the state(6) to be set 00:29:07.052 [2024-10-13 01:39:52.600571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.052 [2024-10-13 01:39:52.600587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.052 [2024-10-13 01:39:52.600600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.052 [2024-10-13 01:39:52.600615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.052 [2024-10-13 01:39:52.600633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.052 [2024-10-13 01:39:52.600650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.052 [2024-10-13 01:39:52.600665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.052 [2024-10-13 01:39:52.600680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.052 [2024-10-13 01:39:52.600694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.052 [2024-10-13 01:39:52.600711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.052 [2024-10-13 01:39:52.600725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.052 [2024-10-13 01:39:52.600742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.052 [2024-10-13 01:39:52.600763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.052 [2024-10-13 01:39:52.600788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.052 [2024-10-13 01:39:52.600802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.052 [2024-10-13 01:39:52.600818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.052 [2024-10-13 01:39:52.600831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.052 [2024-10-13 01:39:52.600847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.052 [2024-10-13 01:39:52.600860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.052 [2024-10-13 01:39:52.600875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.052 [2024-10-13 01:39:52.600889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.052 [2024-10-13 01:39:52.600904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.052 [2024-10-13 01:39:52.600918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.052 [2024-10-13 01:39:52.600934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.052 [2024-10-13 01:39:52.600948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.052 [2024-10-13 01:39:52.600964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.052 [2024-10-13 01:39:52.600977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.052 [2024-10-13 01:39:52.600992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.052 [2024-10-13 01:39:52.601021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.052 [2024-10-13 01:39:52.601036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.052 [2024-10-13 01:39:52.601053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.052 [2024-10-13 01:39:52.601068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.052 [2024-10-13 01:39:52.601081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.052 [2024-10-13 01:39:52.601096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.052 [2024-10-13 01:39:52.601109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:29:07.052 [2024-10-13 01:39:52.601125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.052 [2024-10-13 01:39:52.601138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.052 [2024-10-13 01:39:52.601152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.052 [2024-10-13 01:39:52.601165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.052 [2024-10-13 01:39:52.601180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.052 [2024-10-13 01:39:52.601194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.052 [2024-10-13 01:39:52.601208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.052 [2024-10-13 01:39:52.601221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.052 [2024-10-13 01:39:52.601236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.052 [2024-10-13 01:39:52.601250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.052 [2024-10-13 01:39:52.601266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.052 [2024-10-13 01:39:52.601279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.052 [2024-10-13 01:39:52.601272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.052 [2024-10-13 01:39:52.601294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.052 [2024-10-13 01:39:52.601302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.052 [2024-10-13 01:39:52.601307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.052 [2024-10-13 01:39:52.601316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.052 [2024-10-13 01:39:52.601323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.052 [2024-10-13 01:39:52.601329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.052 [2024-10-13 01:39:52.601337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.052 [2024-10-13 01:39:52.601342] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.052
[2024-10-13 01:39:52.601355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.052
[2024-10-13 01:39:52.601356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.052
[2024-10-13 01:39:52.601368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.052
[2024-10-13 01:39:52.601371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.052
[2024-10-13 01:39:52.601380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.052
[2024-10-13 01:39:52.601387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.052
[2024-10-13 01:39:52.601392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.052
[2024-10-13 01:39:52.601401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.052
[2024-10-13 01:39:52.601404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.052
[2024-10-13 01:39:52.601416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.052
[2024-10-13 01:39:52.601416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.052
[2024-10-13 01:39:52.601430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.053
[2024-10-13 01:39:52.601439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.053
[2024-10-13 01:39:52.601442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.053
[2024-10-13 01:39:52.601479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.053
[2024-10-13 01:39:52.601494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.053
[2024-10-13 01:39:52.601518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.053
[2024-10-13 01:39:52.601530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.053
[2024-10-13 01:39:52.601542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.053
[2024-10-13 01:39:52.601554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.053
[2024-10-13 01:39:52.601566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.053
[2024-10-13 01:39:52.601561] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9ace50 was disconnected and freed. reset controller. 00:29:07.053
[2024-10-13 01:39:52.601581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.053
[2024-10-13 01:39:52.601593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.053
[2024-10-13 01:39:52.601605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.053
[2024-10-13 01:39:52.601617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.053
[2024-10-13 01:39:52.601635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.053
[2024-10-13 01:39:52.601648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.053
[2024-10-13 01:39:52.601660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.053
[2024-10-13 01:39:52.601672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.053
[2024-10-13 01:39:52.601684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.053
[2024-10-13 01:39:52.601696] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.053
[2024-10-13 01:39:52.601707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.053
[2024-10-13 01:39:52.601770] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:07.053
[2024-10-13 01:39:52.601807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.053
[2024-10-13 01:39:52.601825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.053
[2024-10-13 01:39:52.601839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.053
[2024-10-13 01:39:52.601852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.053
[2024-10-13 01:39:52.601866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.053
[2024-10-13 01:39:52.601894] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:07.053
[2024-10-13 01:39:52.601940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.053
[2024-10-13 01:39:52.601960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.053
[2024-10-13 01:39:52.601967] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle:
*ERROR*: Unexpected PDU type 0x00 00:29:07.053 [2024-10-13 01:39:52.601974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.053 [2024-10-13 01:39:52.601988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.053 [2024-10-13 01:39:52.602000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.053 [2024-10-13 01:39:52.602013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.053 [2024-10-13 01:39:52.602025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.053 [2024-10-13 01:39:52.602037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.053 [2024-10-13 01:39:52.602036] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:07.053 [2024-10-13 01:39:52.602050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.053 [2024-10-13 01:39:52.602062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.053 [2024-10-13 01:39:52.602074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.053 [2024-10-13 01:39:52.602087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.053 [2024-10-13 01:39:52.602104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.053 [2024-10-13 01:39:52.602105] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:07.053 [2024-10-13 01:39:52.602117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.053 [2024-10-13 01:39:52.602130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.053 [2024-10-13 01:39:52.602143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.053 [2024-10-13 01:39:52.602156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.053 [2024-10-13 01:39:52.602168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.053 [2024-10-13 01:39:52.602180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.053 [2024-10-13 01:39:52.602192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.053 [2024-10-13 01:39:52.602187] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:07.053 [2024-10-13 01:39:52.602205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.053 [2024-10-13 01:39:52.602218] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.053 [2024-10-13 01:39:52.602239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.053 [2024-10-13 01:39:52.602253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.053 [2024-10-13 01:39:52.602265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.053 [2024-10-13 01:39:52.602264] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:07.053 [2024-10-13 01:39:52.602277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.053 [2024-10-13 01:39:52.602290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2346f90 is same with the state(6) to be set 00:29:07.053 [2024-10-13 01:39:52.603555] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:29:07.053 [2024-10-13 01:39:52.603626] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9d4990 (9): Bad file descriptor 00:29:07.053 [2024-10-13 01:39:52.603658] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x59afb0 (9): Bad file descriptor 00:29:07.053 [2024-10-13 01:39:52.603691] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4ebf50 (9): Bad file descriptor 00:29:07.053 [2024-10-13 01:39:52.603743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:07.053 [2024-10-13 01:39:52.603764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.053 [2024-10-13 01:39:52.603787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:07.053 [2024-10-13 01:39:52.603800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.053 [2024-10-13 01:39:52.603814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:07.053 [2024-10-13 01:39:52.603827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.053 [2024-10-13 01:39:52.603854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:07.053 [2024-10-13 01:39:52.603868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.053 [2024-10-13 01:39:52.603881] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03ff0 is same with the state(6) to be set 00:29:07.053 [2024-10-13 01:39:52.603929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:07.053 [2024-10-13 01:39:52.603949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.053 [2024-10-13 01:39:52.603964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:07.053 [2024-10-13 01:39:52.603977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.053 [2024-10-13 01:39:52.603991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:07.053 [2024-10-13 01:39:52.604003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.054 [2024-10-13 01:39:52.604017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:07.054 [2024-10-13 01:39:52.604030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.054 [2024-10-13 01:39:52.604042] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ff2d0 is same with the state(6) to be set 00:29:07.054 [2024-10-13 01:39:52.604072] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa0fe50 (9): Bad file descriptor 00:29:07.054 [2024-10-13 01:39:52.604113] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x59bfa0 (9): Bad file descriptor 00:29:07.054 [2024-10-13 01:39:52.604144] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dd570 (9): Bad file descriptor 00:29:07.054 [2024-10-13 01:39:52.604174] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5a2510 (9): Bad file descriptor 00:29:07.054 [2024-10-13 01:39:52.604203] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5aa140 (9): Bad file descriptor 00:29:07.054 [2024-10-13 01:39:52.604314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.054 [2024-10-13 01:39:52.604337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.054 [2024-10-13 01:39:52.604361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.054 [2024-10-13 01:39:52.604386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.054 [2024-10-13 01:39:52.604407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.054 [2024-10-13 01:39:52.604431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.054 [2024-10-13 01:39:52.604447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.054 [2024-10-13 01:39:52.604460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.054 [2024-10-13 01:39:52.604485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.054 [2024-10-13 01:39:52.604506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.054 [2024-10-13 01:39:52.604527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.054 [2024-10-13 01:39:52.604541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.054 [2024-10-13 01:39:52.604556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.054 [2024-10-13 01:39:52.604569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.054 [2024-10-13 01:39:52.604584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.054 [2024-10-13 01:39:52.604597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.054 [2024-10-13 01:39:52.604612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.054 [2024-10-13 01:39:52.604625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.054 [2024-10-13 01:39:52.604641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.054 [2024-10-13 01:39:52.604654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.054 [2024-10-13 01:39:52.604669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.054 [2024-10-13 01:39:52.604683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.054 [2024-10-13 01:39:52.604698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.054 [2024-10-13 01:39:52.604711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.054 [2024-10-13 01:39:52.604726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.054 [2024-10-13 01:39:52.604741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.054 [2024-10-13 01:39:52.604756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.054 [2024-10-13 01:39:52.604769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.054 [2024-10-13 01:39:52.604790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 
nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.054 [2024-10-13 01:39:52.604807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.054 [2024-10-13 01:39:52.604837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.054 [2024-10-13 01:39:52.604851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.054 [2024-10-13 01:39:52.604867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.054 [2024-10-13 01:39:52.604880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.054 [2024-10-13 01:39:52.604899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.054 [2024-10-13 01:39:52.604913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.054 [2024-10-13 01:39:52.604928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.054 [2024-10-13 01:39:52.604941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.054 [2024-10-13 01:39:52.604955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.054 [2024-10-13 01:39:52.604968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.054 [2024-10-13 01:39:52.604983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.054 [2024-10-13 01:39:52.604996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.054 [2024-10-13 01:39:52.605011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.054 [2024-10-13 01:39:52.605026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.054 [2024-10-13 01:39:52.605041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.054 [2024-10-13 01:39:52.605054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.054 [2024-10-13 01:39:52.605070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.054 [2024-10-13 01:39:52.605084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.054 [2024-10-13 01:39:52.605099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.054 [2024-10-13 01:39:52.605113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.054 [2024-10-13 01:39:52.605128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.054 [2024-10-13 01:39:52.605141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.054 [2024-10-13 01:39:52.605157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.054 [2024-10-13 01:39:52.605170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.054 [2024-10-13 01:39:52.605185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.054 [2024-10-13 01:39:52.605198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.054 [2024-10-13 01:39:52.605213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.054 [2024-10-13 01:39:52.605227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.054 [2024-10-13 01:39:52.605242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.054 [2024-10-13 01:39:52.605260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.054 [2024-10-13 01:39:52.605280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.054 [2024-10-13 01:39:52.605295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.054 [2024-10-13 01:39:52.605310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.054 [2024-10-13 01:39:52.605324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.054 [2024-10-13 01:39:52.605338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.054 [2024-10-13 01:39:52.605351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.054 [2024-10-13 01:39:52.605367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.054 [2024-10-13 01:39:52.605380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.054 [2024-10-13 01:39:52.605395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:07.054 [2024-10-13 01:39:52.605409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.054 [2024-10-13 01:39:52.605424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.054 [2024-10-13 01:39:52.605437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.054 [2024-10-13 01:39:52.605453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.054 [2024-10-13 01:39:52.605467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.054 [2024-10-13 01:39:52.605508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.054 [2024-10-13 01:39:52.605524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.054 [2024-10-13 01:39:52.605540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.055 [2024-10-13 01:39:52.605554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.055 [2024-10-13 01:39:52.605570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.055 [2024-10-13 01:39:52.605584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.055 [2024-10-13 01:39:52.605599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.055 [2024-10-13 01:39:52.605613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.055 [2024-10-13 01:39:52.605628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.055 [2024-10-13 01:39:52.605648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.055 [2024-10-13 01:39:52.605668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.055 [2024-10-13 01:39:52.605682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.055 [2024-10-13 01:39:52.605697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.055 [2024-10-13 01:39:52.605711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.055 [2024-10-13 01:39:52.605726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:07.055 [2024-10-13 01:39:52.605740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.055 [2024-10-13 01:39:52.605755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.055 [2024-10-13 01:39:52.605768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.055 [2024-10-13 01:39:52.605789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.055 [2024-10-13 01:39:52.605803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.055 [2024-10-13 01:39:52.605818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.055 [2024-10-13 01:39:52.605831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.055 [2024-10-13 01:39:52.605847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.055 [2024-10-13 01:39:52.605860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.055 [2024-10-13 01:39:52.605875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.055 [2024-10-13 01:39:52.605889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.055 [2024-10-13 01:39:52.605903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.055 [2024-10-13 01:39:52.605917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.055 [2024-10-13 01:39:52.605932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.055 [2024-10-13 01:39:52.605945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.055 [2024-10-13 01:39:52.605960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.055 [2024-10-13 01:39:52.605974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.055 [2024-10-13 01:39:52.605989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.055 [2024-10-13 01:39:52.606002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.055 [2024-10-13 01:39:52.606017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.055 [2024-10-13 
01:39:52.606034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.055 [2024-10-13 01:39:52.606050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.055 [2024-10-13 01:39:52.606063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.055 [2024-10-13 01:39:52.606079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.055 [2024-10-13 01:39:52.606092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.055 [2024-10-13 01:39:52.606107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.055 [2024-10-13 01:39:52.606126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.055 [2024-10-13 01:39:52.606142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.055 [2024-10-13 01:39:52.606155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.055 [2024-10-13 01:39:52.606170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.055 [2024-10-13 01:39:52.606184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.055 [2024-10-13 01:39:52.606199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.055 [2024-10-13 01:39:52.606213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.055 [2024-10-13 01:39:52.606228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.055 [2024-10-13 01:39:52.606241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.055 [2024-10-13 01:39:52.606258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.055 [2024-10-13 01:39:52.606272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.055 [2024-10-13 01:39:52.606288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.055 [2024-10-13 01:39:52.606302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.055 [2024-10-13 01:39:52.606316] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9aa430 is same with the state(6) to be set 00:29:07.055 [2024-10-13 01:39:52.606407] 
bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9aa430 was disconnected and freed. reset controller. 00:29:07.055 [2024-10-13 01:39:52.608228] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:29:07.055 [2024-10-13 01:39:52.608385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.055 [2024-10-13 01:39:52.608414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d4990 with addr=10.0.0.2, port=4420 00:29:07.055 [2024-10-13 01:39:52.608431] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4990 is same with the state(6) to be set 00:29:07.055 [2024-10-13 01:39:52.608555] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:07.055 [2024-10-13 01:39:52.608755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.055 [2024-10-13 01:39:52.608784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dd570 with addr=10.0.0.2, port=4420 00:29:07.055 [2024-10-13 01:39:52.608801] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dd570 is same with the state(6) to be set 00:29:07.055 [2024-10-13 01:39:52.608820] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9d4990 (9): Bad file descriptor 00:29:07.055 [2024-10-13 01:39:52.609198] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:07.055 [2024-10-13 01:39:52.609239] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dd570 (9): Bad file descriptor 00:29:07.055 [2024-10-13 01:39:52.609262] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:29:07.055 [2024-10-13 01:39:52.609276] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:29:07.055 [2024-10-13 01:39:52.609294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:29:07.055 [2024-10-13 01:39:52.609381] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.055 [2024-10-13 01:39:52.609403] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:29:07.055 [2024-10-13 01:39:52.609417] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:29:07.055 [2024-10-13 01:39:52.609431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:29:07.055 [2024-10-13 01:39:52.609511] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.055 [2024-10-13 01:39:52.613624] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa03ff0 (9): Bad file descriptor 00:29:07.055 [2024-10-13 01:39:52.613670] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ff2d0 (9): Bad file descriptor 00:29:07.055 [2024-10-13 01:39:52.613866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.055 [2024-10-13 01:39:52.613892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.055 [2024-10-13 01:39:52.613923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.055 [2024-10-13 01:39:52.613940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.055 [2024-10-13 01:39:52.613956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.055 [2024-10-13 01:39:52.613971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.055 [2024-10-13 01:39:52.613987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.055 [2024-10-13 01:39:52.614001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.055 [2024-10-13 01:39:52.614018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.055 [2024-10-13 01:39:52.614032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.055 [2024-10-13 01:39:52.614048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.055 [2024-10-13 01:39:52.614061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.055 [2024-10-13 01:39:52.614086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.056 [2024-10-13 01:39:52.614101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.056 [2024-10-13 01:39:52.614116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.056 [2024-10-13 01:39:52.614130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.056 [2024-10-13 01:39:52.614146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.056 [2024-10-13 01:39:52.614159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.056 [2024-10-13 01:39:52.614175] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.056 [2024-10-13 01:39:52.614189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.056 [2024-10-13 01:39:52.614205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.056 [2024-10-13 01:39:52.614219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.056 [2024-10-13 01:39:52.614235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.056 [2024-10-13 01:39:52.614249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.056 [2024-10-13 01:39:52.614264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.056 [2024-10-13 01:39:52.614278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.056 [2024-10-13 01:39:52.614293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.056 [2024-10-13 01:39:52.614307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.056 [2024-10-13 01:39:52.614323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.056 [2024-10-13 01:39:52.614337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.056 [2024-10-13 01:39:52.614352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.056 [2024-10-13 01:39:52.614366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.056 [2024-10-13 01:39:52.614382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.056 [2024-10-13 01:39:52.614396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.056 [2024-10-13 01:39:52.614411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.056 [2024-10-13 01:39:52.614425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.056 [2024-10-13 01:39:52.614440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.056 [2024-10-13 01:39:52.614464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.056 [2024-10-13 01:39:52.614490] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.056 [2024-10-13 01:39:52.614506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.056 [2024-10-13 01:39:52.614522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.056 [2024-10-13 01:39:52.614536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.056 [2024-10-13 01:39:52.614551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.056 [2024-10-13 01:39:52.614564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.056 [2024-10-13 01:39:52.614580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.056 [2024-10-13 01:39:52.614594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.056 [2024-10-13 01:39:52.614609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.056 [2024-10-13 01:39:52.614623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.056 [2024-10-13 01:39:52.614639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.056 [2024-10-13 01:39:52.614652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.056 [2024-10-13 01:39:52.614668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.323 [2024-10-13 01:39:52.629643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.323 [2024-10-13 01:39:52.629730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.323 [2024-10-13 01:39:52.629747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.323 [2024-10-13 01:39:52.629764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.323 [2024-10-13 01:39:52.629777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.323 [2024-10-13 01:39:52.629793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.323 [2024-10-13 01:39:52.629807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.323 [2024-10-13 01:39:52.629823] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.323 [2024-10-13 01:39:52.629837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.323 [2024-10-13 01:39:52.629853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.323 [2024-10-13 01:39:52.629867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.323 [2024-10-13 01:39:52.629897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.323 [2024-10-13 01:39:52.629914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.323 [2024-10-13 01:39:52.629930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.323 [2024-10-13 01:39:52.629943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.323 [2024-10-13 01:39:52.629959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.323 [2024-10-13 01:39:52.629973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.323 [2024-10-13 01:39:52.629990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.323 [2024-10-13 01:39:52.630004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.323 [2024-10-13 01:39:52.630020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.323 [2024-10-13 01:39:52.630033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.323 [2024-10-13 01:39:52.630049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.323 [2024-10-13 01:39:52.630063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.323 [2024-10-13 01:39:52.630079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.323 [2024-10-13 01:39:52.630094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.323 [2024-10-13 01:39:52.630109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.323 [2024-10-13 01:39:52.630123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.323 [2024-10-13 01:39:52.630139] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.323 [2024-10-13 01:39:52.630153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.323 [2024-10-13 01:39:52.630168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.323 [2024-10-13 01:39:52.630182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.323 [2024-10-13 01:39:52.630197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.324 [2024-10-13 01:39:52.630213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.324 [2024-10-13 01:39:52.630228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.324 [2024-10-13 01:39:52.630242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.324 [2024-10-13 01:39:52.630258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.324 [2024-10-13 01:39:52.630275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.324 [2024-10-13 01:39:52.630291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.324 [2024-10-13 01:39:52.630306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.324 [2024-10-13 01:39:52.630321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.324 [2024-10-13 01:39:52.630335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.324 [2024-10-13 01:39:52.630350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.324 [2024-10-13 01:39:52.630364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.324 [2024-10-13 01:39:52.630380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.324 [2024-10-13 01:39:52.630394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.324 [2024-10-13 01:39:52.630409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.324 [2024-10-13 01:39:52.630423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.324 [2024-10-13 01:39:52.630438] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.324 [2024-10-13 01:39:52.630451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.324 [2024-10-13 01:39:52.630468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.324 [2024-10-13 01:39:52.630494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.324 [2024-10-13 01:39:52.630511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.324 [2024-10-13 01:39:52.630524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.324 [2024-10-13 01:39:52.630540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.324 [2024-10-13 01:39:52.630554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.324 [2024-10-13 01:39:52.630569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.324 [2024-10-13 01:39:52.630583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.324 [2024-10-13 01:39:52.630599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.324 [2024-10-13 01:39:52.630612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.324 [2024-10-13 01:39:52.630628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.324 [2024-10-13 01:39:52.630642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.324 [2024-10-13 01:39:52.630661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.324 [2024-10-13 01:39:52.630676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.324 [2024-10-13 01:39:52.630692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.324 [2024-10-13 01:39:52.630706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.324 [2024-10-13 01:39:52.630722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.324 [2024-10-13 01:39:52.630736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.324 [2024-10-13 01:39:52.630751] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.324 [2024-10-13 01:39:52.630765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.324 [2024-10-13 01:39:52.630781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.324 [2024-10-13 01:39:52.630795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.324 [2024-10-13 01:39:52.630811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.324 [2024-10-13 01:39:52.630824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.324 [2024-10-13 01:39:52.630840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.324 [2024-10-13 01:39:52.630853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.324 [2024-10-13 01:39:52.630869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.324 [2024-10-13 01:39:52.630883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.324 [2024-10-13 01:39:52.630898] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5a4d80 is same with the state(6) to be set 00:29:07.324 [2024-10-13 01:39:52.632262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.324 [2024-10-13 01:39:52.632287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.324 [2024-10-13 01:39:52.632311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.324 [2024-10-13 01:39:52.632326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.324 [2024-10-13 01:39:52.632342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.324 [2024-10-13 01:39:52.632356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.324 [2024-10-13 01:39:52.632373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.324 [2024-10-13 01:39:52.632386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.324 [2024-10-13 01:39:52.632407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.324 [2024-10-13 01:39:52.632422] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.324 [2024-10-13 01:39:52.632438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.324 [2024-10-13 01:39:52.632452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.324 [2024-10-13 01:39:52.632469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.324 [2024-10-13 01:39:52.632539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.324 [2024-10-13 01:39:52.632556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.324 [2024-10-13 01:39:52.632570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.324 [2024-10-13 01:39:52.632591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.324 [2024-10-13 01:39:52.632605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.324 [2024-10-13 01:39:52.632620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.324 [2024-10-13 01:39:52.632634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.324 [2024-10-13 01:39:52.632649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.324 [2024-10-13 01:39:52.632663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.324 [2024-10-13 01:39:52.632678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.324 [2024-10-13 01:39:52.632692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.324 [2024-10-13 01:39:52.632708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.324 [2024-10-13 01:39:52.632721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.324 [2024-10-13 01:39:52.632737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.324 [2024-10-13 01:39:52.632751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.324 [2024-10-13 01:39:52.632766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.324 [2024-10-13 01:39:52.632780] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.324 [2024-10-13 01:39:52.632796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.324 [2024-10-13 01:39:52.632810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.324 [2024-10-13 01:39:52.632825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.324 [2024-10-13 01:39:52.632843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.324 [2024-10-13 01:39:52.632860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.324 [2024-10-13 01:39:52.632874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.324 [2024-10-13 01:39:52.632892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.324 [2024-10-13 01:39:52.632906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.325 [2024-10-13 01:39:52.632922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.325 [2024-10-13 01:39:52.632935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.325 [2024-10-13 01:39:52.632951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.325 [2024-10-13 01:39:52.632965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.325 [2024-10-13 01:39:52.632980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.325 [2024-10-13 01:39:52.632995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.325 [2024-10-13 01:39:52.633011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.325 [2024-10-13 01:39:52.633024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.325 [2024-10-13 01:39:52.633040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.325 [2024-10-13 01:39:52.633053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.325 [2024-10-13 01:39:52.633069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.325 [2024-10-13 01:39:52.633083] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.325 [2024-10-13 01:39:52.633099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.325 [2024-10-13 01:39:52.633113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.325 [2024-10-13 01:39:52.633129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.325 [2024-10-13 01:39:52.633143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.325 [2024-10-13 01:39:52.633158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.325 [2024-10-13 01:39:52.633172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.325 [2024-10-13 01:39:52.633187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.325 [2024-10-13 01:39:52.633200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.325 [2024-10-13 01:39:52.633217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.325 [2024-10-13 01:39:52.633235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.325 [2024-10-13 01:39:52.633251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.325 [2024-10-13 01:39:52.633265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.325 [2024-10-13 01:39:52.633280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.325 [2024-10-13 01:39:52.633294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.325 [2024-10-13 01:39:52.633310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.325 [2024-10-13 01:39:52.633323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.325 [2024-10-13 01:39:52.633339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.325 [2024-10-13 01:39:52.633353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.325 [2024-10-13 01:39:52.633369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.325 [2024-10-13 01:39:52.633383] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.325 [2024-10-13 01:39:52.633398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.325 [2024-10-13 01:39:52.633412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.325 [2024-10-13 01:39:52.633427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.325 [2024-10-13 01:39:52.633442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.325 [2024-10-13 01:39:52.633458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.325 [2024-10-13 01:39:52.633479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.325 [2024-10-13 01:39:52.633497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.325 [2024-10-13 01:39:52.633523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.325 [2024-10-13 01:39:52.633539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.325 [2024-10-13 01:39:52.633553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.325 [2024-10-13 01:39:52.633569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.325 [2024-10-13 01:39:52.633583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.325 [2024-10-13 01:39:52.633599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.325 [2024-10-13 01:39:52.633613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.325 [2024-10-13 01:39:52.633633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.325 [2024-10-13 01:39:52.633648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.325 [2024-10-13 01:39:52.633664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.325 [2024-10-13 01:39:52.633677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.325 [2024-10-13 01:39:52.633693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.325 [2024-10-13 01:39:52.633707] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.325 [2024-10-13 01:39:52.633723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.325 [2024-10-13 01:39:52.633737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.325 [2024-10-13 01:39:52.633752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.325 [2024-10-13 01:39:52.633766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.325 [2024-10-13 01:39:52.633782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.325 [2024-10-13 01:39:52.633796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.325 [2024-10-13 01:39:52.633811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.325 [2024-10-13 01:39:52.633826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.325 [2024-10-13 01:39:52.633842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.325 [2024-10-13 01:39:52.633856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.325 [2024-10-13 01:39:52.633872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.325 [2024-10-13 01:39:52.633886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.325 [2024-10-13 01:39:52.633902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.325 [2024-10-13 01:39:52.633916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.325 [2024-10-13 01:39:52.633931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.325 [2024-10-13 01:39:52.633945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.325 [2024-10-13 01:39:52.633961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.325 [2024-10-13 01:39:52.633975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.325 [2024-10-13 01:39:52.633990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.325 [2024-10-13 01:39:52.634008] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.325 [2024-10-13 01:39:52.634025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.325 [2024-10-13 01:39:52.634038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.325 [2024-10-13 01:39:52.634054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.325 [2024-10-13 01:39:52.634068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.325 [2024-10-13 01:39:52.634084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.325 [2024-10-13 01:39:52.634097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.325 [2024-10-13 01:39:52.634113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.325 [2024-10-13 01:39:52.634127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.325 [2024-10-13 01:39:52.634143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.325 [2024-10-13 01:39:52.634157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.325 [2024-10-13 01:39:52.634172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.325 [2024-10-13 01:39:52.634186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.326 [2024-10-13 01:39:52.634202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.326 [2024-10-13 01:39:52.634215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.326 [2024-10-13 01:39:52.634231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.326 [2024-10-13 01:39:52.634245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.326 [2024-10-13 01:39:52.634260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.326 [2024-10-13 01:39:52.634275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.326 [2024-10-13 01:39:52.634289] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5a5f40 is same with the state(6) to be set 00:29:07.326 [2024-10-13 01:39:52.635524] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.326 [2024-10-13 01:39:52.635547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.326 [2024-10-13 01:39:52.635568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.326 [2024-10-13 01:39:52.635584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.326 [2024-10-13 01:39:52.635600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.326 [2024-10-13 01:39:52.635619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.326 [2024-10-13 01:39:52.635637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.326 [2024-10-13 01:39:52.635652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.326 [2024-10-13 01:39:52.635667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.326 [2024-10-13 01:39:52.635681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.326 [2024-10-13 01:39:52.635697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.326 [2024-10-13 01:39:52.635711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.326 [2024-10-13 01:39:52.635727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.326 [2024-10-13 01:39:52.635740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.326 [2024-10-13 01:39:52.635756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.326 [2024-10-13 01:39:52.635770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.326 [2024-10-13 01:39:52.635786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.326 [2024-10-13 01:39:52.635800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.326 [2024-10-13 01:39:52.635815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.326 [2024-10-13 01:39:52.635829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.326 [2024-10-13 01:39:52.635845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 
nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.326 [2024-10-13 01:39:52.635859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.326 [2024-10-13 01:39:52.635875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.326 [2024-10-13 01:39:52.635889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.326 [2024-10-13 01:39:52.635905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.326 [2024-10-13 01:39:52.635918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.326 [2024-10-13 01:39:52.635934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.326 [2024-10-13 01:39:52.635948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.326 [2024-10-13 01:39:52.635964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.326 [2024-10-13 01:39:52.635978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.326 [2024-10-13 01:39:52.635998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.326 [2024-10-13 01:39:52.636012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.326 [2024-10-13 01:39:52.636029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.326 [2024-10-13 01:39:52.636043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.326 [2024-10-13 01:39:52.636058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.326 [2024-10-13 01:39:52.636073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.326 [2024-10-13 01:39:52.636088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.326 [2024-10-13 01:39:52.636102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.326 [2024-10-13 01:39:52.636118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.326 [2024-10-13 01:39:52.636132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.326 [2024-10-13 01:39:52.636149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.326 [2024-10-13 01:39:52.636162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.326 [2024-10-13 01:39:52.636177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.326 [2024-10-13 01:39:52.636191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.326 [2024-10-13 01:39:52.636207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.326 [2024-10-13 01:39:52.636221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.326 [2024-10-13 01:39:52.636237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.326 [2024-10-13 01:39:52.636250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.326 [2024-10-13 01:39:52.636266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.326 [2024-10-13 01:39:52.636280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.326 [2024-10-13 01:39:52.636296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.326 [2024-10-13 01:39:52.636310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.326 [2024-10-13 01:39:52.636326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.326 [2024-10-13 01:39:52.636340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.326 [2024-10-13 01:39:52.636355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.326 [2024-10-13 01:39:52.636373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.326 [2024-10-13 01:39:52.636390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.326 [2024-10-13 01:39:52.636403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.326 [2024-10-13 01:39:52.636419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.326 [2024-10-13 01:39:52.636433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.326 [2024-10-13 01:39:52.636448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:07.326 [2024-10-13 01:39:52.636462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.326 [2024-10-13 01:39:52.636486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.326 [2024-10-13 01:39:52.636512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.326 [2024-10-13 01:39:52.636529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.326 [2024-10-13 01:39:52.636543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.326 [2024-10-13 01:39:52.636558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.326 [2024-10-13 01:39:52.636572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.326 [2024-10-13 01:39:52.636588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.326 [2024-10-13 01:39:52.636602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.326 [2024-10-13 01:39:52.636617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.326 [2024-10-13 01:39:52.636639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.326 [2024-10-13 01:39:52.636655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.326 [2024-10-13 01:39:52.636669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.326 [2024-10-13 01:39:52.636685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.326 [2024-10-13 01:39:52.636699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.326 [2024-10-13 01:39:52.636715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.327 [2024-10-13 01:39:52.636728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.327 [2024-10-13 01:39:52.636744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.327 [2024-10-13 01:39:52.636758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.327 [2024-10-13 01:39:52.636777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:07.327 [2024-10-13 01:39:52.636792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.327 [2024-10-13 01:39:52.636807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.327 [2024-10-13 01:39:52.636821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.327 [2024-10-13 01:39:52.636837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.327 [2024-10-13 01:39:52.636850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.327 [2024-10-13 01:39:52.636865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.327 [2024-10-13 01:39:52.636878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.327 [2024-10-13 01:39:52.636894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.327 [2024-10-13 01:39:52.636908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.327 [2024-10-13 01:39:52.636923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.327 [2024-10-13 01:39:52.636937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.327 [2024-10-13 01:39:52.636952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.327 [2024-10-13 01:39:52.636965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.327 [2024-10-13 01:39:52.636981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.327 [2024-10-13 01:39:52.636995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.327 [2024-10-13 01:39:52.637011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.327 [2024-10-13 01:39:52.637024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.327 [2024-10-13 01:39:52.637040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.327 [2024-10-13 01:39:52.637053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.327 [2024-10-13 01:39:52.637069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.327 [2024-10-13 
01:39:52.637082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.327 [2024-10-13 01:39:52.637098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.327 [2024-10-13 01:39:52.637116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.327 [2024-10-13 01:39:52.637132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.327 [2024-10-13 01:39:52.637150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.327 [2024-10-13 01:39:52.637166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.327 [2024-10-13 01:39:52.637180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.327 [2024-10-13 01:39:52.637196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.327 [2024-10-13 01:39:52.637210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.327 [2024-10-13 01:39:52.637225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.327 [2024-10-13 01:39:52.637239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.327 [2024-10-13 01:39:52.637255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.327 [2024-10-13 01:39:52.637268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.327 [2024-10-13 01:39:52.637284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.327 [2024-10-13 01:39:52.637297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.327 [2024-10-13 01:39:52.637313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.327 [2024-10-13 01:39:52.637327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.327 [2024-10-13 01:39:52.637342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.327 [2024-10-13 01:39:52.637356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.327 [2024-10-13 01:39:52.637373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.327 [2024-10-13 01:39:52.637386] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.327 [2024-10-13 01:39:52.637402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.327 [2024-10-13 01:39:52.637416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.327 [2024-10-13 01:39:52.637431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.327 [2024-10-13 01:39:52.637445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.327 [2024-10-13 01:39:52.637460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.327 [2024-10-13 01:39:52.637482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.327 [2024-10-13 01:39:52.637503] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc090c0 is same with the state(6) to be set 00:29:07.327 [2024-10-13 01:39:52.638757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.327 [2024-10-13 01:39:52.638785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.327 [2024-10-13 01:39:52.638805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.327 [2024-10-13 01:39:52.638820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.327 [2024-10-13 01:39:52.638836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.327 [2024-10-13 01:39:52.638849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.327 [2024-10-13 01:39:52.638865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.327 [2024-10-13 01:39:52.638880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.327 [2024-10-13 01:39:52.638896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.327 [2024-10-13 01:39:52.638909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.327 [2024-10-13 01:39:52.638925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.327 [2024-10-13 01:39:52.638939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.327 [2024-10-13 01:39:52.638954] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.327 [2024-10-13 01:39:52.638968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.327 [2024-10-13 01:39:52.638983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.327 [2024-10-13 01:39:52.638997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.327 [2024-10-13 01:39:52.639012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.327 [2024-10-13 01:39:52.639025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.327 [2024-10-13 01:39:52.639040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.327 [2024-10-13 01:39:52.639054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.328 [2024-10-13 01:39:52.639069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.328 [2024-10-13 01:39:52.639083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.328 [2024-10-13 01:39:52.639098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.328 [2024-10-13 01:39:52.639112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.328 [2024-10-13 01:39:52.639127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.328 [2024-10-13 01:39:52.639140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.328 [2024-10-13 01:39:52.639165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.328 [2024-10-13 01:39:52.639180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.328 [2024-10-13 01:39:52.639195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.328 [2024-10-13 01:39:52.639209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.328 [2024-10-13 01:39:52.639224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.328 [2024-10-13 01:39:52.639239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.328 [2024-10-13 01:39:52.639255] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.328 [2024-10-13 01:39:52.639269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.328 [2024-10-13 01:39:52.639285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.328 [2024-10-13 01:39:52.639299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.328 [2024-10-13 01:39:52.639315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.328 [2024-10-13 01:39:52.639329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.328 [2024-10-13 01:39:52.639345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.328 [2024-10-13 01:39:52.639359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.328 [2024-10-13 01:39:52.639375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.328 [2024-10-13 01:39:52.639389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.328 [2024-10-13 01:39:52.639405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.328 [2024-10-13 01:39:52.639419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.328 [2024-10-13 01:39:52.639435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.328 [2024-10-13 01:39:52.639448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.328 [2024-10-13 01:39:52.639464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.328 [2024-10-13 01:39:52.639487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.328 [2024-10-13 01:39:52.639515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.328 [2024-10-13 01:39:52.639529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.328 [2024-10-13 01:39:52.639545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.328 [2024-10-13 01:39:52.639563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.328 [2024-10-13 01:39:52.639580] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.328 [2024-10-13 01:39:52.639594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.328 [2024-10-13 01:39:52.639610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.328 [2024-10-13 01:39:52.639624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.328 [2024-10-13 01:39:52.639639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.328 [2024-10-13 01:39:52.639653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.328 [2024-10-13 01:39:52.639669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.328 [2024-10-13 01:39:52.639683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.328 [2024-10-13 01:39:52.639699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.328 [2024-10-13 01:39:52.639712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.328 [2024-10-13 01:39:52.639728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.328 [2024-10-13 01:39:52.639742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.328 [2024-10-13 01:39:52.639758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.328 [2024-10-13 01:39:52.639772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.328 [2024-10-13 01:39:52.639787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.328 [2024-10-13 01:39:52.639801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.328 [2024-10-13 01:39:52.639817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.328 [2024-10-13 01:39:52.639831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.328 [2024-10-13 01:39:52.639847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.328 [2024-10-13 01:39:52.639860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.328 [2024-10-13 01:39:52.639876] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.328 [2024-10-13 01:39:52.639890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.328 [2024-10-13 01:39:52.639906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.328 [2024-10-13 01:39:52.639921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.328 [2024-10-13 01:39:52.639941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.328 [2024-10-13 01:39:52.639956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.328 [2024-10-13 01:39:52.639971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.328 [2024-10-13 01:39:52.639985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.328 [2024-10-13 01:39:52.640002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.328 [2024-10-13 01:39:52.640016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.328 [2024-10-13 01:39:52.640032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.328 [2024-10-13 01:39:52.640046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.328 [2024-10-13 01:39:52.640063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.328 [2024-10-13 01:39:52.640077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.328 [2024-10-13 01:39:52.640092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.328 [2024-10-13 01:39:52.640106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.328 [2024-10-13 01:39:52.640121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.328 [2024-10-13 01:39:52.640135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.328 [2024-10-13 01:39:52.640151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.328 [2024-10-13 01:39:52.640165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.328 [2024-10-13 01:39:52.640181] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.328 [2024-10-13 01:39:52.640194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.328 [2024-10-13 01:39:52.640211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.328 [2024-10-13 01:39:52.640225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.328 [2024-10-13 01:39:52.640241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.328 [2024-10-13 01:39:52.640255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.328 [2024-10-13 01:39:52.640270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.328 [2024-10-13 01:39:52.640284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.328 [2024-10-13 01:39:52.640300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.328 [2024-10-13 01:39:52.640317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.328 [2024-10-13 01:39:52.640335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.328 [2024-10-13 01:39:52.640349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.329 [2024-10-13 01:39:52.640364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.329 [2024-10-13 01:39:52.640378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.329 [2024-10-13 01:39:52.640394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.329 [2024-10-13 01:39:52.640408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.329 [2024-10-13 01:39:52.640429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.329 [2024-10-13 01:39:52.640443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.329 [2024-10-13 01:39:52.640459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.329 [2024-10-13 01:39:52.640480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.329 [2024-10-13 01:39:52.640498] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.329 [2024-10-13 01:39:52.640511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.329 [2024-10-13 01:39:52.640527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.329 [2024-10-13 01:39:52.640540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.329 [2024-10-13 01:39:52.640556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.329 [2024-10-13 01:39:52.640570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.329 [2024-10-13 01:39:52.640586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.329 [2024-10-13 01:39:52.640599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.329 [2024-10-13 01:39:52.640615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.329 [2024-10-13 01:39:52.640629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.329 [2024-10-13 01:39:52.640645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.329 [2024-10-13 01:39:52.640659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.329 [2024-10-13 01:39:52.640675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.329 [2024-10-13 01:39:52.640688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.329 [2024-10-13 01:39:52.640710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.329 [2024-10-13 01:39:52.640724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.329 [2024-10-13 01:39:52.640739] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc16340 is same with the state(6) to be set 00:29:07.329 [2024-10-13 01:39:52.641967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.329 [2024-10-13 01:39:52.641989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.329 [2024-10-13 01:39:52.642011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.329 [2024-10-13 01:39:52.642026] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.329 [2024-10-13 01:39:52.642042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.329 [2024-10-13 01:39:52.642056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.329 [2024-10-13 01:39:52.642071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.329 [2024-10-13 01:39:52.642085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.329 [2024-10-13 01:39:52.642101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.329 [2024-10-13 01:39:52.642115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.329 [2024-10-13 01:39:52.642130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.329 [2024-10-13 01:39:52.642145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.329 [2024-10-13 01:39:52.642162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.329 [2024-10-13 01:39:52.642176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.329 [2024-10-13 01:39:52.642192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.329 [2024-10-13 01:39:52.642205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.329 [2024-10-13 01:39:52.642221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.329 [2024-10-13 01:39:52.642235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.329 [2024-10-13 01:39:52.642252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.329 [2024-10-13 01:39:52.642266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.329 [2024-10-13 01:39:52.642281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.329 [2024-10-13 01:39:52.642295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.329 [2024-10-13 01:39:52.642311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.329 [2024-10-13 01:39:52.642329] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.329 [2024-10-13 01:39:52.642345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.329 [2024-10-13 01:39:52.642360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.329 [2024-10-13 01:39:52.642375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.329 [2024-10-13 01:39:52.642390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.329 [2024-10-13 01:39:52.642406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.329 [2024-10-13 01:39:52.642424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.329 [2024-10-13 01:39:52.642441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.329 [2024-10-13 01:39:52.642454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.329 [2024-10-13 01:39:52.642482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.329 [2024-10-13 01:39:52.642499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.329 [2024-10-13 01:39:52.642519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.329 [2024-10-13 01:39:52.642532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.329 [2024-10-13 01:39:52.642548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.329 [2024-10-13 01:39:52.642562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.329 [2024-10-13 01:39:52.642578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.329 [2024-10-13 01:39:52.642591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.329 [2024-10-13 01:39:52.642607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.329 [2024-10-13 01:39:52.642620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.329 [2024-10-13 01:39:52.642636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.329 [2024-10-13 01:39:52.642649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.329 [2024-10-13 01:39:52.642665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.329 [2024-10-13 01:39:52.642678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.329 [2024-10-13 01:39:52.642694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.329 [2024-10-13 01:39:52.642708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.329 [2024-10-13 01:39:52.642728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.329 [2024-10-13 01:39:52.642742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.329 [2024-10-13 01:39:52.642758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.329 [2024-10-13 01:39:52.642772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.329 [2024-10-13 01:39:52.642787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.329 [2024-10-13 01:39:52.650375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.329 [2024-10-13 01:39:52.650451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.329 [2024-10-13 01:39:52.650466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.329 [2024-10-13 01:39:52.650490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.329 [2024-10-13 01:39:52.650516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.330 [2024-10-13 01:39:52.650531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.330 [2024-10-13 01:39:52.650545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.330 [2024-10-13 01:39:52.650561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.330 [2024-10-13 01:39:52.650577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.330 [2024-10-13 01:39:52.650594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.330 [2024-10-13 01:39:52.650610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.330 [2024-10-13 01:39:52.650626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.330 [2024-10-13 01:39:52.650639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.330 [2024-10-13 01:39:52.650656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.330 [2024-10-13 01:39:52.650669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.330 [2024-10-13 01:39:52.650685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.330 [2024-10-13 01:39:52.650699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.330 [2024-10-13 01:39:52.650715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.330 [2024-10-13 01:39:52.650729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.330 [2024-10-13 01:39:52.650745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.330 [2024-10-13 01:39:52.650770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.330 [2024-10-13 01:39:52.650787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.330 [2024-10-13 01:39:52.650801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.330 [2024-10-13 01:39:52.650817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.330 [2024-10-13 01:39:52.650831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.330 [2024-10-13 01:39:52.650846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.330 [2024-10-13 01:39:52.650860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.330 [2024-10-13 01:39:52.650875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.330 [2024-10-13 01:39:52.650889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.330 [2024-10-13 01:39:52.650905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.330 [2024-10-13 01:39:52.650918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:07.330 [2024-10-13 01:39:52.650934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.330 [2024-10-13 01:39:52.650948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.330 [2024-10-13 01:39:52.650964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.330 [2024-10-13 01:39:52.650978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.330 [2024-10-13 01:39:52.650994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.330 [2024-10-13 01:39:52.651007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.330 [2024-10-13 01:39:52.651023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.330 [2024-10-13 01:39:52.651037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.330 [2024-10-13 01:39:52.651053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.330 [2024-10-13 01:39:52.651068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.330 [2024-10-13 01:39:52.651084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.330 [2024-10-13 01:39:52.651098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.330 [2024-10-13 01:39:52.651113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.330 [2024-10-13 01:39:52.651127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.330 [2024-10-13 01:39:52.651147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.330 [2024-10-13 01:39:52.651161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.330 [2024-10-13 01:39:52.651176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.330 [2024-10-13 01:39:52.651190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.330 [2024-10-13 01:39:52.651206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.330 [2024-10-13 01:39:52.651220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:07.330 [2024-10-13 01:39:52.651235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.330 [2024-10-13 01:39:52.651249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.330 [2024-10-13 01:39:52.651264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.330 [2024-10-13 01:39:52.651278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.330 [2024-10-13 01:39:52.651294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.330 [2024-10-13 01:39:52.651308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.330 [2024-10-13 01:39:52.651324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.330 [2024-10-13 01:39:52.651338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.330 [2024-10-13 01:39:52.651353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.330 [2024-10-13 01:39:52.651367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.330 [2024-10-13 01:39:52.651383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.330 [2024-10-13 01:39:52.651396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.330 [2024-10-13 01:39:52.651412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.330 [2024-10-13 01:39:52.651425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.330 [2024-10-13 01:39:52.651441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.330 [2024-10-13 01:39:52.651455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.330 [2024-10-13 01:39:52.651483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.330 [2024-10-13 01:39:52.651501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.330 [2024-10-13 01:39:52.651517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.330 [2024-10-13 01:39:52.651535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.330 [2024-10-13 
01:39:52.651551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.330 [2024-10-13 01:39:52.651566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.330 [2024-10-13 01:39:52.651582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.330 [2024-10-13 01:39:52.651595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.330 [2024-10-13 01:39:52.651611] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ab960 is same with the state(6) to be set 00:29:07.330 [2024-10-13 01:39:52.653019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.330 [2024-10-13 01:39:52.653045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.330 [2024-10-13 01:39:52.653068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.330 [2024-10-13 01:39:52.653083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.330 [2024-10-13 01:39:52.653099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.330 [2024-10-13 01:39:52.653113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.330 [2024-10-13 01:39:52.653129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.330 [2024-10-13 01:39:52.653142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.330 [2024-10-13 01:39:52.653159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.330 [2024-10-13 01:39:52.653172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.330 [2024-10-13 01:39:52.653188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.330 [2024-10-13 01:39:52.653202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.330 [2024-10-13 01:39:52.653217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.331 [2024-10-13 01:39:52.653231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.331 [2024-10-13 01:39:52.653247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.331 [2024-10-13 01:39:52.653261] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.331 [2024-10-13 01:39:52.653277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.331 [2024-10-13 01:39:52.653291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.331 [2024-10-13 01:39:52.653306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.331 [2024-10-13 01:39:52.653324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.331 [2024-10-13 01:39:52.653340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.331 [2024-10-13 01:39:52.653355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.331 [2024-10-13 01:39:52.653371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.331 [2024-10-13 01:39:52.653385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.331 [2024-10-13 01:39:52.653401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.331 [2024-10-13 01:39:52.653414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.331 [2024-10-13 01:39:52.653431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.331 [2024-10-13 01:39:52.653445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.331 [2024-10-13 01:39:52.653460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.331 [2024-10-13 01:39:52.653482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.331 [2024-10-13 01:39:52.653500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.331 [2024-10-13 01:39:52.653515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.331 [2024-10-13 01:39:52.653531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.331 [2024-10-13 01:39:52.653545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.331 [2024-10-13 01:39:52.653561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.331 [2024-10-13 01:39:52.653575] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.331 [2024-10-13 01:39:52.653590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.331 [2024-10-13 01:39:52.653604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.331 [2024-10-13 01:39:52.653620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.331 [2024-10-13 01:39:52.653633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.331 [2024-10-13 01:39:52.653649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.331 [2024-10-13 01:39:52.653663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.331 [2024-10-13 01:39:52.653678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.331 [2024-10-13 01:39:52.653692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.331 [2024-10-13 01:39:52.653712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.331 [2024-10-13 01:39:52.653727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.331 [2024-10-13 01:39:52.653742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.331 [2024-10-13 01:39:52.653756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.331 [2024-10-13 01:39:52.653772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.331 [2024-10-13 01:39:52.653786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.331 [2024-10-13 01:39:52.653802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.331 [2024-10-13 01:39:52.653816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.331 [2024-10-13 01:39:52.653831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.331 [2024-10-13 01:39:52.653845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.331 [2024-10-13 01:39:52.653861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.331 [2024-10-13 01:39:52.653874] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.331 [2024-10-13 01:39:52.653890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.331 [2024-10-13 01:39:52.653903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.331 [2024-10-13 01:39:52.653921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.331 [2024-10-13 01:39:52.653935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.331 [2024-10-13 01:39:52.653951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.331 [2024-10-13 01:39:52.653964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.331 [2024-10-13 01:39:52.653981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.331 [2024-10-13 01:39:52.653995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.331 [2024-10-13 01:39:52.654010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.331 [2024-10-13 01:39:52.654024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.331 [2024-10-13 01:39:52.654039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.331 [2024-10-13 01:39:52.654053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.331 [2024-10-13 01:39:52.654069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.331 [2024-10-13 01:39:52.654087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.331 [2024-10-13 01:39:52.654103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.331 [2024-10-13 01:39:52.654117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.331 [2024-10-13 01:39:52.654133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.331 [2024-10-13 01:39:52.654146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.331 [2024-10-13 01:39:52.654163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.331 [2024-10-13 01:39:52.654177] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.331 [2024-10-13 01:39:52.654192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.331 [2024-10-13 01:39:52.654206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.331 [2024-10-13 01:39:52.654222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.331 [2024-10-13 01:39:52.654235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.331 [2024-10-13 01:39:52.654251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.331 [2024-10-13 01:39:52.654264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.331 [2024-10-13 01:39:52.654280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.331 [2024-10-13 01:39:52.654294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.331 [2024-10-13 01:39:52.654309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.331 [2024-10-13 01:39:52.654323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.331 [2024-10-13 01:39:52.654338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.331 [2024-10-13 01:39:52.654352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.331 [2024-10-13 01:39:52.654368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.331 [2024-10-13 01:39:52.654381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.331 [2024-10-13 01:39:52.654398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.331 [2024-10-13 01:39:52.654412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.331 [2024-10-13 01:39:52.654427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.331 [2024-10-13 01:39:52.654441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.331 [2024-10-13 01:39:52.654460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.332 [2024-10-13 01:39:52.654482] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.332 [2024-10-13 01:39:52.654499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.332 [2024-10-13 01:39:52.654513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.332 [2024-10-13 01:39:52.654529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.332 [2024-10-13 01:39:52.654544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.332 [2024-10-13 01:39:52.654560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.332 [2024-10-13 01:39:52.654574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.332 [2024-10-13 01:39:52.654589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.332 [2024-10-13 01:39:52.654603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.332 [2024-10-13 01:39:52.654619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.332 [2024-10-13 01:39:52.654633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.332 [2024-10-13 01:39:52.654648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.332 [2024-10-13 01:39:52.654662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.332 [2024-10-13 01:39:52.654679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.332 [2024-10-13 01:39:52.654693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.332 [2024-10-13 01:39:52.654708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.332 [2024-10-13 01:39:52.654722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.332 [2024-10-13 01:39:52.654738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.332 [2024-10-13 01:39:52.654751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.332 [2024-10-13 01:39:52.654767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.332 [2024-10-13 01:39:52.654781] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.332 [2024-10-13 01:39:52.654797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.332 [2024-10-13 01:39:52.654810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.332 [2024-10-13 01:39:52.654826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.332 [2024-10-13 01:39:52.654844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.332 [2024-10-13 01:39:52.654861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.332 [2024-10-13 01:39:52.654875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.332 [2024-10-13 01:39:52.654891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.332 [2024-10-13 01:39:52.654904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.332 [2024-10-13 01:39:52.654920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.332 [2024-10-13 01:39:52.654934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.332 [2024-10-13 01:39:52.654950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.332 [2024-10-13 01:39:52.654964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.332 [2024-10-13 01:39:52.654979] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbea860 is same with the state(6) to be set 00:29:07.332 [2024-10-13 01:39:52.656171] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.332 [2024-10-13 01:39:52.656200] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:29:07.332 [2024-10-13 01:39:52.656219] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:29:07.332 [2024-10-13 01:39:52.656238] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:29:07.332 [2024-10-13 01:39:52.656349] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:07.332 [2024-10-13 01:39:52.656386] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:29:07.332 [2024-10-13 01:39:52.672240] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:29:07.332 [2024-10-13 01:39:52.672336] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:29:07.332 [2024-10-13 01:39:52.672660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.332 [2024-10-13 01:39:52.672696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5a2510 with addr=10.0.0.2, port=4420 00:29:07.332 [2024-10-13 01:39:52.672718] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5a2510 is same with the state(6) to be set 00:29:07.332 [2024-10-13 01:39:52.672794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.332 [2024-10-13 01:39:52.672820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5aa140 with addr=10.0.0.2, port=4420 00:29:07.332 [2024-10-13 01:39:52.672847] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5aa140 is same with the state(6) to be set 00:29:07.332 [2024-10-13 01:39:52.672947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.332 [2024-10-13 01:39:52.672972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59afb0 with addr=10.0.0.2, port=4420 00:29:07.332 [2024-10-13 01:39:52.672988] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x59afb0 is same with the state(6) to be set 00:29:07.332 [2024-10-13 01:39:52.673076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.332 [2024-10-13 01:39:52.673102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59bfa0 with addr=10.0.0.2, port=4420 00:29:07.332 [2024-10-13 01:39:52.673142] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x59bfa0 is same with the state(6) to be set 00:29:07.332 [2024-10-13 01:39:52.673191] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:07.332 [2024-10-13 01:39:52.673217] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:07.332 [2024-10-13 01:39:52.673239] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:07.332 [2024-10-13 01:39:52.673258] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:29:07.332 [2024-10-13 01:39:52.673290] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x59bfa0 (9): Bad file descriptor 00:29:07.332 [2024-10-13 01:39:52.673321] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x59afb0 (9): Bad file descriptor 00:29:07.332 [2024-10-13 01:39:52.673345] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5aa140 (9): Bad file descriptor 00:29:07.332 [2024-10-13 01:39:52.673368] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5a2510 (9): Bad file descriptor 00:29:07.332 [2024-10-13 01:39:52.674777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.332 [2024-10-13 01:39:52.674805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.332 [2024-10-13 01:39:52.674838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.332 [2024-10-13 01:39:52.674853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.332 [2024-10-13 01:39:52.674880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.332 [2024-10-13 01:39:52.674894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.332 [2024-10-13 01:39:52.674910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.332 [2024-10-13 01:39:52.674924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.332 [2024-10-13 01:39:52.674939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.332 [2024-10-13 01:39:52.674953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.332 [2024-10-13 01:39:52.674969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.332 [2024-10-13 01:39:52.674983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.332 [2024-10-13 01:39:52.674999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.332 [2024-10-13 01:39:52.675013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.332 [2024-10-13 01:39:52.675029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.332 [2024-10-13 01:39:52.675043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.332 [2024-10-13 01:39:52.675059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.333 [2024-10-13 01:39:52.675078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.333 [2024-10-13 01:39:52.675095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.333 [2024-10-13 01:39:52.675109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.333 [2024-10-13 01:39:52.675125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.333 [2024-10-13 01:39:52.675139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.333 [2024-10-13 01:39:52.675155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.333 [2024-10-13 01:39:52.675169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.333 [2024-10-13 01:39:52.675185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.333 [2024-10-13 01:39:52.675199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.333 [2024-10-13 01:39:52.675214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.333 [2024-10-13 01:39:52.675228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.333 [2024-10-13 01:39:52.675244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.333 [2024-10-13 01:39:52.675258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.333 [2024-10-13 01:39:52.675273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.333 [2024-10-13 01:39:52.675287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.333 [2024-10-13 01:39:52.675303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.333 [2024-10-13 01:39:52.675316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.333 [2024-10-13 01:39:52.675332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.333 [2024-10-13 01:39:52.675346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.333 [2024-10-13 01:39:52.675361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 
lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.333 [2024-10-13 01:39:52.675375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.333 [2024-10-13 01:39:52.675391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.333 [2024-10-13 01:39:52.675405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.333 [2024-10-13 01:39:52.675421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.333 [2024-10-13 01:39:52.675435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.333 [2024-10-13 01:39:52.675455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.333 [2024-10-13 01:39:52.675477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.333 [2024-10-13 01:39:52.675495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.333 [2024-10-13 01:39:52.675510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.333 [2024-10-13 01:39:52.675526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.333 [2024-10-13 01:39:52.675540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.333 [2024-10-13 01:39:52.675556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.333 [2024-10-13 01:39:52.675570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.333 [2024-10-13 01:39:52.675585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.333 [2024-10-13 01:39:52.675599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.333 [2024-10-13 01:39:52.675615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.333 [2024-10-13 01:39:52.675629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.333 [2024-10-13 01:39:52.675644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.333 [2024-10-13 01:39:52.675658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.333 [2024-10-13 01:39:52.675673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.333 [2024-10-13 01:39:52.675687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.333 [2024-10-13 01:39:52.675703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.333 [2024-10-13 01:39:52.675716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.333 [2024-10-13 01:39:52.675732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.333 [2024-10-13 01:39:52.675745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.333 [2024-10-13 01:39:52.675761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.333 [2024-10-13 01:39:52.675775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.333 [2024-10-13 01:39:52.675790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.333 [2024-10-13 01:39:52.675804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.333 [2024-10-13 01:39:52.675819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.333 [2024-10-13 01:39:52.675836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.333 [2024-10-13 01:39:52.675853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.333 [2024-10-13 01:39:52.675866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.333 [2024-10-13 01:39:52.675882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.333 [2024-10-13 01:39:52.675896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.333 [2024-10-13 01:39:52.675912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.333 [2024-10-13 01:39:52.675926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.333 [2024-10-13 01:39:52.675941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.333 [2024-10-13 01:39:52.675955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.333 [2024-10-13 01:39:52.675970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:07.333 [2024-10-13 01:39:52.675984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.333 [2024-10-13 01:39:52.676000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.333 [2024-10-13 01:39:52.676015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.333 [2024-10-13 01:39:52.676031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.333 [2024-10-13 01:39:52.676044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.333 [2024-10-13 01:39:52.676060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.333 [2024-10-13 01:39:52.676074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.333 [2024-10-13 01:39:52.676090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.333 [2024-10-13 01:39:52.676103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.333 [2024-10-13 01:39:52.676119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.333 [2024-10-13 01:39:52.676132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.333 [2024-10-13 01:39:52.676148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.333 [2024-10-13 01:39:52.676162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.333 [2024-10-13 01:39:52.676178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.333 [2024-10-13 01:39:52.676191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.333 [2024-10-13 01:39:52.676211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.333 [2024-10-13 01:39:52.676225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.333 [2024-10-13 01:39:52.676241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.333 [2024-10-13 01:39:52.676254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.333 [2024-10-13 01:39:52.676270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:07.333 [2024-10-13 01:39:52.676283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.333 [2024-10-13 01:39:52.676301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.333 [2024-10-13 01:39:52.676314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.334 [2024-10-13 01:39:52.676330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.334 [2024-10-13 01:39:52.676344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.334 [2024-10-13 01:39:52.676359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.334 [2024-10-13 01:39:52.676373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.334 [2024-10-13 01:39:52.676389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.334 [2024-10-13 01:39:52.676402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.334 [2024-10-13 01:39:52.676418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.334 [2024-10-13 01:39:52.676432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.334 [2024-10-13 01:39:52.676447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.334 [2024-10-13 01:39:52.676461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.334 [2024-10-13 01:39:52.676482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.334 [2024-10-13 01:39:52.676498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.334 [2024-10-13 01:39:52.676514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.334 [2024-10-13 01:39:52.676528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.334 [2024-10-13 01:39:52.676544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.334 [2024-10-13 01:39:52.676558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.334 [2024-10-13 01:39:52.676573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.334 [2024-10-13 
01:39:52.676591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.334 [2024-10-13 01:39:52.676607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.334 [2024-10-13 01:39:52.676621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.334 [2024-10-13 01:39:52.676638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.334 [2024-10-13 01:39:52.676652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.334 [2024-10-13 01:39:52.676669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.334 [2024-10-13 01:39:52.676682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.334 [2024-10-13 01:39:52.676698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.334 [2024-10-13 01:39:52.676712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.334 [2024-10-13 01:39:52.676727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.334 [2024-10-13 01:39:52.676741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.334 [2024-10-13 01:39:52.676755] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ae340 is same with the state(6) to be set 00:29:07.334 [2024-10-13 01:39:52.678020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.334 [2024-10-13 01:39:52.678043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.334 [2024-10-13 01:39:52.678065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.334 [2024-10-13 01:39:52.678080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.334 [2024-10-13 01:39:52.678096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.334 [2024-10-13 01:39:52.678110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.334 [2024-10-13 01:39:52.678126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.334 [2024-10-13 01:39:52.678140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.334 [2024-10-13 01:39:52.678156] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.334 [2024-10-13 01:39:52.678170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.334 [2024-10-13 01:39:52.678187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.334 [2024-10-13 01:39:52.678200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.334 [2024-10-13 01:39:52.678216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.334 [2024-10-13 01:39:52.678234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.334 [2024-10-13 01:39:52.678251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.334 [2024-10-13 01:39:52.678266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.334 [2024-10-13 01:39:52.678282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.334 [2024-10-13 01:39:52.678296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.334 [2024-10-13 01:39:52.678311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.334 [2024-10-13 01:39:52.678326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.334 [2024-10-13 01:39:52.678342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.334 [2024-10-13 01:39:52.678356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.334 [2024-10-13 01:39:52.678371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.334 [2024-10-13 01:39:52.678385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.334 [2024-10-13 01:39:52.678401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.334 [2024-10-13 01:39:52.678415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.334 [2024-10-13 01:39:52.678431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.334 [2024-10-13 01:39:52.678445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.334 [2024-10-13 01:39:52.678460] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.334 [2024-10-13 01:39:52.678481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.334 [2024-10-13 01:39:52.678499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.334 [2024-10-13 01:39:52.678513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.334 [2024-10-13 01:39:52.678529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.334 [2024-10-13 01:39:52.678543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.334 [2024-10-13 01:39:52.678558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.334 [2024-10-13 01:39:52.678572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.334 [2024-10-13 01:39:52.678589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.334 [2024-10-13 01:39:52.678603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.334 [2024-10-13 01:39:52.678622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.334 [2024-10-13 01:39:52.678637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.334 [2024-10-13 01:39:52.678653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.334 [2024-10-13 01:39:52.678667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.334 [2024-10-13 01:39:52.678684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.334 [2024-10-13 01:39:52.678697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.334 [2024-10-13 01:39:52.678714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.334 [2024-10-13 01:39:52.678727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.334 [2024-10-13 01:39:52.678743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.334 [2024-10-13 01:39:52.678758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.334 [2024-10-13 01:39:52.678774] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.334 [2024-10-13 01:39:52.678788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.334 [2024-10-13 01:39:52.678803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.334 [2024-10-13 01:39:52.678817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.334 [2024-10-13 01:39:52.678834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.334 [2024-10-13 01:39:52.678848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.334 [2024-10-13 01:39:52.678863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.334 [2024-10-13 01:39:52.678877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.335 [2024-10-13 01:39:52.678893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.335 [2024-10-13 01:39:52.678907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.335 [2024-10-13 01:39:52.678922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.335 [2024-10-13 01:39:52.678936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.335 [2024-10-13 01:39:52.678952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.335 [2024-10-13 01:39:52.678966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.335 [2024-10-13 01:39:52.678982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.335 [2024-10-13 01:39:52.678999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.335 [2024-10-13 01:39:52.679016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.335 [2024-10-13 01:39:52.679029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.335 [2024-10-13 01:39:52.679045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.335 [2024-10-13 01:39:52.679059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.335 [2024-10-13 01:39:52.679075] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.335 [2024-10-13 01:39:52.679089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.335 [2024-10-13 01:39:52.679105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.335 [2024-10-13 01:39:52.679119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.335 [2024-10-13 01:39:52.679135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.335 [2024-10-13 01:39:52.679149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.335 [2024-10-13 01:39:52.679165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.335 [2024-10-13 01:39:52.679178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.335 [2024-10-13 01:39:52.679195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.335 [2024-10-13 01:39:52.679209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.335 [2024-10-13 01:39:52.679225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.335 [2024-10-13 01:39:52.679239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.335 [2024-10-13 01:39:52.679255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.335 [2024-10-13 01:39:52.679268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.335 [2024-10-13 01:39:52.679284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.335 [2024-10-13 01:39:52.679298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.335 [2024-10-13 01:39:52.679313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.335 [2024-10-13 01:39:52.679327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.335 [2024-10-13 01:39:52.679344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.335 [2024-10-13 01:39:52.679357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.335 [2024-10-13 01:39:52.679375] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.335 [2024-10-13 01:39:52.679393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.335 [2024-10-13 01:39:52.679409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.335 [2024-10-13 01:39:52.679423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.335 [2024-10-13 01:39:52.679439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.335 [2024-10-13 01:39:52.679453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.335 [2024-10-13 01:39:52.679469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.335 [2024-10-13 01:39:52.679491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.335 [2024-10-13 01:39:52.679507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.335 [2024-10-13 01:39:52.679521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.335 [2024-10-13 01:39:52.679537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.335 [2024-10-13 01:39:52.679551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.335 [2024-10-13 01:39:52.679567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.335 [2024-10-13 01:39:52.679580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.335 [2024-10-13 01:39:52.679596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.335 [2024-10-13 01:39:52.679609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.335 [2024-10-13 01:39:52.679625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.335 [2024-10-13 01:39:52.679639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.335 [2024-10-13 01:39:52.679655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.335 [2024-10-13 01:39:52.679669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.335 [2024-10-13 01:39:52.679685] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.335 [2024-10-13 01:39:52.679699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.335 [2024-10-13 01:39:52.679715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.335 [2024-10-13 01:39:52.679729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.335 [2024-10-13 01:39:52.679745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.335 [2024-10-13 01:39:52.679759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.335 [2024-10-13 01:39:52.679778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.335 [2024-10-13 01:39:52.679793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.335 [2024-10-13 01:39:52.679808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.335 [2024-10-13 01:39:52.679822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.335 [2024-10-13 01:39:52.679838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.335 [2024-10-13 01:39:52.679852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.335 [2024-10-13 01:39:52.679868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.335 [2024-10-13 01:39:52.679881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.335 [2024-10-13 01:39:52.679897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.335 [2024-10-13 01:39:52.679910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.335 [2024-10-13 01:39:52.679926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.335 [2024-10-13 01:39:52.679939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.335 [2024-10-13 01:39:52.679956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.335 [2024-10-13 01:39:52.679970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.335 [2024-10-13 01:39:52.679985] nvme_tcp.c: 
337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe9350 is same with the state(6) to be set 00:29:07.335 task offset: 27008 on job bdev=Nvme7n1 fails 00:29:07.335 1635.09 IOPS, 102.19 MiB/s [2024-10-12T23:39:52.913Z] [2024-10-13 01:39:52.682603] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:29:07.335 [2024-10-13 01:39:52.682635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:29:07.335 [2024-10-13 01:39:52.682657] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:29:07.335 00:29:07.335 Latency(us) 00:29:07.335 [2024-10-12T23:39:52.913Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:07.335 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.335 Job: Nvme1n1 ended in about 0.98 seconds with error 00:29:07.335 Verification LBA range: start 0x0 length 0x400 00:29:07.335 Nvme1n1 : 0.98 130.11 8.13 65.06 0.00 324615.08 22233.69 285834.05 00:29:07.335 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.335 Job: Nvme2n1 ended in about 0.99 seconds with error 00:29:07.335 Verification LBA range: start 0x0 length 0x400 00:29:07.336 Nvme2n1 : 0.99 198.56 12.41 64.84 0.00 235930.35 14660.65 270299.59 00:29:07.336 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.336 Job: Nvme3n1 ended in about 0.99 seconds with error 00:29:07.336 Verification LBA range: start 0x0 length 0x400 00:29:07.336 Nvme3n1 : 0.99 193.88 12.12 64.63 0.00 235837.06 19418.07 256318.58 00:29:07.336 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.336 Job: Nvme4n1 ended in about 0.99 seconds with error 00:29:07.336 Verification LBA range: start 0x0 length 0x400 00:29:07.336 Nvme4n1 : 0.99 193.25 12.08 64.42 0.00 232111.22 18350.08 259425.47 00:29:07.336 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.336 Job: Nvme5n1 ended in about 0.96 seconds with error 00:29:07.336 Verification LBA range: start 0x0 length 0x400 00:29:07.336 Nvme5n1 : 0.96 143.82 8.99 66.70 0.00 277697.69 11796.48 265639.25 00:29:07.336 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.336 Job: Nvme6n1 ended in about 1.00 seconds with error 00:29:07.336 Verification LBA range: start 0x0 length 0x400 00:29:07.336 Nvme6n1 : 1.00 127.44 7.96 63.72 0.00 301069.40 20388.98 287387.50 00:29:07.336 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.336 Job: Nvme7n1 ended in about 0.96 seconds with error 00:29:07.336 Verification LBA range: start 0x0 length 0x400 00:29:07.336 Nvme7n1 : 0.96 201.02 12.56 67.01 0.00 208838.68 3883.61 259425.47 00:29:07.336 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.336 Job: Nvme8n1 ended in about 1.03 seconds with error 00:29:07.336 Verification LBA range: start 0x0 length 0x400 00:29:07.336 Nvme8n1 : 1.03 186.50 11.66 62.17 0.00 223012.98 19029.71 264085.81 00:29:07.336 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.336 Job: Nvme9n1 ended in about 1.03 seconds with error 00:29:07.336 Verification LBA range: start 0x0 length 0x400 00:29:07.336 Nvme9n1 : 1.03 123.95 7.75 61.97 0.00 292715.58 22233.69 298261.62 00:29:07.336 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:07.336 Job: Nvme10n1 
ended in about 1.01 seconds with error 00:29:07.336 Verification LBA range: start 0x0 length 0x400 00:29:07.336 Nvme10n1 : 1.01 127.02 7.94 63.51 0.00 278643.04 19126.80 265639.25 00:29:07.336 [2024-10-12T23:39:52.914Z] =================================================================================================================== 00:29:07.336 [2024-10-12T23:39:52.914Z] Total : 1625.54 101.60 644.01 0.00 256263.48 3883.61 298261.62 00:29:07.336 [2024-10-13 01:39:52.712262] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:07.336 [2024-10-13 01:39:52.712359] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:29:07.336 [2024-10-13 01:39:52.712708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.336 [2024-10-13 01:39:52.712748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4ebf50 with addr=10.0.0.2, port=4420 00:29:07.336 [2024-10-13 01:39:52.712770] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4ebf50 is same with the state(6) to be set 00:29:07.336 [2024-10-13 01:39:52.712872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.336 [2024-10-13 01:39:52.712898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa0fe50 with addr=10.0.0.2, port=4420 00:29:07.336 [2024-10-13 01:39:52.712914] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0fe50 is same with the state(6) to be set 00:29:07.336 [2024-10-13 01:39:52.712975] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:07.336 [2024-10-13 01:39:52.713001] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:07.336 [2024-10-13 01:39:52.713022] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:07.336 [2024-10-13 01:39:52.713041] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:29:07.336 [2024-10-13 01:39:52.713068] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa0fe50 (9): Bad file descriptor 00:29:07.336 [2024-10-13 01:39:52.713100] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4ebf50 (9): Bad file descriptor 00:29:07.336 [2024-10-13 01:39:52.713398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.336 [2024-10-13 01:39:52.713429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d4990 with addr=10.0.0.2, port=4420 00:29:07.336 [2024-10-13 01:39:52.713446] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4990 is same with the state(6) to be set 00:29:07.336 [2024-10-13 01:39:52.713548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.336 [2024-10-13 01:39:52.713575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9dd570 with addr=10.0.0.2, port=4420 00:29:07.336 [2024-10-13 01:39:52.713591] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dd570 is same with the state(6) to be set 00:29:07.336 [2024-10-13 01:39:52.713677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.336 [2024-10-13 01:39:52.713702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa03ff0 with addr=10.0.0.2, port=4420 00:29:07.336 [2024-10-13 01:39:52.713719] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa03ff0 is same with the state(6) to be set 00:29:07.336 [2024-10-13 01:39:52.713808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.336 [2024-10-13 01:39:52.713833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ff2d0 with addr=10.0.0.2, port=4420 00:29:07.336 [2024-10-13 01:39:52.713849] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ff2d0 is same with the state(6) to be set 00:29:07.336 [2024-10-13 01:39:52.713869] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.336 [2024-10-13 01:39:52.713884] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.336 [2024-10-13 01:39:52.713902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.336 [2024-10-13 01:39:52.713926] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:29:07.336 [2024-10-13 01:39:52.713940] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:29:07.336 [2024-10-13 01:39:52.713952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:29:07.336 [2024-10-13 01:39:52.713972] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:29:07.336 [2024-10-13 01:39:52.713986] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:29:07.336 [2024-10-13 01:39:52.713999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:29:07.336 [2024-10-13 01:39:52.714016] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:29:07.336 [2024-10-13 01:39:52.714030] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:29:07.336 [2024-10-13 01:39:52.714043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:29:07.336 [2024-10-13 01:39:52.714077] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:07.336 [2024-10-13 01:39:52.714100] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:07.336 [2024-10-13 01:39:52.714121] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:07.336 [2024-10-13 01:39:52.714138] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:07.336 [2024-10-13 01:39:52.714158] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:07.336 [2024-10-13 01:39:52.714178] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:07.336 [2024-10-13 01:39:52.714814] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.336 [2024-10-13 01:39:52.714839] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.336 [2024-10-13 01:39:52.714852] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.336 [2024-10-13 01:39:52.714864] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.336 [2024-10-13 01:39:52.714881] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9d4990 (9): Bad file descriptor 00:29:07.336 [2024-10-13 01:39:52.714901] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dd570 (9): Bad file descriptor 00:29:07.336 [2024-10-13 01:39:52.714920] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa03ff0 (9): Bad file descriptor 00:29:07.336 [2024-10-13 01:39:52.714937] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ff2d0 (9): Bad file descriptor 00:29:07.336 [2024-10-13 01:39:52.714953] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:29:07.336 [2024-10-13 01:39:52.714966] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:29:07.336 [2024-10-13 01:39:52.714979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:29:07.336 [2024-10-13 01:39:52.714998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:29:07.336 [2024-10-13 01:39:52.715011] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:29:07.336 [2024-10-13 01:39:52.715024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:29:07.336 [2024-10-13 01:39:52.715358] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.336 [2024-10-13 01:39:52.715382] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.336 [2024-10-13 01:39:52.715396] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:29:07.336 [2024-10-13 01:39:52.715409] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:29:07.336 [2024-10-13 01:39:52.715423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:29:07.336 [2024-10-13 01:39:52.715440] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:29:07.337 [2024-10-13 01:39:52.715454] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:29:07.337 [2024-10-13 01:39:52.715467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:29:07.337 [2024-10-13 01:39:52.715492] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:29:07.337 [2024-10-13 01:39:52.715506] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:29:07.337 [2024-10-13 01:39:52.715518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:29:07.337 [2024-10-13 01:39:52.715535] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:29:07.337 [2024-10-13 01:39:52.715549] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:29:07.337 [2024-10-13 01:39:52.715562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:29:07.337 [2024-10-13 01:39:52.715616] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.337 [2024-10-13 01:39:52.715635] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.337 [2024-10-13 01:39:52.715647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.337 [2024-10-13 01:39:52.715664] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.596 01:39:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:29:08.971 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1689215 00:29:08.971 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:29:08.971 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1689215 00:29:08.971 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:29:08.971 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:08.971 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:29:08.971 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:08.971 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 1689215 00:29:08.971 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:29:08.971 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:08.971 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:29:08.971 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:29:08.971 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:29:08.971 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:08.971 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:29:08.971 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:08.971 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:08.971 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:08.971 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:08.971 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:08.971 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:29:08.971 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:08.971 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:29:08.971 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:08.972 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:08.972 rmmod nvme_tcp 00:29:08.972 
rmmod nvme_fabrics 00:29:08.972 rmmod nvme_keyring 00:29:08.972 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:08.972 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:29:08.972 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:29:08.972 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@515 -- # '[' -n 1689040 ']' 00:29:08.972 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # killprocess 1689040 00:29:08.972 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 1689040 ']' 00:29:08.972 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 1689040 00:29:08.972 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1689040) - No such process 00:29:08.972 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@977 -- # echo 'Process with pid 1689040 is not found' 00:29:08.972 Process with pid 1689040 is not found 00:29:08.972 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:08.972 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:08.972 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:08.972 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:29:08.972 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-save 00:29:08.972 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:08.972 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-restore 00:29:08.972 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:08.972 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:08.972 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:08.972 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:08.972 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:10.873 00:29:10.873 real 0m7.133s 00:29:10.873 user 0m16.926s 00:29:10.873 sys 0m1.428s 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:10.873 ************************************ 00:29:10.873 END TEST nvmf_shutdown_tc3 00:29:10.873 ************************************ 00:29:10.873 01:39:56 
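The remaining tc3 teardown restores networking to its pre-test state; reconstructed from the commands in the trace above, with interface and namespace names as logged (the netns removal itself is inferred from the _remove_spdk_ns helper name rather than shown explicitly):

    # drop every iptables rule the test tagged with an SPDK_NVMF comment
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # remove the target-side namespace (assumed to be what _remove_spdk_ns does) and clear the initiator address
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1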
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:10.873 ************************************ 00:29:10.873 START TEST nvmf_shutdown_tc4 00:29:10.873 ************************************ 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc4 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
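All of the array plumbing above amounts to a lookup table of NIC device IDs the test rig supports: Intel E810 (0x1592, 0x159b), X722 (0x37d2), and a range of Mellanox parts. On an E810 box like this one, the PCI scan that follows is equivalent to asking lspci for those IDs; a minimal sketch:

    lspci -D -d 8086:159b
    lspci -D -d 8086:1592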
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:10.873 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:10.873 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:10.873 01:39:56 
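Each matching port is then resolved to its kernel netdev through the same sysfs glob the helper expands; for the first E810 port found above that is simply:

    ls /sys/bus/pci/devices/0000:0a:00.0/net/
    # prints cvl_0_0 on this machine, matching the 'Found net devices under 0000:0a:00.0' line below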
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:10.873 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:10.873 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:10.874 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:10.874 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:10.874 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:10.874 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:10.874 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:10.874 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:10.874 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # is_hw=yes 00:29:10.874 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:10.874 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:10.874 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:10.874 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:10.874 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:10.874 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:10.874 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:10.874 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:10.874 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:10.874 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:10.874 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:10.874 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:10.874 01:39:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:10.874 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:10.874 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:10.874 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:10.874 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:10.874 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:10.874 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:10.874 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:10.874 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:11.132 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:11.132 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:11.132 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:11.132 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:11.132 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:11.132 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:11.132 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:29:11.132 00:29:11.132 --- 10.0.0.2 ping statistics --- 00:29:11.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:11.132 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:29:11.132 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:11.132 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
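Stripped to the underlying commands, the nvmf_tcp_init sequence above stitches the two E810 ports into a point-to-point test fabric: the target port moves into its own network namespace, each side gets a /24 address, and a firewall exception is opened for the NVMe/TCP port. Everything below is lifted from the trace (stale-address flushes omitted, iptables comment shortened):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2    # initiator -> target; the reverse ping runs inside the namespace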
00:29:11.132 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:29:11.132 00:29:11.132 --- 10.0.0.1 ping statistics --- 00:29:11.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:11.132 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:29:11.132 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:11.132 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # return 0 00:29:11.132 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:11.132 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:11.133 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:11.133 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:11.133 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:11.133 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:11.133 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:11.133 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:11.133 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:11.133 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:11.133 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:11.133 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # nvmfpid=1690034 00:29:11.133 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # waitforlisten 1690034 00:29:11.133 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:11.133 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 1690034 ']' 00:29:11.133 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:11.133 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:11.133 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:11.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
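With connectivity verified in both directions, nvmfappstart launches the target inside the namespace (-m 0x1E pins its reactors to cores 1-4, matching the reactor messages that follow) and waits for the RPC socket to answer. The command line is taken from the log; the polling loop is only an assumed stand-in for waitforlisten:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done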
00:29:11.133 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:11.133 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:11.133 [2024-10-13 01:39:56.585812] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:29:11.133 [2024-10-13 01:39:56.585885] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:11.133 [2024-10-13 01:39:56.650919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:11.133 [2024-10-13 01:39:56.700238] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:11.133 [2024-10-13 01:39:56.700299] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:11.133 [2024-10-13 01:39:56.700329] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:11.133 [2024-10-13 01:39:56.700340] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:11.133 [2024-10-13 01:39:56.700349] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:11.133 [2024-10-13 01:39:56.702038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:11.133 [2024-10-13 01:39:56.702104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:11.133 [2024-10-13 01:39:56.702166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:11.133 [2024-10-13 01:39:56.702169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:11.391 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:11.391 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0 00:29:11.391 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:11.391 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:11.391 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:11.391 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:11.391 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:11.391 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.391 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:11.391 [2024-10-13 01:39:56.850477] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:11.391 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.391 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:11.391 01:39:56 
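The target is then provisioned over RPC. The first call, creating the TCP transport with an 8192-byte I/O unit, is visible verbatim in the trace; the per-subsystem batches that the {1..10} loop below writes into rpcs.txt are not echoed, so the second half of this sketch is a reconstruction based on the Malloc1..Malloc10 bdevs, the cnodeN subsystem names, and the 10.0.0.2:4420 listener that appear later in the log (bdev size and serial number are assumptions):

    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192
    for i in {1..10}; do
        scripts/rpc.py bdev_malloc_create -b Malloc$i 64 512
        scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    done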
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:11.391 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:11.391 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:11.391 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:11.391 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:11.391 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:11.391 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:11.391 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:11.391 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:11.391 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:11.391 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:11.391 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:11.391 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:11.391 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:11.391 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:11.391 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:11.391 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:11.391 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:11.391 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:11.391 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:11.391 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:11.391 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:11.391 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:11.391 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:11.391 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:11.391 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:11.391 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:11.391 Malloc1 
00:29:11.391 [2024-10-13 01:39:56.953191] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:11.649 Malloc2 00:29:11.649 Malloc3 00:29:11.649 Malloc4 00:29:11.649 Malloc5 00:29:11.649 Malloc6 00:29:11.649 Malloc7 00:29:11.907 Malloc8 00:29:11.907 Malloc9 00:29:11.907 Malloc10 00:29:11.907 01:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:11.907 01:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:11.907 01:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:11.907 01:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:11.907 01:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1690181 00:29:11.907 01:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:29:11.907 01:39:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:29:11.907 [2024-10-13 01:39:57.452801] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:29:17.176 01:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:17.176 01:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1690034 00:29:17.176 01:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 1690034 ']' 00:29:17.176 01:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 1690034 00:29:17.176 01:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname 00:29:17.176 01:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:17.176 01:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1690034 00:29:17.176 01:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:17.176 01:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:17.176 01:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1690034' 00:29:17.176 killing process with pid 1690034 00:29:17.176 01:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 1690034 00:29:17.176 01:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 1690034 00:29:17.176 [2024-10-13 01:40:02.454635] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1735a10 is same with the state(6) to be set 00:29:17.176 [2024-10-13 01:40:02.454715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1735a10 is same with the state(6) to be set 00:29:17.176 [2024-10-13 01:40:02.454740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1735a10 is same with the state(6) to be set 00:29:17.176 [2024-10-13 01:40:02.454753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1735a10 is same with the state(6) to be set 00:29:17.176 [2024-10-13 01:40:02.454765] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1735a10 is same with the state(6) to be set 00:29:17.176 [2024-10-13 01:40:02.454777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1735a10 is same with the state(6) to be set 00:29:17.176 [2024-10-13 01:40:02.454789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1735a10 is same with the state(6) to be set 00:29:17.176 [2024-10-13 01:40:02.454801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1735a10 is same with the state(6) to be set 00:29:17.176 [2024-10-13 01:40:02.454834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1735a10 is same with the state(6) to be set 00:29:17.176 [2024-10-13 01:40:02.454849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1735a10 is same with the state(6) to be set 00:29:17.176 [2024-10-13 01:40:02.455885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1735ee0 is same with the state(6) to be set 00:29:17.176 [2024-10-13 01:40:02.455920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1735ee0 is same with the state(6) to be set 00:29:17.176 [2024-10-13 01:40:02.455936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1735ee0 is same with the state(6) to be set 00:29:17.176 [2024-10-13 01:40:02.455951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1735ee0 is same with the state(6) to be set 00:29:17.177 [2024-10-13 01:40:02.455963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1735ee0 is same with the state(6) to be set 00:29:17.177 [2024-10-13 01:40:02.455975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1735ee0 is same with the state(6) to be set 00:29:17.177 [2024-10-13 01:40:02.455988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1735ee0 is same with the state(6) to be set 00:29:17.177 [2024-10-13 01:40:02.457198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17363b0 is same with the state(6) to be set 00:29:17.177 [2024-10-13 01:40:02.457235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17363b0 is same with the state(6) to be set 00:29:17.177 [2024-10-13 01:40:02.457252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17363b0 is same with the state(6) to be set 00:29:17.177 [2024-10-13 01:40:02.457265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17363b0 is same with the state(6) to be set 00:29:17.177 [2024-10-13 01:40:02.457278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17363b0 is same with the 
state(6) to be set 00:29:17.177 [2024-10-13 01:40:02.457290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17363b0 is same with the state(6) to be set 00:29:17.177 [2024-10-13 01:40:02.457302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17363b0 is same with the state(6) to be set 00:29:17.177 [2024-10-13 01:40:02.457871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1735540 is same with the state(6) to be set 00:29:17.177 [2024-10-13 01:40:02.457906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1735540 is same with the state(6) to be set 00:29:17.177 [2024-10-13 01:40:02.457922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1735540 is same with the state(6) to be set 00:29:17.177 [2024-10-13 01:40:02.457935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1735540 is same with the state(6) to be set 00:29:17.177 [2024-10-13 01:40:02.457947] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1735540 is same with the state(6) to be set 00:29:17.177 [2024-10-13 01:40:02.457960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1735540 is same with the state(6) to be set 00:29:17.177 [2024-10-13 01:40:02.457972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1735540 is same with the state(6) to be set 00:29:17.177 [2024-10-13 01:40:02.457984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1735540 is same with the state(6) to be set 00:29:17.177 [2024-10-13 01:40:02.457996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1735540 is same with the state(6) to be set 00:29:17.177 [2024-10-13 01:40:02.458008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1735540 is same with the state(6) to be set 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, 
sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 [2024-10-13 01:40:02.464026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 
00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 [2024-10-13 01:40:02.465255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 
00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 starting I/O failed: -6 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.177 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 [2024-10-13 01:40:02.466493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 
starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 [2024-10-13 01:40:02.468056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19575f0 is same with the state(6) to be set 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 [2024-10-13 01:40:02.468095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19575f0 is same with the state(6) to be set 00:29:17.178 starting I/O failed: -6 00:29:17.178 [2024-10-13 01:40:02.468112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19575f0 is same with the state(6) to be set 00:29:17.178 Write completed with 
error (sct=0, sc=8) 00:29:17.178 [2024-10-13 01:40:02.468124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19575f0 is same with the state(6) to be set 00:29:17.178 starting I/O failed: -6 00:29:17.178 [2024-10-13 01:40:02.468136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19575f0 is same with the state(6) to be set 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 [2024-10-13 01:40:02.468326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:17.178 NVMe io qpair process completion error 00:29:17.178 [2024-10-13 01:40:02.468515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ac0 is same with the state(6) to be set 00:29:17.178 [2024-10-13 01:40:02.468559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ac0 is same with the state(6) to be set 00:29:17.178 [2024-10-13 01:40:02.468574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ac0 is same with the state(6) to be set 00:29:17.178 [2024-10-13 01:40:02.468586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ac0 is same with the state(6) to be set 00:29:17.178 [2024-10-13 01:40:02.468600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ac0 is same with the state(6) to be set 00:29:17.178 [2024-10-13 01:40:02.468612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ac0 is same with the state(6) to be set 00:29:17.178 [2024-10-13 01:40:02.468624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ac0 is same with the state(6) to be set 00:29:17.178 [2024-10-13 01:40:02.468636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ac0 is same with the state(6) to be set 00:29:17.178 [2024-10-13 01:40:02.468648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ac0 is same with the state(6) to be set 00:29:17.178 [2024-10-13 01:40:02.468659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ac0 is same with the state(6) to be set 00:29:17.178 [2024-10-13 01:40:02.468671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ac0 is same with the state(6) to be set 00:29:17.178 [2024-10-13 01:40:02.468689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ac0 is same with the state(6) to be set 00:29:17.178 [2024-10-13 01:40:02.468702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ac0 is same with the state(6) to be set 00:29:17.178 [2024-10-13 01:40:02.468714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ac0 is same with the state(6) to be set 00:29:17.178 [2024-10-13 01:40:02.468726] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ac0 is same with the state(6) to be set 00:29:17.178 [2024-10-13 01:40:02.468737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ac0 is same with the state(6) to be set 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 Write completed with 
error (sct=0, sc=8) 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 starting I/O failed: -6 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.178 Write completed with error (sct=0, sc=8) 00:29:17.179 [2024-10-13 01:40:02.469290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957f90 is same with starting I/O failed: -6 00:29:17.179 the state(6) to be set 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 [2024-10-13 01:40:02.469324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957f90 is same with the state(6) to be set 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 [2024-10-13 01:40:02.469340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957f90 is same with the state(6) to be set 00:29:17.179 [2024-10-13 01:40:02.469353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957f90 is same with the state(6) to be set 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 [2024-10-13 01:40:02.469366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957f90 is same with the state(6) to be set 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 [2024-10-13 01:40:02.469378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957f90 is same with the state(6) to be set 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 [2024-10-13 01:40:02.469600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.179 
[2024-10-13 01:40:02.469769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957120 is same with the state(6) to be set 00:29:17.179 [2024-10-13 01:40:02.469800] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957120 is same with Write completed with error (sct=0, sc=8) 00:29:17.179 the state(6) to be set 00:29:17.179 starting I/O failed: -6 00:29:17.179 [2024-10-13 01:40:02.469825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957120 is same with the state(6) to be set 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 [2024-10-13 01:40:02.469839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957120 is same with the state(6) to be set 00:29:17.179 starting I/O failed: -6 00:29:17.179 [2024-10-13 01:40:02.469851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957120 is same with the state(6) to be set 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 [2024-10-13 01:40:02.469864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957120 is same with the state(6) to be set 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 Write 
completed with error (sct=0, sc=8) 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 [2024-10-13 01:40:02.470615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O 
failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 [2024-10-13 01:40:02.471802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.179 starting I/O failed: -6 00:29:17.179 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O 
failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 [2024-10-13 01:40:02.473766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:17.180 NVMe io qpair process completion error 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 Write completed with error (sct=0, sc=8) 
00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 [2024-10-13 01:40:02.475010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 
00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 [2024-10-13 01:40:02.476020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.180 Write completed with error (sct=0, sc=8) 00:29:17.180 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 
00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 [2024-10-13 01:40:02.477174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 
00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 
00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 [2024-10-13 01:40:02.479624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:17.181 NVMe io qpair process completion error 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 Write completed with error (sct=0, sc=8) 00:29:17.181 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 Write completed with error (sct=0, sc=8) 
00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 [2024-10-13 01:40:02.480889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 [2024-10-13 01:40:02.482005] 
nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:17.182 starting I/O failed: -6 00:29:17.182 starting I/O failed: -6 00:29:17.182 starting I/O failed: -6 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 
00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 [2024-10-13 01:40:02.483433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.182 starting I/O failed: -6 00:29:17.182 Write completed with error (sct=0, sc=8) 00:29:17.183 starting I/O failed: -6 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 starting I/O failed: -6 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 starting I/O failed: -6 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 starting I/O failed: -6 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 starting I/O failed: -6 00:29:17.183 Write completed with error (sct=0, sc=8) 
00:29:17.183 starting I/O failed: -6 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 starting I/O failed: -6 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 starting I/O failed: -6 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 starting I/O failed: -6 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 starting I/O failed: -6 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 starting I/O failed: -6 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 starting I/O failed: -6 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 starting I/O failed: -6 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 starting I/O failed: -6 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 starting I/O failed: -6 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 starting I/O failed: -6 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 starting I/O failed: -6 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 starting I/O failed: -6 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 starting I/O failed: -6 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 starting I/O failed: -6 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 starting I/O failed: -6 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 starting I/O failed: -6 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 starting I/O failed: -6 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 starting I/O failed: -6 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 starting I/O failed: -6 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 starting I/O failed: -6 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 starting I/O failed: -6 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 starting I/O failed: -6 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 starting I/O failed: -6 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 starting I/O failed: -6 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 starting I/O failed: -6 00:29:17.183 [2024-10-13 01:40:02.485187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.183 NVMe io qpair process completion error 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 starting I/O failed: -6 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 starting I/O failed: -6 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 starting I/O failed: -6 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 starting I/O failed: -6 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 Write completed with error (sct=0, sc=8) 
00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 starting I/O failed: -6 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 starting I/O failed: -6 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 starting I/O failed: -6 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 starting I/O failed: -6 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 starting I/O failed: -6 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 starting I/O failed: -6 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 [2024-10-13 01:40:02.486575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 starting I/O failed: -6 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 starting I/O failed: -6 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 starting I/O failed: -6 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 starting I/O failed: -6 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 starting I/O failed: -6 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 starting I/O failed: -6 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 starting I/O failed: -6 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 starting I/O failed: -6 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 starting I/O failed: -6 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 starting I/O failed: -6 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 starting I/O failed: -6 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 starting I/O failed: -6 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 Write completed with error (sct=0, sc=8) 00:29:17.183 
starting I/O failed: -6
00:29:17.183 Write completed with error (sct=0, sc=8)
00:29:17.183 starting I/O failed: -6
[... the two messages above repeat, interleaved with the ERROR lines below, once for every outstanding write on the failing I/O qpairs ...]
00:29:17.183 [2024-10-13 01:40:02.487637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:17.184 [2024-10-13 01:40:02.488816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:17.184 [2024-10-13 01:40:02.490591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:17.184 NVMe io qpair process completion error
00:29:17.184 [2024-10-13 01:40:02.491922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.185 [2024-10-13 01:40:02.492885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:17.185 [2024-10-13 01:40:02.494063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:17.186 [2024-10-13 01:40:02.497518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:17.186 NVMe io qpair process completion error
00:29:17.186 [2024-10-13 01:40:02.498840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.186 [2024-10-13 01:40:02.499939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:17.186 [2024-10-13 01:40:02.501160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:17.187 [2024-10-13 01:40:02.504731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:17.187 NVMe io qpair process completion error
00:29:17.187 [2024-10-13 01:40:02.505988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.187 [2024-10-13 01:40:02.507075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:17.188 [2024-10-13 01:40:02.508233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:17.188 [2024-10-13 01:40:02.510212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:17.188 NVMe io qpair process completion error
00:29:17.189 [2024-10-13 01:40:02.511528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:17.189 [2024-10-13 01:40:02.512594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:17.189 [2024-10-13 01:40:02.513800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:17.189 Write completed with error (sct=0, sc=8)
00:29:17.189 starting I/O failed: -6
00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 [2024-10-13 01:40:02.515732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:17.190 NVMe io qpair process completion error 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O 
failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 [2024-10-13 01:40:02.517132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 Write completed with error (sct=0, sc=8) 
00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 [2024-10-13 01:40:02.518225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 Write completed with error (sct=0, sc=8) 00:29:17.190 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write 
completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 [2024-10-13 01:40:02.519427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write 
completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write 
completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 Write completed with error (sct=0, sc=8) 00:29:17.191 starting I/O failed: -6 00:29:17.191 [2024-10-13 01:40:02.521365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.191 NVMe io qpair process completion error 00:29:17.191 Initializing NVMe Controllers 00:29:17.191 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:29:17.191 Controller IO queue size 128, less than required. 00:29:17.191 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:17.191 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:29:17.191 Controller IO queue size 128, less than required. 00:29:17.191 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:17.191 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:17.191 Controller IO queue size 128, less than required. 00:29:17.191 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:17.191 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:29:17.191 Controller IO queue size 128, less than required. 00:29:17.191 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:17.191 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:29:17.191 Controller IO queue size 128, less than required. 00:29:17.191 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:17.191 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:29:17.191 Controller IO queue size 128, less than required. 00:29:17.191 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:17.191 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:29:17.191 Controller IO queue size 128, less than required. 00:29:17.191 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:17.191 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:29:17.191 Controller IO queue size 128, less than required. 00:29:17.191 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:17.191 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:29:17.191 Controller IO queue size 128, less than required. 00:29:17.191 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:29:17.191 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:29:17.191 Controller IO queue size 128, less than required. 00:29:17.191 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:17.191 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:29:17.191 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:29:17.191 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:17.191 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:29:17.191 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:29:17.191 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:29:17.192 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:29:17.192 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:29:17.192 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:29:17.192 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:29:17.192 Initialization complete. Launching workers. 00:29:17.192 ======================================================== 00:29:17.192 Latency(us) 00:29:17.192 Device Information : IOPS MiB/s Average min max 00:29:17.192 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1799.98 77.34 71135.65 1021.36 122700.54 00:29:17.192 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1787.30 76.80 71666.98 1046.76 124149.62 00:29:17.192 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1791.74 76.99 71520.42 822.94 120966.60 00:29:17.192 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1791.11 76.96 71589.13 768.62 131016.94 00:29:17.192 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1830.82 78.67 70087.03 1117.30 116744.56 00:29:17.192 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1787.94 76.83 71796.38 917.58 137783.04 00:29:17.192 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1815.61 78.01 70731.34 818.98 140205.96 00:29:17.192 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1872.87 80.47 67754.50 1116.63 115619.51 00:29:17.192 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1839.91 79.06 68994.46 865.43 116746.30 00:29:17.192 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1829.98 78.63 69396.33 949.50 117065.11 00:29:17.192 ======================================================== 00:29:17.192 Total : 18147.26 779.77 70448.15 768.62 140205.96 00:29:17.192 00:29:17.192 [2024-10-13 01:40:02.525712] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2031e00 is same with the state(6) to be set 00:29:17.192 [2024-10-13 01:40:02.525800] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2036d00 is same with the state(6) to be set 00:29:17.192 [2024-10-13 01:40:02.525858] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20143f0 is same with the state(6) to be set 00:29:17.192 [2024-10-13 01:40:02.525914] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202cf00 is same 
with the state(6) to be set 00:29:17.192 [2024-10-13 01:40:02.525970] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203bc00 is same with the state(6) to be set 00:29:17.192 [2024-10-13 01:40:02.526026] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2040b00 is same with the state(6) to be set 00:29:17.192 [2024-10-13 01:40:02.526080] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2028000 is same with the state(6) to be set 00:29:17.192 [2024-10-13 01:40:02.526136] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2019300 is same with the state(6) to be set 00:29:17.192 [2024-10-13 01:40:02.526190] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201e200 is same with the state(6) to be set 00:29:17.192 [2024-10-13 01:40:02.526246] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023100 is same with the state(6) to be set 00:29:17.192 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:29:17.451 01:40:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:29:18.386 01:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1690181 00:29:18.386 01:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0 00:29:18.386 01:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1690181 00:29:18.386 01:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait 00:29:18.386 01:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:18.386 01:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait 00:29:18.386 01:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:18.386 01:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 1690181 00:29:18.386 01:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1 00:29:18.386 01:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:18.386 01:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:18.386 01:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:18.386 01:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:29:18.386 01:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:18.386 01:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:18.386 01:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 
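The "Controller IO queue size 128, less than required" messages above are informational: the perf workload requests a deeper queue than the controller advertises, so excess requests are queued inside the NVMe driver. Below is a minimal sketch of how the same workload could be driven with a shallower queue depth and smaller IO size; it assumes the usual spdk_nvme_perf options (-q queue depth, -o IO size in bytes, -w pattern, -t seconds, -r transport ID) and the 10.0.0.2:4420 target seen in this log, and it is illustrative only; the exact invocation used by shutdown.sh is not reproduced here.

# hypothetical re-run with a shallower queue depth and 4 KiB writes,
# so fewer requests sit queued at the NVMe driver
PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
"$PERF" -q 32 -o 4096 -w randwrite -t 10 \
    -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'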
00:29:18.386 01:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:18.386 01:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:18.386 01:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:29:18.386 01:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:18.386 01:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:29:18.645 01:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:18.645 01:40:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:18.645 rmmod nvme_tcp 00:29:18.645 rmmod nvme_fabrics 00:29:18.645 rmmod nvme_keyring 00:29:18.645 01:40:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:18.645 01:40:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:29:18.645 01:40:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:29:18.645 01:40:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@515 -- # '[' -n 1690034 ']' 00:29:18.645 01:40:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # killprocess 1690034 00:29:18.645 01:40:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 1690034 ']' 00:29:18.645 01:40:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 1690034 00:29:18.645 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1690034) - No such process 00:29:18.645 01:40:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@977 -- # echo 'Process with pid 1690034 is not found' 00:29:18.645 Process with pid 1690034 is not found 00:29:18.645 01:40:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:18.645 01:40:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:18.645 01:40:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:18.645 01:40:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:29:18.645 01:40:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-save 00:29:18.645 01:40:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:18.645 01:40:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-restore 00:29:18.645 01:40:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:18.645 01:40:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:18.645 01:40:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:18.645 01:40:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:18.645 01:40:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:20.545 01:40:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:20.545 00:29:20.545 real 0m9.790s 00:29:20.545 user 0m22.991s 00:29:20.545 sys 0m5.857s 00:29:20.545 01:40:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:20.545 01:40:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:20.545 ************************************ 00:29:20.545 END TEST nvmf_shutdown_tc4 00:29:20.545 ************************************ 00:29:20.545 01:40:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:29:20.545 00:29:20.545 real 0m36.487s 00:29:20.545 user 1m36.101s 00:29:20.545 sys 0m12.124s 00:29:20.545 01:40:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:20.545 01:40:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:20.545 ************************************ 00:29:20.545 END TEST nvmf_shutdown 00:29:20.545 ************************************ 00:29:20.804 01:40:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:29:20.804 00:29:20.804 real 18m2.500s 00:29:20.804 user 50m30.366s 00:29:20.804 sys 3m53.317s 00:29:20.804 01:40:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:20.804 01:40:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:20.804 ************************************ 00:29:20.804 END TEST nvmf_target_extra 00:29:20.804 ************************************ 00:29:20.804 01:40:06 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:20.804 01:40:06 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:20.804 01:40:06 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:20.804 01:40:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:20.804 ************************************ 00:29:20.804 START TEST nvmf_host 00:29:20.804 ************************************ 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:20.804 * Looking for test storage... 
00:29:20.804 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:20.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.804 --rc genhtml_branch_coverage=1 00:29:20.804 --rc genhtml_function_coverage=1 00:29:20.804 --rc genhtml_legend=1 00:29:20.804 --rc geninfo_all_blocks=1 00:29:20.804 --rc geninfo_unexecuted_blocks=1 00:29:20.804 00:29:20.804 ' 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:20.804 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.804 --rc genhtml_branch_coverage=1 00:29:20.804 --rc genhtml_function_coverage=1 00:29:20.804 --rc genhtml_legend=1 00:29:20.804 --rc geninfo_all_blocks=1 00:29:20.804 --rc geninfo_unexecuted_blocks=1 00:29:20.804 00:29:20.804 ' 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:20.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.804 --rc genhtml_branch_coverage=1 00:29:20.804 --rc genhtml_function_coverage=1 00:29:20.804 --rc genhtml_legend=1 00:29:20.804 --rc geninfo_all_blocks=1 00:29:20.804 --rc geninfo_unexecuted_blocks=1 00:29:20.804 00:29:20.804 ' 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:20.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.804 --rc genhtml_branch_coverage=1 00:29:20.804 --rc genhtml_function_coverage=1 00:29:20.804 --rc genhtml_legend=1 00:29:20.804 --rc geninfo_all_blocks=1 00:29:20.804 --rc geninfo_unexecuted_blocks=1 00:29:20.804 00:29:20.804 ' 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.804 01:40:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:29:20.805 01:40:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:20.805 01:40:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:20.805 01:40:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:20.805 01:40:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:20.805 01:40:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:20.805 01:40:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:20.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:20.805 01:40:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:20.805 01:40:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:20.805 01:40:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:20.805 01:40:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:20.805 01:40:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:29:20.805 01:40:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:29:20.805 01:40:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:20.805 01:40:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:20.805 01:40:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:20.805 01:40:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.805 ************************************ 00:29:20.805 START TEST nvmf_multicontroller 00:29:20.805 ************************************ 00:29:20.805 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:21.064 * Looking for test storage... 00:29:21.064 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:21.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.064 --rc genhtml_branch_coverage=1 00:29:21.064 --rc genhtml_function_coverage=1 00:29:21.064 --rc genhtml_legend=1 00:29:21.064 --rc geninfo_all_blocks=1 00:29:21.064 --rc geninfo_unexecuted_blocks=1 00:29:21.064 00:29:21.064 ' 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:21.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.064 --rc genhtml_branch_coverage=1 00:29:21.064 --rc genhtml_function_coverage=1 00:29:21.064 --rc genhtml_legend=1 00:29:21.064 --rc geninfo_all_blocks=1 00:29:21.064 --rc geninfo_unexecuted_blocks=1 00:29:21.064 00:29:21.064 ' 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:21.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.064 --rc genhtml_branch_coverage=1 00:29:21.064 --rc genhtml_function_coverage=1 00:29:21.064 --rc genhtml_legend=1 00:29:21.064 --rc geninfo_all_blocks=1 00:29:21.064 --rc geninfo_unexecuted_blocks=1 00:29:21.064 00:29:21.064 ' 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:21.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.064 --rc genhtml_branch_coverage=1 00:29:21.064 --rc genhtml_function_coverage=1 00:29:21.064 --rc genhtml_legend=1 00:29:21.064 --rc geninfo_all_blocks=1 00:29:21.064 --rc geninfo_unexecuted_blocks=1 00:29:21.064 00:29:21.064 ' 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:29:21.064 01:40:06 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:21.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:21.064 01:40:06 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:29:21.064 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:29:21.065 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:21.065 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:29:21.065 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:29:21.065 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:21.065 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:21.065 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:21.065 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:21.065 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:21.065 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:21.065 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:21.065 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:21.065 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:21.065 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:21.065 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:29:21.065 01:40:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:29:22.964 
01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:22.964 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:22.964 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:22.964 01:40:08 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:22.964 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:22.964 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:22.965 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:22.965 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:22.965 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:22.965 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:22.965 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:22.965 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:22.965 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:22.965 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # is_hw=yes 00:29:22.965 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:22.965 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:22.965 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # nvmf_tcp_init 
00:29:22.965 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:22.965 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:22.965 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:22.965 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:22.965 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:22.965 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:22.965 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:22.965 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:22.965 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:22.965 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:22.965 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:22.965 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:22.965 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:22.965 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:23.223 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:23.223 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:23.223 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:23.224 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:23.224 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:23.224 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:23.224 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:23.224 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:23.224 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:23.224 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:23.224 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:29:23.224 00:29:23.224 --- 10.0.0.2 ping statistics --- 00:29:23.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.224 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:29:23.224 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:23.224 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:23.224 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:29:23.224 00:29:23.224 --- 10.0.0.1 ping statistics --- 00:29:23.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.224 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:29:23.224 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:23.224 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # return 0 00:29:23.224 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:23.224 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:23.224 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:23.224 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:23.224 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:23.224 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:23.224 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:23.224 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:29:23.224 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:23.224 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:23.224 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:23.224 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # nvmfpid=1692973 00:29:23.224 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:23.224 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # waitforlisten 1692973 00:29:23.224 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1692973 ']' 00:29:23.224 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:23.224 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:23.224 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:23.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:23.224 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:23.224 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:23.224 [2024-10-13 01:40:08.730741] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
00:29:23.224 [2024-10-13 01:40:08.730826] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:23.224 [2024-10-13 01:40:08.794515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:23.483 [2024-10-13 01:40:08.840640] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:23.483 [2024-10-13 01:40:08.840692] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:23.483 [2024-10-13 01:40:08.840721] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:23.483 [2024-10-13 01:40:08.840731] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:23.483 [2024-10-13 01:40:08.840741] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:23.483 [2024-10-13 01:40:08.842156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:23.483 [2024-10-13 01:40:08.842233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:23.483 [2024-10-13 01:40:08.842236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:23.483 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:23.483 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:29:23.483 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:23.483 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:23.483 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:23.483 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:23.483 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:23.483 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.483 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:23.483 [2024-10-13 01:40:08.991360] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:23.483 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.483 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:23.483 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.483 01:40:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:23.483 Malloc0 00:29:23.483 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.483 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:23.483 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.483 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:29:23.483 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.483 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:23.483 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.483 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:23.483 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.483 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:23.483 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.483 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:23.483 [2024-10-13 01:40:09.056375] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:23.483 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.483 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:23.483 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.483 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:23.742 [2024-10-13 01:40:09.064324] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:23.742 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.742 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:23.742 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.742 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:23.742 Malloc1 00:29:23.742 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.742 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:29:23.742 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.742 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:23.742 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.742 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:29:23.742 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.742 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:23.742 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.742 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:23.742 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.742 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:23.742 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.742 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:29:23.742 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.742 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:23.742 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.742 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1693047 00:29:23.742 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:29:23.742 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:23.742 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1693047 /var/tmp/bdevperf.sock 00:29:23.742 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1693047 ']' 00:29:23.742 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:23.742 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:23.742 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:23.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
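At this point in the trace the target side is fully configured: two subsystems (nqn.2016-06.io.spdk:cnode1 and cnode2), each backed by a 64 MiB / 512-byte-block Malloc namespace and listening on 10.0.0.2 ports 4420 and 4421, and a bdevperf instance has been started with its own RPC socket at /var/tmp/bdevperf.sock. A minimal sketch of the equivalent manual target-side setup, assuming an SPDK checkout with scripts/rpc.py talking to the target's default RPC socket (all values are copied from the traced rpc_cmd calls above; cnode2 repeats the same steps with Malloc1 and serial SPDK00000000000002):

  RPC=./scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192                  # TCP transport; -o/-u options copied verbatim from the trace
  $RPC bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB RAM bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The second listener on port 4421 is what lets the host side attach a second path to the same subsystem later in the test.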
00:29:23.742 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:23.742 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:24.001 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:24.001 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:29:24.001 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:24.001 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.001 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:24.001 NVMe0n1 00:29:24.001 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.001 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:24.001 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:29:24.001 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.001 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:24.001 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.001 1 00:29:24.001 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:24.001 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:29:24.001 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:24.001 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:24.001 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:24.001 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:24.001 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:24.001 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:24.001 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.001 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:24.001 request: 00:29:24.001 { 00:29:24.001 "name": "NVMe0", 00:29:24.001 "trtype": "tcp", 00:29:24.001 "traddr": "10.0.0.2", 00:29:24.001 "adrfam": "ipv4", 00:29:24.001 "trsvcid": "4420", 00:29:24.001 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:29:24.001 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:29:24.001 "hostaddr": "10.0.0.1", 00:29:24.001 "prchk_reftag": false, 00:29:24.001 "prchk_guard": false, 00:29:24.001 "hdgst": false, 00:29:24.001 "ddgst": false, 00:29:24.001 "allow_unrecognized_csi": false, 00:29:24.001 "method": "bdev_nvme_attach_controller", 00:29:24.001 "req_id": 1 00:29:24.001 } 00:29:24.001 Got JSON-RPC error response 00:29:24.001 response: 00:29:24.001 { 00:29:24.001 "code": -114, 00:29:24.001 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:24.001 } 00:29:24.001 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:24.001 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:29:24.001 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:24.001 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:24.001 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:24.001 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:24.001 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:29:24.001 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:24.001 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:24.001 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:24.001 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:24.001 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:24.001 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:24.001 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.001 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:24.001 request: 00:29:24.001 { 00:29:24.001 "name": "NVMe0", 00:29:24.001 "trtype": "tcp", 00:29:24.001 "traddr": "10.0.0.2", 00:29:24.001 "adrfam": "ipv4", 00:29:24.001 "trsvcid": "4420", 00:29:24.001 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:24.001 "hostaddr": "10.0.0.1", 00:29:24.001 "prchk_reftag": false, 00:29:24.001 "prchk_guard": false, 00:29:24.001 "hdgst": false, 00:29:24.001 "ddgst": false, 00:29:24.001 "allow_unrecognized_csi": false, 00:29:24.001 "method": "bdev_nvme_attach_controller", 00:29:24.001 "req_id": 1 00:29:24.001 } 00:29:24.001 Got JSON-RPC error response 00:29:24.001 response: 00:29:24.001 { 00:29:24.001 "code": -114, 00:29:24.001 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:24.001 } 00:29:24.001 01:40:09 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:24.001 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:29:24.001 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:24.001 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:24.001 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:24.001 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:24.001 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:29:24.001 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:24.001 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:24.001 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:24.001 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:24.001 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:24.001 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:24.001 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.002 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:24.002 request: 00:29:24.002 { 00:29:24.002 "name": "NVMe0", 00:29:24.002 "trtype": "tcp", 00:29:24.002 "traddr": "10.0.0.2", 00:29:24.002 "adrfam": "ipv4", 00:29:24.002 "trsvcid": "4420", 00:29:24.002 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:24.002 "hostaddr": "10.0.0.1", 00:29:24.002 "prchk_reftag": false, 00:29:24.002 "prchk_guard": false, 00:29:24.002 "hdgst": false, 00:29:24.002 "ddgst": false, 00:29:24.002 "multipath": "disable", 00:29:24.002 "allow_unrecognized_csi": false, 00:29:24.002 "method": "bdev_nvme_attach_controller", 00:29:24.002 "req_id": 1 00:29:24.002 } 00:29:24.002 Got JSON-RPC error response 00:29:24.002 response: 00:29:24.002 { 00:29:24.002 "code": -114, 00:29:24.002 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:29:24.002 } 00:29:24.002 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:24.002 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:29:24.002 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:24.002 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:24.002 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:24.002 01:40:09 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:24.002 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:29:24.002 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:24.002 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:24.002 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:24.002 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:24.002 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:24.002 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:24.002 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.002 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:24.002 request: 00:29:24.002 { 00:29:24.002 "name": "NVMe0", 00:29:24.002 "trtype": "tcp", 00:29:24.002 "traddr": "10.0.0.2", 00:29:24.002 "adrfam": "ipv4", 00:29:24.002 "trsvcid": "4420", 00:29:24.002 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:24.002 "hostaddr": "10.0.0.1", 00:29:24.002 "prchk_reftag": false, 00:29:24.002 "prchk_guard": false, 00:29:24.002 "hdgst": false, 00:29:24.002 "ddgst": false, 00:29:24.002 "multipath": "failover", 00:29:24.002 "allow_unrecognized_csi": false, 00:29:24.002 "method": "bdev_nvme_attach_controller", 00:29:24.002 "req_id": 1 00:29:24.002 } 00:29:24.002 Got JSON-RPC error response 00:29:24.002 response: 00:29:24.002 { 00:29:24.002 "code": -114, 00:29:24.002 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:24.002 } 00:29:24.002 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:24.002 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:29:24.002 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:24.002 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:24.002 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:24.002 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:24.002 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.002 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:24.260 NVMe0n1 00:29:24.260 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
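The negative attach attempts above all fail with JSON-RPC error -114: once a controller named NVMe0 exists on bdevperf, re-attaching over the same traddr/trsvcid (10.0.0.2:4420) is rejected whether a different hostnqn is supplied, a different subsystem NQN is requested, or multipath is set to "disable" or "failover". The attach that follows, aimed at the second listener port 4421, is accepted as a new network path for the existing controller. A minimal sketch of that host-side sequence against bdevperf's RPC socket (socket path, addresses and NQNs are taken from the trace; this is not a full reproduction of the test script):

  RPC="./scripts/rpc.py -s /var/tmp/bdevperf.sock"
  # first path for controller NVMe0; exposes bdev NVMe0n1
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
  # same traddr/trsvcid with a different hostnqn -> JSON-RPC error -114 (controller already exists)
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 || true
  # second listener port is a new path and is accepted
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1
  $RPC bdev_nvme_get_controllers                                # inspect the attached controller(s)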
00:29:24.260 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:24.260 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.260 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:24.260 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.260 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:24.260 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.260 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:24.260 00:29:24.260 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.260 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:24.260 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:29:24.260 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.260 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:24.518 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.518 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:29:24.518 01:40:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:25.453 { 00:29:25.453 "results": [ 00:29:25.453 { 00:29:25.453 "job": "NVMe0n1", 00:29:25.453 "core_mask": "0x1", 00:29:25.453 "workload": "write", 00:29:25.453 "status": "finished", 00:29:25.453 "queue_depth": 128, 00:29:25.453 "io_size": 4096, 00:29:25.453 "runtime": 1.009605, 00:29:25.453 "iops": 18434.932473591158, 00:29:25.453 "mibps": 72.01145497496546, 00:29:25.453 "io_failed": 0, 00:29:25.453 "io_timeout": 0, 00:29:25.453 "avg_latency_us": 6931.8433367560565, 00:29:25.453 "min_latency_us": 4271.976296296296, 00:29:25.453 "max_latency_us": 14272.284444444444 00:29:25.453 } 00:29:25.453 ], 00:29:25.453 "core_count": 1 00:29:25.453 } 00:29:25.453 01:40:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:29:25.453 01:40:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.453 01:40:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:25.453 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.453 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:29:25.453 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1693047 00:29:25.453 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@950 -- # '[' -z 1693047 ']' 00:29:25.453 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1693047 00:29:25.453 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:29:25.453 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:25.453 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1693047 00:29:25.711 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:25.711 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:25.711 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1693047' 00:29:25.711 killing process with pid 1693047 00:29:25.711 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1693047 00:29:25.711 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1693047 00:29:25.711 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:25.711 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.711 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:25.711 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.711 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:25.711 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.711 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:25.711 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.711 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:29:25.711 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:25.711 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:29:25.711 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:29:25.711 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:29:25.711 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:29:25.711 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:25.711 [2024-10-13 01:40:09.168141] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
00:29:25.711 [2024-10-13 01:40:09.168251] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1693047 ] 00:29:25.711 [2024-10-13 01:40:09.231424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:25.711 [2024-10-13 01:40:09.278035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:25.711 [2024-10-13 01:40:09.834736] bdev.c:4701:bdev_name_add: *ERROR*: Bdev name 428a9d98-2092-47a5-949f-768a0c90d29f already exists 00:29:25.711 [2024-10-13 01:40:09.834786] bdev.c:7846:bdev_register: *ERROR*: Unable to add uuid:428a9d98-2092-47a5-949f-768a0c90d29f alias for bdev NVMe1n1 00:29:25.711 [2024-10-13 01:40:09.834814] bdev_nvme.c:4483:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:29:25.711 Running I/O for 1 seconds... 00:29:25.711 18357.00 IOPS, 71.71 MiB/s 00:29:25.711 Latency(us) 00:29:25.711 [2024-10-12T23:40:11.289Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:25.711 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:29:25.711 NVMe0n1 : 1.01 18434.93 72.01 0.00 0.00 6931.84 4271.98 14272.28 00:29:25.711 [2024-10-12T23:40:11.289Z] =================================================================================================================== 00:29:25.711 [2024-10-12T23:40:11.289Z] Total : 18434.93 72.01 0.00 0.00 6931.84 4271.98 14272.28 00:29:25.711 Received shutdown signal, test time was about 1.000000 seconds 00:29:25.711 00:29:25.711 Latency(us) 00:29:25.711 [2024-10-12T23:40:11.289Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:25.711 [2024-10-12T23:40:11.289Z] =================================================================================================================== 00:29:25.711 [2024-10-12T23:40:11.289Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:25.711 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:25.711 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:25.711 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:29:25.711 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:29:25.711 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:25.711 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:29:25.711 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:25.711 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:29:25.711 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:25.711 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:25.711 rmmod nvme_tcp 00:29:25.711 rmmod nvme_fabrics 00:29:25.711 rmmod nvme_keyring 00:29:25.969 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:25.969 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:29:25.969 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:29:25.969 
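The remaining lines finish the tear-down: the nvmf target (nvmfpid 1692973) is killed, the SPDK_NVMF-tagged iptables rule is dropped, the cvl_0_0_ns_spdk namespace is removed and the initiator address is flushed; the nvme-tcp/nvme-fabrics/nvme-keyring modules were already unloaded just above. A rough sketch of this cleanup, assuming the interface and namespace names created earlier in this run (remove_spdk_ns and iptr are test helpers that wrap approximately these commands):

  kill "$nvmfpid" && wait "$nvmfpid"                      # nvmfpid=1692973 in this run
  iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop the ACCEPT rule added for port 4420
  ip netns delete cvl_0_0_ns_spdk                         # returns cvl_0_0 to the default namespace
  ip -4 addr flush cvl_0_1                                # remove 10.0.0.1/24 from the initiator side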
01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@515 -- # '[' -n 1692973 ']' 00:29:25.969 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # killprocess 1692973 00:29:25.969 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 1692973 ']' 00:29:25.969 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1692973 00:29:25.969 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:29:25.969 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:25.969 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1692973 00:29:25.969 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:25.969 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:25.969 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1692973' 00:29:25.969 killing process with pid 1692973 00:29:25.969 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1692973 00:29:25.969 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1692973 00:29:26.227 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:26.227 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:26.227 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:26.227 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:29:26.227 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-save 00:29:26.227 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:26.227 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-restore 00:29:26.227 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:26.227 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:26.227 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:26.227 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:26.227 01:40:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:28.126 01:40:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:28.126 00:29:28.126 real 0m7.262s 00:29:28.126 user 0m11.146s 00:29:28.126 sys 0m2.240s 00:29:28.126 01:40:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:28.126 01:40:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:28.126 ************************************ 00:29:28.126 END TEST nvmf_multicontroller 00:29:28.126 ************************************ 00:29:28.126 01:40:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:29:28.126 01:40:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:28.126 01:40:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:28.126 01:40:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.126 ************************************ 00:29:28.126 START TEST nvmf_aer 00:29:28.126 ************************************ 00:29:28.126 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:28.385 * Looking for test storage... 00:29:28.385 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:28.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.385 --rc genhtml_branch_coverage=1 00:29:28.385 --rc genhtml_function_coverage=1 00:29:28.385 --rc genhtml_legend=1 00:29:28.385 --rc geninfo_all_blocks=1 00:29:28.385 --rc geninfo_unexecuted_blocks=1 00:29:28.385 00:29:28.385 ' 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:28.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.385 --rc genhtml_branch_coverage=1 00:29:28.385 --rc genhtml_function_coverage=1 00:29:28.385 --rc genhtml_legend=1 00:29:28.385 --rc geninfo_all_blocks=1 00:29:28.385 --rc geninfo_unexecuted_blocks=1 00:29:28.385 00:29:28.385 ' 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:28.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.385 --rc genhtml_branch_coverage=1 00:29:28.385 --rc genhtml_function_coverage=1 00:29:28.385 --rc genhtml_legend=1 00:29:28.385 --rc geninfo_all_blocks=1 00:29:28.385 --rc geninfo_unexecuted_blocks=1 00:29:28.385 00:29:28.385 ' 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:28.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.385 --rc genhtml_branch_coverage=1 00:29:28.385 --rc genhtml_function_coverage=1 00:29:28.385 --rc genhtml_legend=1 00:29:28.385 --rc geninfo_all_blocks=1 00:29:28.385 --rc geninfo_unexecuted_blocks=1 00:29:28.385 00:29:28.385 ' 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:28.385 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:29:28.386 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:28.386 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:28.386 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:28.386 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.386 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.386 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.386 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:29:28.386 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.386 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:29:28.386 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:28.386 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:28.386 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:28.386 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:28.386 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:28.386 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:28.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:28.386 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:28.386 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:28.386 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:28.386 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:29:28.386 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:28.386 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:28.386 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:28.386 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:28.386 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:28.386 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:28.386 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:28.386 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:28.386 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:28.386 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:29:28.386 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:29:28.386 01:40:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:30.339 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:30.339 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:30.339 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:30.339 01:40:15 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:30.339 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # is_hw=yes 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:30.339 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:30.340 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:30.340 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:30.340 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:30.340 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:30.340 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:30.340 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:30.340 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:30.340 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:30.597 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:30.597 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:30.597 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:30.597 
01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:30.597 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:30.597 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:29:30.597 00:29:30.597 --- 10.0.0.2 ping statistics --- 00:29:30.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:30.597 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:29:30.597 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:30.597 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:30.597 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:29:30.597 00:29:30.597 --- 10.0.0.1 ping statistics --- 00:29:30.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:30.597 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:29:30.597 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:30.597 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # return 0 00:29:30.597 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:30.597 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:30.597 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:30.597 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:30.597 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:30.597 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:30.597 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:30.597 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:29:30.597 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:30.597 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:30.597 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:30.597 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # nvmfpid=1695246 00:29:30.597 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:30.597 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # waitforlisten 1695246 00:29:30.597 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 1695246 ']' 00:29:30.597 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:30.597 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:30.597 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:30.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:30.597 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:30.597 01:40:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:30.597 [2024-10-13 01:40:16.041208] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
00:29:30.597 [2024-10-13 01:40:16.041290] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:30.597 [2024-10-13 01:40:16.109751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:30.597 [2024-10-13 01:40:16.159294] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:30.597 [2024-10-13 01:40:16.159358] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:30.597 [2024-10-13 01:40:16.159375] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:30.597 [2024-10-13 01:40:16.159388] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:30.597 [2024-10-13 01:40:16.159400] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:30.597 [2024-10-13 01:40:16.161092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:30.597 [2024-10-13 01:40:16.161146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:30.597 [2024-10-13 01:40:16.161268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:30.597 [2024-10-13 01:40:16.161271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:30.855 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:30.855 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:29:30.855 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:30.855 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:30.855 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:30.855 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:30.855 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:30.855 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.855 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:30.855 [2024-10-13 01:40:16.304634] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:30.855 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.855 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:29:30.855 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.855 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:30.855 Malloc0 00:29:30.855 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.855 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:29:30.855 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.855 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:30.855 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:29:30.856 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:30.856 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.856 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:30.856 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.856 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:30.856 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.856 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:30.856 [2024-10-13 01:40:16.372675] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:30.856 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.856 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:29:30.856 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:30.856 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:30.856 [ 00:29:30.856 { 00:29:30.856 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:30.856 "subtype": "Discovery", 00:29:30.856 "listen_addresses": [], 00:29:30.856 "allow_any_host": true, 00:29:30.856 "hosts": [] 00:29:30.856 }, 00:29:30.856 { 00:29:30.856 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:30.856 "subtype": "NVMe", 00:29:30.856 "listen_addresses": [ 00:29:30.856 { 00:29:30.856 "trtype": "TCP", 00:29:30.856 "adrfam": "IPv4", 00:29:30.856 "traddr": "10.0.0.2", 00:29:30.856 "trsvcid": "4420" 00:29:30.856 } 00:29:30.856 ], 00:29:30.856 "allow_any_host": true, 00:29:30.856 "hosts": [], 00:29:30.856 "serial_number": "SPDK00000000000001", 00:29:30.856 "model_number": "SPDK bdev Controller", 00:29:30.856 "max_namespaces": 2, 00:29:30.856 "min_cntlid": 1, 00:29:30.856 "max_cntlid": 65519, 00:29:30.856 "namespaces": [ 00:29:30.856 { 00:29:30.856 "nsid": 1, 00:29:30.856 "bdev_name": "Malloc0", 00:29:30.856 "name": "Malloc0", 00:29:30.856 "nguid": "747F7948AE1C42C7A62389E1A9F5830B", 00:29:30.856 "uuid": "747f7948-ae1c-42c7-a623-89e1a9f5830b" 00:29:30.856 } 00:29:30.856 ] 00:29:30.856 } 00:29:30.856 ] 00:29:30.856 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:30.856 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:29:30.856 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:29:30.856 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1695361 00:29:30.856 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:29:30.856 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:29:30.856 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:29:30.856 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:29:30.856 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:29:30.856 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:29:30.856 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:29:31.114 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:31.114 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:29:31.114 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:29:31.114 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:29:31.114 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:31.114 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:29:31.114 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:29:31.114 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:29:31.372 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:31.372 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:31.372 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:29:31.372 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:29:31.372 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.372 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:31.372 Malloc1 00:29:31.372 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.372 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:29:31.372 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.372 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:31.372 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.372 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:29:31.372 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.372 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:31.372 Asynchronous Event Request test 00:29:31.372 Attaching to 10.0.0.2 00:29:31.372 Attached to 10.0.0.2 00:29:31.372 Registering asynchronous event callbacks... 00:29:31.372 Starting namespace attribute notice tests for all controllers... 00:29:31.372 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:29:31.372 aer_cb - Changed Namespace 00:29:31.372 Cleaning up... 
00:29:31.372 [ 00:29:31.372 { 00:29:31.372 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:31.372 "subtype": "Discovery", 00:29:31.372 "listen_addresses": [], 00:29:31.372 "allow_any_host": true, 00:29:31.372 "hosts": [] 00:29:31.372 }, 00:29:31.372 { 00:29:31.372 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:31.372 "subtype": "NVMe", 00:29:31.372 "listen_addresses": [ 00:29:31.372 { 00:29:31.372 "trtype": "TCP", 00:29:31.372 "adrfam": "IPv4", 00:29:31.372 "traddr": "10.0.0.2", 00:29:31.372 "trsvcid": "4420" 00:29:31.372 } 00:29:31.372 ], 00:29:31.372 "allow_any_host": true, 00:29:31.372 "hosts": [], 00:29:31.372 "serial_number": "SPDK00000000000001", 00:29:31.372 "model_number": "SPDK bdev Controller", 00:29:31.372 "max_namespaces": 2, 00:29:31.372 "min_cntlid": 1, 00:29:31.372 "max_cntlid": 65519, 00:29:31.372 "namespaces": [ 00:29:31.372 { 00:29:31.372 "nsid": 1, 00:29:31.372 "bdev_name": "Malloc0", 00:29:31.372 "name": "Malloc0", 00:29:31.372 "nguid": "747F7948AE1C42C7A62389E1A9F5830B", 00:29:31.372 "uuid": "747f7948-ae1c-42c7-a623-89e1a9f5830b" 00:29:31.372 }, 00:29:31.372 { 00:29:31.372 "nsid": 2, 00:29:31.372 "bdev_name": "Malloc1", 00:29:31.372 "name": "Malloc1", 00:29:31.372 "nguid": "CA8699F753B1413B82422BFA53C67640", 00:29:31.372 "uuid": "ca8699f7-53b1-413b-8242-2bfa53c67640" 00:29:31.372 } 00:29:31.372 ] 00:29:31.372 } 00:29:31.372 ] 00:29:31.372 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.372 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1695361 00:29:31.372 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:31.372 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.372 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:31.372 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.372 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:29:31.372 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.372 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:31.372 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.372 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:31.372 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:31.372 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:31.372 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.372 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:29:31.373 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:29:31.373 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:31.373 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:29:31.373 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:31.373 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:29:31.373 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:31.373 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:31.373 rmmod 
nvme_tcp 00:29:31.373 rmmod nvme_fabrics 00:29:31.373 rmmod nvme_keyring 00:29:31.373 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:31.373 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:29:31.373 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:29:31.373 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@515 -- # '[' -n 1695246 ']' 00:29:31.373 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # killprocess 1695246 00:29:31.373 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 1695246 ']' 00:29:31.373 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 1695246 00:29:31.373 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:29:31.373 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:31.373 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1695246 00:29:31.373 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:31.373 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:31.373 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1695246' 00:29:31.373 killing process with pid 1695246 00:29:31.373 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 1695246 00:29:31.373 01:40:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 1695246 00:29:31.631 01:40:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:31.631 01:40:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:31.631 01:40:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:31.631 01:40:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:29:31.631 01:40:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-save 00:29:31.631 01:40:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:31.631 01:40:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-restore 00:29:31.631 01:40:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:31.631 01:40:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:31.631 01:40:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:31.631 01:40:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:31.631 01:40:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:34.160 01:40:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:34.160 00:29:34.160 real 0m5.485s 00:29:34.160 user 0m4.583s 00:29:34.160 sys 0m1.939s 00:29:34.160 01:40:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:34.160 01:40:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:34.160 ************************************ 00:29:34.160 END TEST nvmf_aer 00:29:34.160 ************************************ 00:29:34.160 01:40:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:34.160 01:40:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:34.160 01:40:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:34.160 01:40:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.160 ************************************ 00:29:34.160 START TEST nvmf_async_init 00:29:34.160 ************************************ 00:29:34.160 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:34.160 * Looking for test storage... 00:29:34.160 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:34.160 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:34.160 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:29:34.160 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:34.160 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:34.160 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:34.160 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:34.160 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:34.160 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:29:34.160 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:29:34.160 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:29:34.160 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:29:34.160 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:29:34.160 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:29:34.160 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:29:34.160 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:34.160 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:29:34.160 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:29:34.160 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:34.160 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:34.160 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:29:34.160 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:29:34.160 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:34.160 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:29:34.160 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:29:34.160 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:29:34.160 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:29:34.160 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:34.160 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:29:34.160 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:29:34.160 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:34.160 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:34.160 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:29:34.160 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:34.160 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:34.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:34.160 --rc genhtml_branch_coverage=1 00:29:34.160 --rc genhtml_function_coverage=1 00:29:34.160 --rc genhtml_legend=1 00:29:34.160 --rc geninfo_all_blocks=1 00:29:34.160 --rc geninfo_unexecuted_blocks=1 00:29:34.160 00:29:34.160 ' 00:29:34.160 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:34.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:34.160 --rc genhtml_branch_coverage=1 00:29:34.160 --rc genhtml_function_coverage=1 00:29:34.160 --rc genhtml_legend=1 00:29:34.160 --rc geninfo_all_blocks=1 00:29:34.160 --rc geninfo_unexecuted_blocks=1 00:29:34.160 00:29:34.160 ' 00:29:34.160 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:34.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:34.160 --rc genhtml_branch_coverage=1 00:29:34.160 --rc genhtml_function_coverage=1 00:29:34.160 --rc genhtml_legend=1 00:29:34.161 --rc geninfo_all_blocks=1 00:29:34.161 --rc geninfo_unexecuted_blocks=1 00:29:34.161 00:29:34.161 ' 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:34.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:34.161 --rc genhtml_branch_coverage=1 00:29:34.161 --rc genhtml_function_coverage=1 00:29:34.161 --rc genhtml_legend=1 00:29:34.161 --rc geninfo_all_blocks=1 00:29:34.161 --rc geninfo_unexecuted_blocks=1 00:29:34.161 00:29:34.161 ' 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:34.161 01:40:19 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:34.161 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:29:34.161 01:40:19 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=4007691081c44a31ba78e2bcdf482c84 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:29:34.161 01:40:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:36.065 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:36.065 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:36.065 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:36.065 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # is_hw=yes 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:36.065 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:36.065 01:40:21 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:36.066 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:36.066 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:36.066 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:36.066 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:36.066 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:36.066 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:36.066 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:36.066 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:36.066 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:36.066 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:36.066 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:36.066 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:36.066 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:36.066 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:36.066 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:36.066 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:36.066 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:36.066 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:36.066 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:36.066 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:29:36.066 00:29:36.066 --- 10.0.0.2 ping statistics --- 00:29:36.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:36.066 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:29:36.066 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:36.066 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:36.066 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:29:36.066 00:29:36.066 --- 10.0.0.1 ping statistics --- 00:29:36.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:36.066 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:29:36.066 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:36.066 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # return 0 00:29:36.066 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:36.066 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:36.066 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:36.066 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:36.066 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:36.066 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:36.066 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:36.066 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:29:36.066 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:36.066 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:36.066 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:36.066 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # nvmfpid=1697311 00:29:36.066 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:36.066 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # waitforlisten 1697311 00:29:36.066 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 1697311 ']' 00:29:36.066 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:36.066 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:36.066 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:36.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:36.066 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:36.066 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:36.066 [2024-10-13 01:40:21.581008] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
00:29:36.066 [2024-10-13 01:40:21.581094] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:36.325 [2024-10-13 01:40:21.649120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:36.325 [2024-10-13 01:40:21.697909] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:36.325 [2024-10-13 01:40:21.697962] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:36.325 [2024-10-13 01:40:21.697976] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:36.325 [2024-10-13 01:40:21.698002] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:36.325 [2024-10-13 01:40:21.698012] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:36.325 [2024-10-13 01:40:21.698627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:36.325 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:36.325 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:29:36.325 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:36.325 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:36.325 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:36.325 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:36.325 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:36.325 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.325 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:36.325 [2024-10-13 01:40:21.851321] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:36.325 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.325 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:29:36.325 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.325 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:36.325 null0 00:29:36.325 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.325 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:29:36.325 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.325 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:36.325 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.325 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:29:36.325 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:29:36.325 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:36.325 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.325 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 4007691081c44a31ba78e2bcdf482c84 00:29:36.325 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.325 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:36.325 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.325 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:36.325 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.325 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:36.325 [2024-10-13 01:40:21.891641] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:36.325 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.325 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:29:36.325 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.325 01:40:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:36.584 nvme0n1 00:29:36.584 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.584 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:36.584 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.584 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:36.584 [ 00:29:36.584 { 00:29:36.584 "name": "nvme0n1", 00:29:36.584 "aliases": [ 00:29:36.584 "40076910-81c4-4a31-ba78-e2bcdf482c84" 00:29:36.584 ], 00:29:36.584 "product_name": "NVMe disk", 00:29:36.584 "block_size": 512, 00:29:36.584 "num_blocks": 2097152, 00:29:36.584 "uuid": "40076910-81c4-4a31-ba78-e2bcdf482c84", 00:29:36.584 "numa_id": 0, 00:29:36.584 "assigned_rate_limits": { 00:29:36.584 "rw_ios_per_sec": 0, 00:29:36.584 "rw_mbytes_per_sec": 0, 00:29:36.584 "r_mbytes_per_sec": 0, 00:29:36.584 "w_mbytes_per_sec": 0 00:29:36.584 }, 00:29:36.584 "claimed": false, 00:29:36.584 "zoned": false, 00:29:36.584 "supported_io_types": { 00:29:36.584 "read": true, 00:29:36.584 "write": true, 00:29:36.584 "unmap": false, 00:29:36.584 "flush": true, 00:29:36.584 "reset": true, 00:29:36.584 "nvme_admin": true, 00:29:36.584 "nvme_io": true, 00:29:36.584 "nvme_io_md": false, 00:29:36.584 "write_zeroes": true, 00:29:36.584 "zcopy": false, 00:29:36.584 "get_zone_info": false, 00:29:36.584 "zone_management": false, 00:29:36.584 "zone_append": false, 00:29:36.584 "compare": true, 00:29:36.584 "compare_and_write": true, 00:29:36.584 "abort": true, 00:29:36.584 "seek_hole": false, 00:29:36.584 "seek_data": false, 00:29:36.584 "copy": true, 00:29:36.584 "nvme_iov_md": false 00:29:36.584 }, 00:29:36.584 
"memory_domains": [ 00:29:36.584 { 00:29:36.584 "dma_device_id": "system", 00:29:36.584 "dma_device_type": 1 00:29:36.584 } 00:29:36.584 ], 00:29:36.584 "driver_specific": { 00:29:36.584 "nvme": [ 00:29:36.584 { 00:29:36.584 "trid": { 00:29:36.584 "trtype": "TCP", 00:29:36.584 "adrfam": "IPv4", 00:29:36.584 "traddr": "10.0.0.2", 00:29:36.584 "trsvcid": "4420", 00:29:36.584 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:36.584 }, 00:29:36.584 "ctrlr_data": { 00:29:36.584 "cntlid": 1, 00:29:36.584 "vendor_id": "0x8086", 00:29:36.584 "model_number": "SPDK bdev Controller", 00:29:36.584 "serial_number": "00000000000000000000", 00:29:36.584 "firmware_revision": "25.01", 00:29:36.584 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:36.584 "oacs": { 00:29:36.584 "security": 0, 00:29:36.584 "format": 0, 00:29:36.584 "firmware": 0, 00:29:36.584 "ns_manage": 0 00:29:36.584 }, 00:29:36.584 "multi_ctrlr": true, 00:29:36.584 "ana_reporting": false 00:29:36.584 }, 00:29:36.584 "vs": { 00:29:36.584 "nvme_version": "1.3" 00:29:36.584 }, 00:29:36.584 "ns_data": { 00:29:36.584 "id": 1, 00:29:36.584 "can_share": true 00:29:36.584 } 00:29:36.584 } 00:29:36.584 ], 00:29:36.584 "mp_policy": "active_passive" 00:29:36.584 } 00:29:36.584 } 00:29:36.584 ] 00:29:36.584 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.584 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:36.584 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.584 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:36.584 [2024-10-13 01:40:22.140191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:36.584 [2024-10-13 01:40:22.140266] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d6e7c0 (9): Bad file descriptor 00:29:36.843 [2024-10-13 01:40:22.272650] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:29:36.843 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.843 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:36.843 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.843 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:36.843 [ 00:29:36.843 { 00:29:36.843 "name": "nvme0n1", 00:29:36.843 "aliases": [ 00:29:36.843 "40076910-81c4-4a31-ba78-e2bcdf482c84" 00:29:36.843 ], 00:29:36.843 "product_name": "NVMe disk", 00:29:36.843 "block_size": 512, 00:29:36.843 "num_blocks": 2097152, 00:29:36.843 "uuid": "40076910-81c4-4a31-ba78-e2bcdf482c84", 00:29:36.843 "numa_id": 0, 00:29:36.843 "assigned_rate_limits": { 00:29:36.843 "rw_ios_per_sec": 0, 00:29:36.843 "rw_mbytes_per_sec": 0, 00:29:36.843 "r_mbytes_per_sec": 0, 00:29:36.843 "w_mbytes_per_sec": 0 00:29:36.843 }, 00:29:36.843 "claimed": false, 00:29:36.843 "zoned": false, 00:29:36.843 "supported_io_types": { 00:29:36.843 "read": true, 00:29:36.843 "write": true, 00:29:36.843 "unmap": false, 00:29:36.843 "flush": true, 00:29:36.843 "reset": true, 00:29:36.843 "nvme_admin": true, 00:29:36.843 "nvme_io": true, 00:29:36.843 "nvme_io_md": false, 00:29:36.843 "write_zeroes": true, 00:29:36.843 "zcopy": false, 00:29:36.843 "get_zone_info": false, 00:29:36.843 "zone_management": false, 00:29:36.843 "zone_append": false, 00:29:36.843 "compare": true, 00:29:36.843 "compare_and_write": true, 00:29:36.843 "abort": true, 00:29:36.843 "seek_hole": false, 00:29:36.843 "seek_data": false, 00:29:36.843 "copy": true, 00:29:36.843 "nvme_iov_md": false 00:29:36.843 }, 00:29:36.843 "memory_domains": [ 00:29:36.843 { 00:29:36.843 "dma_device_id": "system", 00:29:36.843 "dma_device_type": 1 00:29:36.843 } 00:29:36.843 ], 00:29:36.843 "driver_specific": { 00:29:36.843 "nvme": [ 00:29:36.843 { 00:29:36.843 "trid": { 00:29:36.843 "trtype": "TCP", 00:29:36.843 "adrfam": "IPv4", 00:29:36.843 "traddr": "10.0.0.2", 00:29:36.843 "trsvcid": "4420", 00:29:36.843 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:36.843 }, 00:29:36.843 "ctrlr_data": { 00:29:36.843 "cntlid": 2, 00:29:36.843 "vendor_id": "0x8086", 00:29:36.843 "model_number": "SPDK bdev Controller", 00:29:36.843 "serial_number": "00000000000000000000", 00:29:36.843 "firmware_revision": "25.01", 00:29:36.843 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:36.843 "oacs": { 00:29:36.843 "security": 0, 00:29:36.843 "format": 0, 00:29:36.843 "firmware": 0, 00:29:36.843 "ns_manage": 0 00:29:36.843 }, 00:29:36.843 "multi_ctrlr": true, 00:29:36.843 "ana_reporting": false 00:29:36.843 }, 00:29:36.843 "vs": { 00:29:36.843 "nvme_version": "1.3" 00:29:36.843 }, 00:29:36.843 "ns_data": { 00:29:36.843 "id": 1, 00:29:36.843 "can_share": true 00:29:36.843 } 00:29:36.843 } 00:29:36.843 ], 00:29:36.843 "mp_policy": "active_passive" 00:29:36.843 } 00:29:36.843 } 00:29:36.843 ] 00:29:36.843 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.843 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:36.843 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.843 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:36.843 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:29:36.843 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:29:36.843 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.xfUEsm5jbw 00:29:36.843 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:29:36.843 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.xfUEsm5jbw 00:29:36.843 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.xfUEsm5jbw 00:29:36.843 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.843 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:36.843 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.843 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:29:36.843 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.843 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:36.843 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.843 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:29:36.843 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.843 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:36.843 [2024-10-13 01:40:22.328866] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:36.843 [2024-10-13 01:40:22.328985] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:36.843 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.843 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:29:36.843 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.843 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:36.843 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.844 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:29:36.844 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.844 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:36.844 [2024-10-13 01:40:22.344894] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:36.844 nvme0n1 00:29:36.844 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.844 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:29:36.844 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.844 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:37.102 [ 00:29:37.102 { 00:29:37.102 "name": "nvme0n1", 00:29:37.102 "aliases": [ 00:29:37.102 "40076910-81c4-4a31-ba78-e2bcdf482c84" 00:29:37.102 ], 00:29:37.102 "product_name": "NVMe disk", 00:29:37.102 "block_size": 512, 00:29:37.102 "num_blocks": 2097152, 00:29:37.102 "uuid": "40076910-81c4-4a31-ba78-e2bcdf482c84", 00:29:37.102 "numa_id": 0, 00:29:37.102 "assigned_rate_limits": { 00:29:37.102 "rw_ios_per_sec": 0, 00:29:37.102 "rw_mbytes_per_sec": 0, 00:29:37.102 "r_mbytes_per_sec": 0, 00:29:37.102 "w_mbytes_per_sec": 0 00:29:37.102 }, 00:29:37.102 "claimed": false, 00:29:37.102 "zoned": false, 00:29:37.102 "supported_io_types": { 00:29:37.102 "read": true, 00:29:37.102 "write": true, 00:29:37.102 "unmap": false, 00:29:37.102 "flush": true, 00:29:37.102 "reset": true, 00:29:37.102 "nvme_admin": true, 00:29:37.102 "nvme_io": true, 00:29:37.102 "nvme_io_md": false, 00:29:37.102 "write_zeroes": true, 00:29:37.102 "zcopy": false, 00:29:37.102 "get_zone_info": false, 00:29:37.102 "zone_management": false, 00:29:37.102 "zone_append": false, 00:29:37.102 "compare": true, 00:29:37.102 "compare_and_write": true, 00:29:37.102 "abort": true, 00:29:37.102 "seek_hole": false, 00:29:37.102 "seek_data": false, 00:29:37.102 "copy": true, 00:29:37.102 "nvme_iov_md": false 00:29:37.102 }, 00:29:37.102 "memory_domains": [ 00:29:37.102 { 00:29:37.102 "dma_device_id": "system", 00:29:37.102 "dma_device_type": 1 00:29:37.102 } 00:29:37.102 ], 00:29:37.102 "driver_specific": { 00:29:37.102 "nvme": [ 00:29:37.102 { 00:29:37.102 "trid": { 00:29:37.102 "trtype": "TCP", 00:29:37.102 "adrfam": "IPv4", 00:29:37.102 "traddr": "10.0.0.2", 00:29:37.102 "trsvcid": "4421", 00:29:37.102 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:37.102 }, 00:29:37.102 "ctrlr_data": { 00:29:37.102 "cntlid": 3, 00:29:37.102 "vendor_id": "0x8086", 00:29:37.102 "model_number": "SPDK bdev Controller", 00:29:37.102 "serial_number": "00000000000000000000", 00:29:37.102 "firmware_revision": "25.01", 00:29:37.102 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:37.102 "oacs": { 00:29:37.102 "security": 0, 00:29:37.102 "format": 0, 00:29:37.102 "firmware": 0, 00:29:37.102 "ns_manage": 0 00:29:37.102 }, 00:29:37.102 "multi_ctrlr": true, 00:29:37.102 "ana_reporting": false 00:29:37.102 }, 00:29:37.102 "vs": { 00:29:37.102 "nvme_version": "1.3" 00:29:37.102 }, 00:29:37.102 "ns_data": { 00:29:37.102 "id": 1, 00:29:37.102 "can_share": true 00:29:37.102 } 00:29:37.102 } 00:29:37.102 ], 00:29:37.102 "mp_policy": "active_passive" 00:29:37.102 } 00:29:37.102 } 00:29:37.102 ] 00:29:37.102 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.102 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:37.102 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.102 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:37.102 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.102 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.xfUEsm5jbw 00:29:37.102 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
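Annotation: the secure-channel leg just above (listener on port 4421, cntlid 3) follows the same pattern with a file-based TLS PSK. A hedged sketch of the steps the log shows, again using rpc.py in place of rpc_cmd and the temporary key path from this run:

# Register the PSK from a 0600-mode file, require explicit host authorization,
# expose a --secure-channel listener, and attach from the host with --psk
echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > /tmp/tmp.xfUEsm5jbw
chmod 0600 /tmp/tmp.xfUEsm5jbw
rpc.py keyring_file_add_key key0 /tmp/tmp.xfUEsm5jbw
rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0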
00:29:37.102 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:29:37.102 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:37.102 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:29:37.102 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:37.102 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:29:37.102 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:37.102 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:37.102 rmmod nvme_tcp 00:29:37.102 rmmod nvme_fabrics 00:29:37.102 rmmod nvme_keyring 00:29:37.102 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:37.102 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:29:37.102 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:29:37.102 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@515 -- # '[' -n 1697311 ']' 00:29:37.102 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # killprocess 1697311 00:29:37.102 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 1697311 ']' 00:29:37.102 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 1697311 00:29:37.102 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:29:37.102 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:37.102 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1697311 00:29:37.102 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:37.102 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:37.102 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1697311' 00:29:37.103 killing process with pid 1697311 00:29:37.103 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 1697311 00:29:37.103 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 1697311 00:29:37.361 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:37.361 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:37.361 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:37.361 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:29:37.361 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-save 00:29:37.361 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:37.361 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-restore 00:29:37.361 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:37.361 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:37.361 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:29:37.361 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:37.361 01:40:22 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:39.263 01:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:39.263 00:29:39.263 real 0m5.555s 00:29:39.263 user 0m2.102s 00:29:39.263 sys 0m1.891s 00:29:39.263 01:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:39.263 01:40:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:39.263 ************************************ 00:29:39.263 END TEST nvmf_async_init 00:29:39.263 ************************************ 00:29:39.263 01:40:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:39.263 01:40:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:39.263 01:40:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:39.263 01:40:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.263 ************************************ 00:29:39.263 START TEST dma 00:29:39.263 ************************************ 00:29:39.263 01:40:24 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:39.522 * Looking for test storage... 00:29:39.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:39.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.522 --rc genhtml_branch_coverage=1 00:29:39.522 --rc genhtml_function_coverage=1 00:29:39.522 --rc genhtml_legend=1 00:29:39.522 --rc geninfo_all_blocks=1 00:29:39.522 --rc geninfo_unexecuted_blocks=1 00:29:39.522 00:29:39.522 ' 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:39.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.522 --rc genhtml_branch_coverage=1 00:29:39.522 --rc genhtml_function_coverage=1 00:29:39.522 --rc genhtml_legend=1 00:29:39.522 --rc geninfo_all_blocks=1 00:29:39.522 --rc geninfo_unexecuted_blocks=1 00:29:39.522 00:29:39.522 ' 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:39.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.522 --rc genhtml_branch_coverage=1 00:29:39.522 --rc genhtml_function_coverage=1 00:29:39.522 --rc genhtml_legend=1 00:29:39.522 --rc geninfo_all_blocks=1 00:29:39.522 --rc geninfo_unexecuted_blocks=1 00:29:39.522 00:29:39.522 ' 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:39.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.522 --rc genhtml_branch_coverage=1 00:29:39.522 --rc genhtml_function_coverage=1 00:29:39.522 --rc genhtml_legend=1 00:29:39.522 --rc geninfo_all_blocks=1 00:29:39.522 --rc geninfo_unexecuted_blocks=1 00:29:39.522 00:29:39.522 ' 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:39.522 
01:40:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:39.522 01:40:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:39.523 01:40:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:39.523 01:40:24 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.523 01:40:24 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.523 01:40:24 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.523 01:40:24 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:29:39.523 01:40:24 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.523 01:40:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:29:39.523 01:40:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:39.523 01:40:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:39.523 01:40:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:39.523 01:40:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:39.523 01:40:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:39.523 01:40:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:39.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:39.523 01:40:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:39.523 01:40:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:39.523 01:40:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:39.523 01:40:24 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:29:39.523 01:40:24 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:29:39.523 00:29:39.523 real 0m0.150s 00:29:39.523 user 0m0.100s 00:29:39.523 sys 0m0.058s 00:29:39.523 01:40:24 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:39.523 01:40:24 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:39.523 ************************************ 00:29:39.523 END TEST dma 00:29:39.523 ************************************ 00:29:39.523 01:40:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:39.523 01:40:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:39.523 01:40:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:39.523 01:40:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.523 ************************************ 00:29:39.523 START TEST nvmf_identify 00:29:39.523 
************************************ 00:29:39.523 01:40:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:39.523 * Looking for test storage... 00:29:39.523 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:39.523 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:39.523 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:29:39.523 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:39.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.782 --rc genhtml_branch_coverage=1 00:29:39.782 --rc genhtml_function_coverage=1 00:29:39.782 --rc genhtml_legend=1 00:29:39.782 --rc geninfo_all_blocks=1 00:29:39.782 --rc geninfo_unexecuted_blocks=1 00:29:39.782 00:29:39.782 ' 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:39.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.782 --rc genhtml_branch_coverage=1 00:29:39.782 --rc genhtml_function_coverage=1 00:29:39.782 --rc genhtml_legend=1 00:29:39.782 --rc geninfo_all_blocks=1 00:29:39.782 --rc geninfo_unexecuted_blocks=1 00:29:39.782 00:29:39.782 ' 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:39.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.782 --rc genhtml_branch_coverage=1 00:29:39.782 --rc genhtml_function_coverage=1 00:29:39.782 --rc genhtml_legend=1 00:29:39.782 --rc geninfo_all_blocks=1 00:29:39.782 --rc geninfo_unexecuted_blocks=1 00:29:39.782 00:29:39.782 ' 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:39.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.782 --rc genhtml_branch_coverage=1 00:29:39.782 --rc genhtml_function_coverage=1 00:29:39.782 --rc genhtml_legend=1 00:29:39.782 --rc geninfo_all_blocks=1 00:29:39.782 --rc geninfo_unexecuted_blocks=1 00:29:39.782 00:29:39.782 ' 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:39.782 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:39.783 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:29:39.783 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:39.783 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:39.783 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:39.783 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.783 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.783 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.783 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:29:39.783 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.783 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:29:39.783 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:39.783 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:39.783 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:39.783 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:39.783 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:39.783 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:39.783 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:39.783 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:39.783 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:39.783 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:39.783 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:39.783 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:39.783 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:29:39.783 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:39.783 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:39.783 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:39.783 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:39.783 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:39.783 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:39.783 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:39.783 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:39.783 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:39.783 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:39.783 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:29:39.783 01:40:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:41.684 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:41.684 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
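[editor's note] The trace around this point enumerates the supported Intel e810 / Mellanox PCI device IDs and then resolves each detected function to its kernel interface name by globbing sysfs. A minimal stand-alone sketch of that lookup, for readability only — the PCI addresses 0000:0a:00.0/1 are taken from this log, and nothing beyond what the trace itself shows is implied about the test scripts:

# hedged sketch: map each detected NIC function to its net device via sysfs
for pci in 0000:0a:00.0 0000:0a:00.1; do
    # a network-class PCI function exposes its interface name under .../net/
    for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
        echo "Found net devices under $pci: ${netdev##*/}"
    done
done

In this run the lookup yields cvl_0_0 and cvl_0_1, which the subsequent trace uses as the target and initiator interfaces.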
00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:41.684 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:41.684 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:41.684 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:41.943 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:41.943 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # is_hw=yes 00:29:41.943 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:41.943 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:41.943 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:41.943 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:41.943 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:41.943 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:41.943 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:41.943 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:41.943 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:41.943 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:41.943 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:41.943 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:41.943 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:41.943 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:29:41.943 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:41.943 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:41.943 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:41.943 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:41.943 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:41.943 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:41.943 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:41.943 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:41.943 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:41.943 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:41.943 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:41.943 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:41.943 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:41.943 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:29:41.943 00:29:41.943 --- 10.0.0.2 ping statistics --- 00:29:41.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:41.943 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:29:41.943 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:42.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:42.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:29:42.201 00:29:42.201 --- 10.0.0.1 ping statistics --- 00:29:42.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:42.201 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:29:42.201 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:42.201 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # return 0 00:29:42.202 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:42.202 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:42.202 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:42.202 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:42.202 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:42.202 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:42.202 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:42.202 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:29:42.202 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:42.202 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:42.202 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1699567 00:29:42.202 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:42.202 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:42.202 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1699567 00:29:42.202 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 1699567 ']' 00:29:42.202 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:42.202 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:42.202 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:42.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:42.202 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:42.202 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:42.202 [2024-10-13 01:40:27.609966] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
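[editor's note] The nvmf_tcp_init steps traced above build a two-interface loopback topology: one port is moved into a network namespace and acts as the NVMe/TCP target, while its peer port stays in the default namespace as the initiator, and connectivity is verified with ping in both directions before the target application is started. A hedged summary sketch using only the interface names, addresses, and commands shown in this log (illustrative, not the verbatim common.sh):

# hedged sketch: namespace topology and connectivity check from the trace above
ip netns add cvl_0_0_ns_spdk                                  # target namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP traffic
ping -c 1 10.0.0.2                                            # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> initiator

With both pings succeeding, the trace then loads nvme-tcp and launches nvmf_tgt inside the target namespace, which is the process whose startup log follows.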
00:29:42.202 [2024-10-13 01:40:27.610068] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:42.202 [2024-10-13 01:40:27.676196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:42.202 [2024-10-13 01:40:27.726339] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:42.202 [2024-10-13 01:40:27.726397] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:42.202 [2024-10-13 01:40:27.726431] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:42.202 [2024-10-13 01:40:27.726444] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:42.202 [2024-10-13 01:40:27.726453] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:42.202 [2024-10-13 01:40:27.728030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:42.202 [2024-10-13 01:40:27.728095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:42.202 [2024-10-13 01:40:27.728172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:42.202 [2024-10-13 01:40:27.728175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:42.460 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:42.460 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:29:42.460 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:42.460 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.460 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:42.460 [2024-10-13 01:40:27.854158] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:42.460 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.460 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:29:42.460 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:42.461 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:42.461 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:42.461 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.461 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:42.461 Malloc0 00:29:42.461 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.461 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:42.461 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.461 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:42.461 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.461 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:29:42.461 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.461 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:42.461 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.461 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:42.461 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.461 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:42.461 [2024-10-13 01:40:27.950667] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:42.461 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.461 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:42.461 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.461 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:42.461 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.461 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:29:42.461 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.461 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:42.461 [ 00:29:42.461 { 00:29:42.461 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:42.461 "subtype": "Discovery", 00:29:42.461 "listen_addresses": [ 00:29:42.461 { 00:29:42.461 "trtype": "TCP", 00:29:42.461 "adrfam": "IPv4", 00:29:42.461 "traddr": "10.0.0.2", 00:29:42.461 "trsvcid": "4420" 00:29:42.461 } 00:29:42.461 ], 00:29:42.461 "allow_any_host": true, 00:29:42.461 "hosts": [] 00:29:42.461 }, 00:29:42.461 { 00:29:42.461 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:42.461 "subtype": "NVMe", 00:29:42.461 "listen_addresses": [ 00:29:42.461 { 00:29:42.461 "trtype": "TCP", 00:29:42.461 "adrfam": "IPv4", 00:29:42.461 "traddr": "10.0.0.2", 00:29:42.461 "trsvcid": "4420" 00:29:42.461 } 00:29:42.461 ], 00:29:42.461 "allow_any_host": true, 00:29:42.461 "hosts": [], 00:29:42.461 "serial_number": "SPDK00000000000001", 00:29:42.461 "model_number": "SPDK bdev Controller", 00:29:42.461 "max_namespaces": 32, 00:29:42.461 "min_cntlid": 1, 00:29:42.461 "max_cntlid": 65519, 00:29:42.461 "namespaces": [ 00:29:42.461 { 00:29:42.461 "nsid": 1, 00:29:42.461 "bdev_name": "Malloc0", 00:29:42.461 "name": "Malloc0", 00:29:42.461 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:29:42.461 "eui64": "ABCDEF0123456789", 00:29:42.461 "uuid": "2acb38d3-f0f4-4fba-89e9-d53484d03896" 00:29:42.461 } 00:29:42.461 ] 00:29:42.461 } 00:29:42.461 ] 00:29:42.461 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.461 01:40:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:29:42.461 [2024-10-13 01:40:27.993637] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:29:42.461 [2024-10-13 01:40:27.993683] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1699589 ] 00:29:42.461 [2024-10-13 01:40:28.025612] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:29:42.461 [2024-10-13 01:40:28.025685] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:42.461 [2024-10-13 01:40:28.025696] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:42.461 [2024-10-13 01:40:28.025712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:42.461 [2024-10-13 01:40:28.025726] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:42.461 [2024-10-13 01:40:28.029925] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:29:42.461 [2024-10-13 01:40:28.029997] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xc14720 0 00:29:42.461 [2024-10-13 01:40:28.037495] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:42.461 [2024-10-13 01:40:28.037535] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:42.461 [2024-10-13 01:40:28.037546] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:42.461 [2024-10-13 01:40:28.037553] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:42.461 [2024-10-13 01:40:28.037597] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.461 [2024-10-13 01:40:28.037612] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.461 [2024-10-13 01:40:28.037620] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc14720) 00:29:42.461 [2024-10-13 01:40:28.037639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:42.461 [2024-10-13 01:40:28.037667] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6d300, cid 0, qid 0 00:29:42.721 [2024-10-13 01:40:28.044500] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.721 [2024-10-13 01:40:28.044522] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.721 [2024-10-13 01:40:28.044531] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.721 [2024-10-13 01:40:28.044539] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6d300) on tqpair=0xc14720 00:29:42.721 [2024-10-13 01:40:28.044577] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:42.721 [2024-10-13 01:40:28.044595] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:29:42.721 [2024-10-13 01:40:28.044607] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:29:42.721 [2024-10-13 01:40:28.044631] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.721 [2024-10-13 01:40:28.044640] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.721 [2024-10-13 01:40:28.044647] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc14720) 00:29:42.721 [2024-10-13 01:40:28.044659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.721 [2024-10-13 01:40:28.044685] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6d300, cid 0, qid 0 00:29:42.721 [2024-10-13 01:40:28.044869] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.721 [2024-10-13 01:40:28.044884] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.721 [2024-10-13 01:40:28.044891] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.721 [2024-10-13 01:40:28.044898] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6d300) on tqpair=0xc14720 00:29:42.721 [2024-10-13 01:40:28.044908] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:29:42.721 [2024-10-13 01:40:28.044921] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:29:42.721 [2024-10-13 01:40:28.044934] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.721 [2024-10-13 01:40:28.044941] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.721 [2024-10-13 01:40:28.044947] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc14720) 00:29:42.721 [2024-10-13 01:40:28.044959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.721 [2024-10-13 01:40:28.044981] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6d300, cid 0, qid 0 00:29:42.721 [2024-10-13 01:40:28.045121] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.721 [2024-10-13 01:40:28.045133] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.721 [2024-10-13 01:40:28.045140] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.721 [2024-10-13 01:40:28.045147] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6d300) on tqpair=0xc14720 00:29:42.721 [2024-10-13 01:40:28.045157] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:29:42.721 [2024-10-13 01:40:28.045170] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:29:42.721 [2024-10-13 01:40:28.045183] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.721 [2024-10-13 01:40:28.045190] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.721 [2024-10-13 01:40:28.045196] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc14720) 00:29:42.721 [2024-10-13 01:40:28.045207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.721 [2024-10-13 01:40:28.045227] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6d300, cid 0, qid 0 00:29:42.721 
[2024-10-13 01:40:28.045314] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.721 [2024-10-13 01:40:28.045326] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.721 [2024-10-13 01:40:28.045333] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.721 [2024-10-13 01:40:28.045340] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6d300) on tqpair=0xc14720 00:29:42.721 [2024-10-13 01:40:28.045350] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:42.721 [2024-10-13 01:40:28.045370] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.721 [2024-10-13 01:40:28.045380] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.721 [2024-10-13 01:40:28.045386] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc14720) 00:29:42.721 [2024-10-13 01:40:28.045397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.721 [2024-10-13 01:40:28.045418] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6d300, cid 0, qid 0 00:29:42.721 [2024-10-13 01:40:28.045512] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.721 [2024-10-13 01:40:28.045527] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.721 [2024-10-13 01:40:28.045534] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.721 [2024-10-13 01:40:28.045541] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6d300) on tqpair=0xc14720 00:29:42.721 [2024-10-13 01:40:28.045550] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:29:42.721 [2024-10-13 01:40:28.045559] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:29:42.721 [2024-10-13 01:40:28.045572] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:42.721 [2024-10-13 01:40:28.045682] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:29:42.721 [2024-10-13 01:40:28.045690] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:42.721 [2024-10-13 01:40:28.045706] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.721 [2024-10-13 01:40:28.045713] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.721 [2024-10-13 01:40:28.045720] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc14720) 00:29:42.721 [2024-10-13 01:40:28.045730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.721 [2024-10-13 01:40:28.045752] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6d300, cid 0, qid 0 00:29:42.721 [2024-10-13 01:40:28.045881] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.721 [2024-10-13 01:40:28.045895] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:29:42.721 [2024-10-13 01:40:28.045902] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.721 [2024-10-13 01:40:28.045909] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6d300) on tqpair=0xc14720 00:29:42.721 [2024-10-13 01:40:28.045918] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:42.721 [2024-10-13 01:40:28.045934] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.721 [2024-10-13 01:40:28.045943] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.721 [2024-10-13 01:40:28.045949] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc14720) 00:29:42.721 [2024-10-13 01:40:28.045960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.721 [2024-10-13 01:40:28.045980] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6d300, cid 0, qid 0 00:29:42.721 [2024-10-13 01:40:28.046082] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.721 [2024-10-13 01:40:28.046096] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.721 [2024-10-13 01:40:28.046103] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.721 [2024-10-13 01:40:28.046110] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6d300) on tqpair=0xc14720 00:29:42.721 [2024-10-13 01:40:28.046123] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:42.721 [2024-10-13 01:40:28.046132] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:29:42.721 [2024-10-13 01:40:28.046145] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:29:42.721 [2024-10-13 01:40:28.046159] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:29:42.721 [2024-10-13 01:40:28.046176] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.721 [2024-10-13 01:40:28.046184] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc14720) 00:29:42.721 [2024-10-13 01:40:28.046194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.721 [2024-10-13 01:40:28.046215] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6d300, cid 0, qid 0 00:29:42.721 [2024-10-13 01:40:28.046345] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:42.721 [2024-10-13 01:40:28.046360] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:42.721 [2024-10-13 01:40:28.046367] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:42.721 [2024-10-13 01:40:28.046374] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc14720): datao=0, datal=4096, cccid=0 00:29:42.721 [2024-10-13 01:40:28.046382] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc6d300) on tqpair(0xc14720): expected_datao=0, payload_size=4096 
00:29:42.721 [2024-10-13 01:40:28.046391] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.721 [2024-10-13 01:40:28.046409] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:42.721 [2024-10-13 01:40:28.046419] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:42.721 [2024-10-13 01:40:28.088498] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.721 [2024-10-13 01:40:28.088525] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.721 [2024-10-13 01:40:28.088533] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.721 [2024-10-13 01:40:28.088540] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6d300) on tqpair=0xc14720 00:29:42.721 [2024-10-13 01:40:28.088553] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:29:42.722 [2024-10-13 01:40:28.088562] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:29:42.722 [2024-10-13 01:40:28.088569] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:29:42.722 [2024-10-13 01:40:28.088578] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:29:42.722 [2024-10-13 01:40:28.088585] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:29:42.722 [2024-10-13 01:40:28.088593] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:29:42.722 [2024-10-13 01:40:28.088609] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:29:42.722 [2024-10-13 01:40:28.088638] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.722 [2024-10-13 01:40:28.088646] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.722 [2024-10-13 01:40:28.088653] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc14720) 00:29:42.722 [2024-10-13 01:40:28.088666] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:42.722 [2024-10-13 01:40:28.088690] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6d300, cid 0, qid 0 00:29:42.722 [2024-10-13 01:40:28.088829] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.722 [2024-10-13 01:40:28.088842] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.722 [2024-10-13 01:40:28.088849] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.722 [2024-10-13 01:40:28.088856] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6d300) on tqpair=0xc14720 00:29:42.722 [2024-10-13 01:40:28.088868] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.722 [2024-10-13 01:40:28.088876] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.722 [2024-10-13 01:40:28.088882] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc14720) 00:29:42.722 [2024-10-13 01:40:28.088892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.722 [2024-10-13 01:40:28.088902] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.722 [2024-10-13 01:40:28.088909] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.722 [2024-10-13 01:40:28.088915] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xc14720) 00:29:42.722 [2024-10-13 01:40:28.088924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.722 [2024-10-13 01:40:28.088933] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.722 [2024-10-13 01:40:28.088940] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.722 [2024-10-13 01:40:28.088946] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xc14720) 00:29:42.722 [2024-10-13 01:40:28.088954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.722 [2024-10-13 01:40:28.088964] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.722 [2024-10-13 01:40:28.088971] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.722 [2024-10-13 01:40:28.088977] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc14720) 00:29:42.722 [2024-10-13 01:40:28.088986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.722 [2024-10-13 01:40:28.088995] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:29:42.722 [2024-10-13 01:40:28.089015] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:42.722 [2024-10-13 01:40:28.089028] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.722 [2024-10-13 01:40:28.089035] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc14720) 00:29:42.722 [2024-10-13 01:40:28.089046] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.722 [2024-10-13 01:40:28.089084] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6d300, cid 0, qid 0 00:29:42.722 [2024-10-13 01:40:28.089096] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6d480, cid 1, qid 0 00:29:42.722 [2024-10-13 01:40:28.089103] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6d600, cid 2, qid 0 00:29:42.722 [2024-10-13 01:40:28.089110] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6d780, cid 3, qid 0 00:29:42.722 [2024-10-13 01:40:28.089117] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6d900, cid 4, qid 0 00:29:42.722 [2024-10-13 01:40:28.089319] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.722 [2024-10-13 01:40:28.089334] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.722 [2024-10-13 01:40:28.089341] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.722 [2024-10-13 01:40:28.089348] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6d900) on 
tqpair=0xc14720 00:29:42.722 [2024-10-13 01:40:28.089363] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:29:42.722 [2024-10-13 01:40:28.089373] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:29:42.722 [2024-10-13 01:40:28.089390] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.722 [2024-10-13 01:40:28.089400] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc14720) 00:29:42.722 [2024-10-13 01:40:28.089410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.722 [2024-10-13 01:40:28.089433] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6d900, cid 4, qid 0 00:29:42.722 [2024-10-13 01:40:28.089570] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:42.722 [2024-10-13 01:40:28.089586] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:42.722 [2024-10-13 01:40:28.089593] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:42.722 [2024-10-13 01:40:28.089599] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc14720): datao=0, datal=4096, cccid=4 00:29:42.722 [2024-10-13 01:40:28.089608] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc6d900) on tqpair(0xc14720): expected_datao=0, payload_size=4096 00:29:42.722 [2024-10-13 01:40:28.089615] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.722 [2024-10-13 01:40:28.089626] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:42.722 [2024-10-13 01:40:28.089634] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:42.722 [2024-10-13 01:40:28.089654] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.722 [2024-10-13 01:40:28.089667] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.722 [2024-10-13 01:40:28.089674] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.722 [2024-10-13 01:40:28.089680] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6d900) on tqpair=0xc14720 00:29:42.722 [2024-10-13 01:40:28.089700] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:29:42.722 [2024-10-13 01:40:28.089754] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.722 [2024-10-13 01:40:28.089765] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc14720) 00:29:42.722 [2024-10-13 01:40:28.089776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.722 [2024-10-13 01:40:28.089787] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.722 [2024-10-13 01:40:28.089794] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.722 [2024-10-13 01:40:28.089801] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc14720) 00:29:42.722 [2024-10-13 01:40:28.089809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.722 [2024-10-13 01:40:28.089831] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6d900, cid 4, qid 0 00:29:42.722 [2024-10-13 01:40:28.089843] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6da80, cid 5, qid 0 00:29:42.722 [2024-10-13 01:40:28.090019] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:42.722 [2024-10-13 01:40:28.090031] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:42.722 [2024-10-13 01:40:28.090037] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:42.722 [2024-10-13 01:40:28.090044] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc14720): datao=0, datal=1024, cccid=4 00:29:42.722 [2024-10-13 01:40:28.090051] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc6d900) on tqpair(0xc14720): expected_datao=0, payload_size=1024 00:29:42.722 [2024-10-13 01:40:28.090058] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.722 [2024-10-13 01:40:28.090072] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:42.722 [2024-10-13 01:40:28.090080] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:42.722 [2024-10-13 01:40:28.090089] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.722 [2024-10-13 01:40:28.090098] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.722 [2024-10-13 01:40:28.090104] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.722 [2024-10-13 01:40:28.090111] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6da80) on tqpair=0xc14720 00:29:42.722 [2024-10-13 01:40:28.134507] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.722 [2024-10-13 01:40:28.134527] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.722 [2024-10-13 01:40:28.134535] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.722 [2024-10-13 01:40:28.134542] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6d900) on tqpair=0xc14720 00:29:42.722 [2024-10-13 01:40:28.134561] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.722 [2024-10-13 01:40:28.134570] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc14720) 00:29:42.722 [2024-10-13 01:40:28.134582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.722 [2024-10-13 01:40:28.134613] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6d900, cid 4, qid 0 00:29:42.722 [2024-10-13 01:40:28.134727] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:42.722 [2024-10-13 01:40:28.134743] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:42.722 [2024-10-13 01:40:28.134750] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:42.723 [2024-10-13 01:40:28.134757] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc14720): datao=0, datal=3072, cccid=4 00:29:42.723 [2024-10-13 01:40:28.134765] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc6d900) on tqpair(0xc14720): expected_datao=0, payload_size=3072 00:29:42.723 [2024-10-13 01:40:28.134772] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.723 [2024-10-13 01:40:28.134792] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:42.723 
[2024-10-13 01:40:28.134801] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:42.723 [2024-10-13 01:40:28.175555] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.723 [2024-10-13 01:40:28.175575] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.723 [2024-10-13 01:40:28.175582] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.723 [2024-10-13 01:40:28.175590] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6d900) on tqpair=0xc14720 00:29:42.723 [2024-10-13 01:40:28.175606] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.723 [2024-10-13 01:40:28.175615] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc14720) 00:29:42.723 [2024-10-13 01:40:28.175626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.723 [2024-10-13 01:40:28.175657] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6d900, cid 4, qid 0 00:29:42.723 [2024-10-13 01:40:28.175747] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:42.723 [2024-10-13 01:40:28.175760] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:42.723 [2024-10-13 01:40:28.175766] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:42.723 [2024-10-13 01:40:28.175773] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc14720): datao=0, datal=8, cccid=4 00:29:42.723 [2024-10-13 01:40:28.175780] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc6d900) on tqpair(0xc14720): expected_datao=0, payload_size=8 00:29:42.723 [2024-10-13 01:40:28.175787] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.723 [2024-10-13 01:40:28.175797] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:42.723 [2024-10-13 01:40:28.175809] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:42.723 [2024-10-13 01:40:28.219501] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.723 [2024-10-13 01:40:28.219520] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.723 [2024-10-13 01:40:28.219528] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.723 [2024-10-13 01:40:28.219535] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6d900) on tqpair=0xc14720 00:29:42.723 ===================================================== 00:29:42.723 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:42.723 ===================================================== 00:29:42.723 Controller Capabilities/Features 00:29:42.723 ================================ 00:29:42.723 Vendor ID: 0000 00:29:42.723 Subsystem Vendor ID: 0000 00:29:42.723 Serial Number: .................... 00:29:42.723 Model Number: ........................................ 
00:29:42.723 Firmware Version: 25.01 00:29:42.723 Recommended Arb Burst: 0 00:29:42.723 IEEE OUI Identifier: 00 00 00 00:29:42.723 Multi-path I/O 00:29:42.723 May have multiple subsystem ports: No 00:29:42.723 May have multiple controllers: No 00:29:42.723 Associated with SR-IOV VF: No 00:29:42.723 Max Data Transfer Size: 131072 00:29:42.723 Max Number of Namespaces: 0 00:29:42.723 Max Number of I/O Queues: 1024 00:29:42.723 NVMe Specification Version (VS): 1.3 00:29:42.723 NVMe Specification Version (Identify): 1.3 00:29:42.723 Maximum Queue Entries: 128 00:29:42.723 Contiguous Queues Required: Yes 00:29:42.723 Arbitration Mechanisms Supported 00:29:42.723 Weighted Round Robin: Not Supported 00:29:42.723 Vendor Specific: Not Supported 00:29:42.723 Reset Timeout: 15000 ms 00:29:42.723 Doorbell Stride: 4 bytes 00:29:42.723 NVM Subsystem Reset: Not Supported 00:29:42.723 Command Sets Supported 00:29:42.723 NVM Command Set: Supported 00:29:42.723 Boot Partition: Not Supported 00:29:42.723 Memory Page Size Minimum: 4096 bytes 00:29:42.723 Memory Page Size Maximum: 4096 bytes 00:29:42.723 Persistent Memory Region: Not Supported 00:29:42.723 Optional Asynchronous Events Supported 00:29:42.723 Namespace Attribute Notices: Not Supported 00:29:42.723 Firmware Activation Notices: Not Supported 00:29:42.723 ANA Change Notices: Not Supported 00:29:42.723 PLE Aggregate Log Change Notices: Not Supported 00:29:42.723 LBA Status Info Alert Notices: Not Supported 00:29:42.723 EGE Aggregate Log Change Notices: Not Supported 00:29:42.723 Normal NVM Subsystem Shutdown event: Not Supported 00:29:42.723 Zone Descriptor Change Notices: Not Supported 00:29:42.723 Discovery Log Change Notices: Supported 00:29:42.723 Controller Attributes 00:29:42.723 128-bit Host Identifier: Not Supported 00:29:42.723 Non-Operational Permissive Mode: Not Supported 00:29:42.723 NVM Sets: Not Supported 00:29:42.723 Read Recovery Levels: Not Supported 00:29:42.723 Endurance Groups: Not Supported 00:29:42.723 Predictable Latency Mode: Not Supported 00:29:42.723 Traffic Based Keep ALive: Not Supported 00:29:42.723 Namespace Granularity: Not Supported 00:29:42.723 SQ Associations: Not Supported 00:29:42.723 UUID List: Not Supported 00:29:42.723 Multi-Domain Subsystem: Not Supported 00:29:42.723 Fixed Capacity Management: Not Supported 00:29:42.723 Variable Capacity Management: Not Supported 00:29:42.723 Delete Endurance Group: Not Supported 00:29:42.723 Delete NVM Set: Not Supported 00:29:42.723 Extended LBA Formats Supported: Not Supported 00:29:42.723 Flexible Data Placement Supported: Not Supported 00:29:42.723 00:29:42.723 Controller Memory Buffer Support 00:29:42.723 ================================ 00:29:42.723 Supported: No 00:29:42.723 00:29:42.723 Persistent Memory Region Support 00:29:42.723 ================================ 00:29:42.723 Supported: No 00:29:42.723 00:29:42.723 Admin Command Set Attributes 00:29:42.723 ============================ 00:29:42.723 Security Send/Receive: Not Supported 00:29:42.723 Format NVM: Not Supported 00:29:42.723 Firmware Activate/Download: Not Supported 00:29:42.723 Namespace Management: Not Supported 00:29:42.723 Device Self-Test: Not Supported 00:29:42.723 Directives: Not Supported 00:29:42.723 NVMe-MI: Not Supported 00:29:42.723 Virtualization Management: Not Supported 00:29:42.723 Doorbell Buffer Config: Not Supported 00:29:42.723 Get LBA Status Capability: Not Supported 00:29:42.723 Command & Feature Lockdown Capability: Not Supported 00:29:42.723 Abort Command Limit: 1 00:29:42.723 Async 
Event Request Limit: 4 00:29:42.723 Number of Firmware Slots: N/A 00:29:42.723 Firmware Slot 1 Read-Only: N/A 00:29:42.723 Firmware Activation Without Reset: N/A 00:29:42.723 Multiple Update Detection Support: N/A 00:29:42.723 Firmware Update Granularity: No Information Provided 00:29:42.723 Per-Namespace SMART Log: No 00:29:42.723 Asymmetric Namespace Access Log Page: Not Supported 00:29:42.723 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:29:42.723 Command Effects Log Page: Not Supported 00:29:42.723 Get Log Page Extended Data: Supported 00:29:42.723 Telemetry Log Pages: Not Supported 00:29:42.723 Persistent Event Log Pages: Not Supported 00:29:42.723 Supported Log Pages Log Page: May Support 00:29:42.723 Commands Supported & Effects Log Page: Not Supported 00:29:42.723 Feature Identifiers & Effects Log Page:May Support 00:29:42.723 NVMe-MI Commands & Effects Log Page: May Support 00:29:42.723 Data Area 4 for Telemetry Log: Not Supported 00:29:42.723 Error Log Page Entries Supported: 128 00:29:42.723 Keep Alive: Not Supported 00:29:42.723 00:29:42.723 NVM Command Set Attributes 00:29:42.723 ========================== 00:29:42.723 Submission Queue Entry Size 00:29:42.723 Max: 1 00:29:42.723 Min: 1 00:29:42.723 Completion Queue Entry Size 00:29:42.723 Max: 1 00:29:42.723 Min: 1 00:29:42.723 Number of Namespaces: 0 00:29:42.723 Compare Command: Not Supported 00:29:42.723 Write Uncorrectable Command: Not Supported 00:29:42.723 Dataset Management Command: Not Supported 00:29:42.723 Write Zeroes Command: Not Supported 00:29:42.723 Set Features Save Field: Not Supported 00:29:42.723 Reservations: Not Supported 00:29:42.723 Timestamp: Not Supported 00:29:42.723 Copy: Not Supported 00:29:42.723 Volatile Write Cache: Not Present 00:29:42.723 Atomic Write Unit (Normal): 1 00:29:42.723 Atomic Write Unit (PFail): 1 00:29:42.723 Atomic Compare & Write Unit: 1 00:29:42.723 Fused Compare & Write: Supported 00:29:42.723 Scatter-Gather List 00:29:42.723 SGL Command Set: Supported 00:29:42.723 SGL Keyed: Supported 00:29:42.723 SGL Bit Bucket Descriptor: Not Supported 00:29:42.723 SGL Metadata Pointer: Not Supported 00:29:42.723 Oversized SGL: Not Supported 00:29:42.723 SGL Metadata Address: Not Supported 00:29:42.723 SGL Offset: Supported 00:29:42.723 Transport SGL Data Block: Not Supported 00:29:42.723 Replay Protected Memory Block: Not Supported 00:29:42.723 00:29:42.723 Firmware Slot Information 00:29:42.723 ========================= 00:29:42.723 Active slot: 0 00:29:42.723 00:29:42.723 00:29:42.723 Error Log 00:29:42.723 ========= 00:29:42.723 00:29:42.723 Active Namespaces 00:29:42.723 ================= 00:29:42.723 Discovery Log Page 00:29:42.723 ================== 00:29:42.723 Generation Counter: 2 00:29:42.723 Number of Records: 2 00:29:42.724 Record Format: 0 00:29:42.724 00:29:42.724 Discovery Log Entry 0 00:29:42.724 ---------------------- 00:29:42.724 Transport Type: 3 (TCP) 00:29:42.724 Address Family: 1 (IPv4) 00:29:42.724 Subsystem Type: 3 (Current Discovery Subsystem) 00:29:42.724 Entry Flags: 00:29:42.724 Duplicate Returned Information: 1 00:29:42.724 Explicit Persistent Connection Support for Discovery: 1 00:29:42.724 Transport Requirements: 00:29:42.724 Secure Channel: Not Required 00:29:42.724 Port ID: 0 (0x0000) 00:29:42.724 Controller ID: 65535 (0xffff) 00:29:42.724 Admin Max SQ Size: 128 00:29:42.724 Transport Service Identifier: 4420 00:29:42.724 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:29:42.724 Transport Address: 10.0.0.2 00:29:42.724 
Discovery Log Entry 1 00:29:42.724 ---------------------- 00:29:42.724 Transport Type: 3 (TCP) 00:29:42.724 Address Family: 1 (IPv4) 00:29:42.724 Subsystem Type: 2 (NVM Subsystem) 00:29:42.724 Entry Flags: 00:29:42.724 Duplicate Returned Information: 0 00:29:42.724 Explicit Persistent Connection Support for Discovery: 0 00:29:42.724 Transport Requirements: 00:29:42.724 Secure Channel: Not Required 00:29:42.724 Port ID: 0 (0x0000) 00:29:42.724 Controller ID: 65535 (0xffff) 00:29:42.724 Admin Max SQ Size: 128 00:29:42.724 Transport Service Identifier: 4420 00:29:42.724 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:29:42.724 Transport Address: 10.0.0.2 [2024-10-13 01:40:28.219653] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:29:42.724 [2024-10-13 01:40:28.219676] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6d300) on tqpair=0xc14720 00:29:42.724 [2024-10-13 01:40:28.219690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.724 [2024-10-13 01:40:28.219699] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6d480) on tqpair=0xc14720 00:29:42.724 [2024-10-13 01:40:28.219706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.724 [2024-10-13 01:40:28.219715] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6d600) on tqpair=0xc14720 00:29:42.724 [2024-10-13 01:40:28.219722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.724 [2024-10-13 01:40:28.219730] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6d780) on tqpair=0xc14720 00:29:42.724 [2024-10-13 01:40:28.219737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.724 [2024-10-13 01:40:28.219752] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.724 [2024-10-13 01:40:28.219759] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.724 [2024-10-13 01:40:28.219766] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc14720) 00:29:42.724 [2024-10-13 01:40:28.219777] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.724 [2024-10-13 01:40:28.219818] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6d780, cid 3, qid 0 00:29:42.724 [2024-10-13 01:40:28.219969] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.724 [2024-10-13 01:40:28.219981] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.724 [2024-10-13 01:40:28.219988] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.724 [2024-10-13 01:40:28.219995] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6d780) on tqpair=0xc14720 00:29:42.724 [2024-10-13 01:40:28.220007] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.724 [2024-10-13 01:40:28.220014] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.724 [2024-10-13 01:40:28.220020] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc14720) 00:29:42.724 [2024-10-13 01:40:28.220031] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.724 [2024-10-13 01:40:28.220057] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6d780, cid 3, qid 0 00:29:42.724 [2024-10-13 01:40:28.220155] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.724 [2024-10-13 01:40:28.220169] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.724 [2024-10-13 01:40:28.220176] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.724 [2024-10-13 01:40:28.220182] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6d780) on tqpair=0xc14720 00:29:42.724 [2024-10-13 01:40:28.220192] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:29:42.724 [2024-10-13 01:40:28.220206] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:29:42.724 [2024-10-13 01:40:28.220223] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.724 [2024-10-13 01:40:28.220235] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.724 [2024-10-13 01:40:28.220242] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc14720) 00:29:42.724 [2024-10-13 01:40:28.220253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.724 [2024-10-13 01:40:28.220274] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6d780, cid 3, qid 0 00:29:42.724 [2024-10-13 01:40:28.220367] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.724 [2024-10-13 01:40:28.220379] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.724 [2024-10-13 01:40:28.220386] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.724 [2024-10-13 01:40:28.220393] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6d780) on tqpair=0xc14720 00:29:42.724 [2024-10-13 01:40:28.220410] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.724 [2024-10-13 01:40:28.220418] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.724 [2024-10-13 01:40:28.220424] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc14720) 00:29:42.724 [2024-10-13 01:40:28.220435] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.724 [2024-10-13 01:40:28.220455] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6d780, cid 3, qid 0 00:29:42.724 [2024-10-13 01:40:28.220548] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.724 [2024-10-13 01:40:28.220563] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.724 [2024-10-13 01:40:28.220570] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.724 [2024-10-13 01:40:28.220577] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6d780) on tqpair=0xc14720 00:29:42.724 [2024-10-13 01:40:28.220593] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.724 [2024-10-13 01:40:28.220602] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.724 [2024-10-13 01:40:28.220609] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc14720) 00:29:42.724 [2024-10-13 01:40:28.220619] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.724 [2024-10-13 01:40:28.220640] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6d780, cid 3, qid 0 00:29:42.724 [2024-10-13 01:40:28.220719] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.724 [2024-10-13 01:40:28.220731] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.724 [2024-10-13 01:40:28.220738] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.724 [2024-10-13 01:40:28.220744] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6d780) on tqpair=0xc14720 00:29:42.724 [2024-10-13 01:40:28.220760] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.724 [2024-10-13 01:40:28.220768] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.724 [2024-10-13 01:40:28.220775] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc14720) 00:29:42.724 [2024-10-13 01:40:28.220785] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.724 [2024-10-13 01:40:28.220805] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6d780, cid 3, qid 0 00:29:42.724 [2024-10-13 01:40:28.220900] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.724 [2024-10-13 01:40:28.220913] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.724 [2024-10-13 01:40:28.220920] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.724 [2024-10-13 01:40:28.220927] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6d780) on tqpair=0xc14720 00:29:42.724 [2024-10-13 01:40:28.220943] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.724 [2024-10-13 01:40:28.220951] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.724 [2024-10-13 01:40:28.220964] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc14720) 00:29:42.724 [2024-10-13 01:40:28.220975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.724 [2024-10-13 01:40:28.220996] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6d780, cid 3, qid 0 00:29:42.724 [2024-10-13 01:40:28.221077] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.724 [2024-10-13 01:40:28.221091] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.724 [2024-10-13 01:40:28.221098] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.724 [2024-10-13 01:40:28.221104] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6d780) on tqpair=0xc14720 00:29:42.724 [2024-10-13 01:40:28.221121] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.724 [2024-10-13 01:40:28.221129] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.724 [2024-10-13 01:40:28.221135] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc14720) 00:29:42.724 [2024-10-13 01:40:28.221146] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.724 [2024-10-13 01:40:28.221166] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6d780, cid 3, qid 0 00:29:42.724 [2024-10-13 01:40:28.221238] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.724 [2024-10-13 01:40:28.221250] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.724 [2024-10-13 01:40:28.221257] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.725 [2024-10-13 01:40:28.221263] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6d780) on tqpair=0xc14720 00:29:42.725 [2024-10-13 01:40:28.221279] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.725 [2024-10-13 01:40:28.221287] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.725 [2024-10-13 01:40:28.221293] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc14720) 00:29:42.725 [2024-10-13 01:40:28.221304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.725 [2024-10-13 01:40:28.221324] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6d780, cid 3, qid 0 00:29:42.725 [2024-10-13 01:40:28.221417] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.725 [2024-10-13 01:40:28.221431] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.725 [2024-10-13 01:40:28.221438] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.725 [2024-10-13 01:40:28.221444] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6d780) on tqpair=0xc14720 00:29:42.725 [2024-10-13 01:40:28.221461] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.725 [2024-10-13 01:40:28.221478] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.725 [2024-10-13 01:40:28.221486] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc14720) 00:29:42.725 [2024-10-13 01:40:28.221497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.725 [2024-10-13 01:40:28.221518] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6d780, cid 3, qid 0 00:29:42.725 [2024-10-13 01:40:28.221597] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.725 [2024-10-13 01:40:28.221611] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.725 [2024-10-13 01:40:28.221617] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.725 [2024-10-13 01:40:28.221624] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6d780) on tqpair=0xc14720 00:29:42.725 [2024-10-13 01:40:28.221640] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.725 [2024-10-13 01:40:28.221649] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.725 [2024-10-13 01:40:28.221655] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc14720) 00:29:42.725 [2024-10-13 01:40:28.221669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.725 [2024-10-13 01:40:28.221690] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6d780, cid 3, qid 0 00:29:42.725 [2024-10-13 01:40:28.221765] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.725 [2024-10-13 01:40:28.221779] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.725 [2024-10-13 01:40:28.221786] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.725 [2024-10-13 01:40:28.221792] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6d780) on tqpair=0xc14720 00:29:42.725 [2024-10-13 01:40:28.221808] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.725 [2024-10-13 01:40:28.221817] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.725 [2024-10-13 01:40:28.221823] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc14720) 00:29:42.725 [2024-10-13 01:40:28.221834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.725 [2024-10-13 01:40:28.221854] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6d780, cid 3, qid 0 00:29:42.725 [2024-10-13 01:40:28.221936] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.725 [2024-10-13 01:40:28.221950] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.725 [2024-10-13 01:40:28.221956] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.725 [2024-10-13 01:40:28.221963] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6d780) on tqpair=0xc14720 00:29:42.725 [2024-10-13 01:40:28.221979] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.725 [2024-10-13 01:40:28.221988] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.725 [2024-10-13 01:40:28.221994] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc14720) 00:29:42.725 [2024-10-13 01:40:28.222004] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.725 [2024-10-13 01:40:28.222024] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6d780, cid 3, qid 0 00:29:42.725 [2024-10-13 01:40:28.222097] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.725 [2024-10-13 01:40:28.222108] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.725 [2024-10-13 01:40:28.222115] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.725 [2024-10-13 01:40:28.222121] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6d780) on tqpair=0xc14720 00:29:42.725 [2024-10-13 01:40:28.222137] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.725 [2024-10-13 01:40:28.222146] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.725 [2024-10-13 01:40:28.222152] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc14720) 00:29:42.725 [2024-10-13 01:40:28.222162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.725 [2024-10-13 01:40:28.222182] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6d780, cid 3, qid 0 00:29:42.725 [2024-10-13 01:40:28.222258] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.725 [2024-10-13 01:40:28.222269] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.725 [2024-10-13 01:40:28.222276] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.725 [2024-10-13 01:40:28.222283] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6d780) on tqpair=0xc14720 00:29:42.725 [2024-10-13 01:40:28.222298] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.725 [2024-10-13 01:40:28.222307] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.725 [2024-10-13 01:40:28.222313] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc14720) 00:29:42.725 [2024-10-13 01:40:28.222323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.725 [2024-10-13 01:40:28.222347] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6d780, cid 3, qid 0 00:29:42.725 [2024-10-13 01:40:28.222421] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.725 [2024-10-13 01:40:28.222433] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.725 [2024-10-13 01:40:28.222440] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.725 [2024-10-13 01:40:28.222446] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6d780) on tqpair=0xc14720 00:29:42.725 [2024-10-13 01:40:28.222462] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.725 [2024-10-13 01:40:28.222477] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.725 [2024-10-13 01:40:28.222485] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc14720) 00:29:42.725 [2024-10-13 01:40:28.222495] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.725 [2024-10-13 01:40:28.222515] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6d780, cid 3, qid 0 00:29:42.725 [2024-10-13 01:40:28.222594] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.725 [2024-10-13 01:40:28.222608] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.725 [2024-10-13 01:40:28.222615] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.725 [2024-10-13 01:40:28.222622] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6d780) on tqpair=0xc14720 00:29:42.725 [2024-10-13 01:40:28.222637] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.725 [2024-10-13 01:40:28.222645] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.725 [2024-10-13 01:40:28.222652] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc14720) 00:29:42.725 [2024-10-13 01:40:28.222662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.725 [2024-10-13 01:40:28.222682] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6d780, cid 3, qid 0 00:29:42.725 [2024-10-13 01:40:28.222806] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.725 [2024-10-13 01:40:28.222818] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.725 [2024-10-13 01:40:28.222825] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.725 [2024-10-13 01:40:28.222831] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6d780) on tqpair=0xc14720 00:29:42.725 
[2024-10-13 01:40:28.222847] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.725 [2024-10-13 01:40:28.222856] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.725 [2024-10-13 01:40:28.222862] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc14720) 00:29:42.725 [2024-10-13 01:40:28.222872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.725 [2024-10-13 01:40:28.222892] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6d780, cid 3, qid 0 00:29:42.725 [2024-10-13 01:40:28.222966] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.725 [2024-10-13 01:40:28.222977] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.725 [2024-10-13 01:40:28.222984] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.725 [2024-10-13 01:40:28.222991] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6d780) on tqpair=0xc14720 00:29:42.725 [2024-10-13 01:40:28.223006] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.725 [2024-10-13 01:40:28.223015] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.725 [2024-10-13 01:40:28.223021] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc14720) 00:29:42.725 [2024-10-13 01:40:28.223031] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.726 [2024-10-13 01:40:28.223051] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6d780, cid 3, qid 0 00:29:42.726 [2024-10-13 01:40:28.223133] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.726 [2024-10-13 01:40:28.223147] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.726 [2024-10-13 01:40:28.223154] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.726 [2024-10-13 01:40:28.223161] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6d780) on tqpair=0xc14720 00:29:42.726 [2024-10-13 01:40:28.223177] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.726 [2024-10-13 01:40:28.223185] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.726 [2024-10-13 01:40:28.223192] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc14720) 00:29:42.726 [2024-10-13 01:40:28.223202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.726 [2024-10-13 01:40:28.223222] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6d780, cid 3, qid 0 00:29:42.726 [2024-10-13 01:40:28.223307] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.726 [2024-10-13 01:40:28.223320] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.726 [2024-10-13 01:40:28.223327] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.726 [2024-10-13 01:40:28.223334] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6d780) on tqpair=0xc14720 00:29:42.726 [2024-10-13 01:40:28.223350] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.726 [2024-10-13 01:40:28.223358] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.726 [2024-10-13 
01:40:28.223364] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc14720) 00:29:42.726 [2024-10-13 01:40:28.223375] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.726 [2024-10-13 01:40:28.223395] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6d780, cid 3, qid 0 00:29:42.726 [2024-10-13 01:40:28.227500] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.726 [2024-10-13 01:40:28.227517] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.726 [2024-10-13 01:40:28.227524] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.726 [2024-10-13 01:40:28.227531] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6d780) on tqpair=0xc14720 00:29:42.726 [2024-10-13 01:40:28.227548] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.726 [2024-10-13 01:40:28.227558] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.726 [2024-10-13 01:40:28.227564] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc14720) 00:29:42.726 [2024-10-13 01:40:28.227575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.726 [2024-10-13 01:40:28.227597] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6d780, cid 3, qid 0 00:29:42.726 [2024-10-13 01:40:28.227722] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.726 [2024-10-13 01:40:28.227734] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.726 [2024-10-13 01:40:28.227741] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.726 [2024-10-13 01:40:28.227747] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc6d780) on tqpair=0xc14720 00:29:42.726 [2024-10-13 01:40:28.227760] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:29:42.726 00:29:42.726 01:40:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:29:42.726 [2024-10-13 01:40:28.263642] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
00:29:42.726 [2024-10-13 01:40:28.263690] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1699596 ] 00:29:42.986 [2024-10-13 01:40:28.297451] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:29:42.987 [2024-10-13 01:40:28.297529] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:42.987 [2024-10-13 01:40:28.297542] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:42.987 [2024-10-13 01:40:28.297557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:42.987 [2024-10-13 01:40:28.297569] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:42.987 [2024-10-13 01:40:28.297978] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:29:42.987 [2024-10-13 01:40:28.298022] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2070720 0 00:29:42.987 [2024-10-13 01:40:28.308486] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:42.987 [2024-10-13 01:40:28.308508] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:42.987 [2024-10-13 01:40:28.308516] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:42.987 [2024-10-13 01:40:28.308522] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:42.987 [2024-10-13 01:40:28.308556] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.987 [2024-10-13 01:40:28.308567] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.987 [2024-10-13 01:40:28.308574] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2070720) 00:29:42.987 [2024-10-13 01:40:28.308588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:42.987 [2024-10-13 01:40:28.308614] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c9300, cid 0, qid 0 00:29:42.987 [2024-10-13 01:40:28.319489] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.987 [2024-10-13 01:40:28.319508] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.987 [2024-10-13 01:40:28.319515] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.987 [2024-10-13 01:40:28.319522] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c9300) on tqpair=0x2070720 00:29:42.987 [2024-10-13 01:40:28.319540] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:42.987 [2024-10-13 01:40:28.319550] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:29:42.987 [2024-10-13 01:40:28.319559] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:29:42.987 [2024-10-13 01:40:28.319577] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.987 [2024-10-13 01:40:28.319586] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.987 [2024-10-13 01:40:28.319592] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2070720) 00:29:42.987 [2024-10-13 01:40:28.319603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.987 [2024-10-13 01:40:28.319627] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c9300, cid 0, qid 0 00:29:42.987 [2024-10-13 01:40:28.319763] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.987 [2024-10-13 01:40:28.319776] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.987 [2024-10-13 01:40:28.319782] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.987 [2024-10-13 01:40:28.319789] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c9300) on tqpair=0x2070720 00:29:42.987 [2024-10-13 01:40:28.319801] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:29:42.987 [2024-10-13 01:40:28.319816] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:29:42.987 [2024-10-13 01:40:28.319828] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.987 [2024-10-13 01:40:28.319836] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.987 [2024-10-13 01:40:28.319842] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2070720) 00:29:42.987 [2024-10-13 01:40:28.319853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.987 [2024-10-13 01:40:28.319874] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c9300, cid 0, qid 0 00:29:42.987 [2024-10-13 01:40:28.319961] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.987 [2024-10-13 01:40:28.319973] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.987 [2024-10-13 01:40:28.319980] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.987 [2024-10-13 01:40:28.319987] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c9300) on tqpair=0x2070720 00:29:42.987 [2024-10-13 01:40:28.319995] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:29:42.987 [2024-10-13 01:40:28.320008] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:29:42.987 [2024-10-13 01:40:28.320020] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.987 [2024-10-13 01:40:28.320028] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.987 [2024-10-13 01:40:28.320034] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2070720) 00:29:42.987 [2024-10-13 01:40:28.320045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.987 [2024-10-13 01:40:28.320065] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c9300, cid 0, qid 0 00:29:42.987 [2024-10-13 01:40:28.320155] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.987 [2024-10-13 01:40:28.320167] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.987 [2024-10-13 01:40:28.320174] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.987 [2024-10-13 01:40:28.320181] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c9300) on tqpair=0x2070720 00:29:42.987 [2024-10-13 01:40:28.320189] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:42.987 [2024-10-13 01:40:28.320205] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.987 [2024-10-13 01:40:28.320214] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.987 [2024-10-13 01:40:28.320221] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2070720) 00:29:42.987 [2024-10-13 01:40:28.320231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.987 [2024-10-13 01:40:28.320251] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c9300, cid 0, qid 0 00:29:42.987 [2024-10-13 01:40:28.320337] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.987 [2024-10-13 01:40:28.320350] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.987 [2024-10-13 01:40:28.320356] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.987 [2024-10-13 01:40:28.320363] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c9300) on tqpair=0x2070720 00:29:42.987 [2024-10-13 01:40:28.320370] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:29:42.987 [2024-10-13 01:40:28.320378] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:29:42.987 [2024-10-13 01:40:28.320395] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:42.987 [2024-10-13 01:40:28.320506] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:29:42.987 [2024-10-13 01:40:28.320515] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:42.987 [2024-10-13 01:40:28.320527] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.987 [2024-10-13 01:40:28.320534] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.987 [2024-10-13 01:40:28.320541] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2070720) 00:29:42.987 [2024-10-13 01:40:28.320551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.987 [2024-10-13 01:40:28.320573] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c9300, cid 0, qid 0 00:29:42.987 [2024-10-13 01:40:28.320692] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.987 [2024-10-13 01:40:28.320705] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.987 [2024-10-13 01:40:28.320711] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.987 [2024-10-13 01:40:28.320718] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c9300) on tqpair=0x2070720 00:29:42.987 [2024-10-13 01:40:28.320726] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:42.987 [2024-10-13 01:40:28.320742] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.987 [2024-10-13 01:40:28.320751] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.987 [2024-10-13 01:40:28.320757] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2070720) 00:29:42.987 [2024-10-13 01:40:28.320768] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.987 [2024-10-13 01:40:28.320789] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c9300, cid 0, qid 0 00:29:42.987 [2024-10-13 01:40:28.320875] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.987 [2024-10-13 01:40:28.320889] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.987 [2024-10-13 01:40:28.320896] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.987 [2024-10-13 01:40:28.320902] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c9300) on tqpair=0x2070720 00:29:42.987 [2024-10-13 01:40:28.320910] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:42.987 [2024-10-13 01:40:28.320918] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:29:42.987 [2024-10-13 01:40:28.320931] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:29:42.987 [2024-10-13 01:40:28.320949] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:29:42.987 [2024-10-13 01:40:28.320963] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.987 [2024-10-13 01:40:28.320971] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2070720) 00:29:42.988 [2024-10-13 01:40:28.320982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.988 [2024-10-13 01:40:28.321004] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c9300, cid 0, qid 0 00:29:42.988 [2024-10-13 01:40:28.321136] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:42.988 [2024-10-13 01:40:28.321149] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:42.988 [2024-10-13 01:40:28.321155] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:42.988 [2024-10-13 01:40:28.321165] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2070720): datao=0, datal=4096, cccid=0 00:29:42.988 [2024-10-13 01:40:28.321174] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20c9300) on tqpair(0x2070720): expected_datao=0, payload_size=4096 00:29:42.988 [2024-10-13 01:40:28.321181] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.988 [2024-10-13 01:40:28.321198] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:42.988 [2024-10-13 01:40:28.321207] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:42.988 [2024-10-13 
01:40:28.362568] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.988 [2024-10-13 01:40:28.362588] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.988 [2024-10-13 01:40:28.362596] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.988 [2024-10-13 01:40:28.362603] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c9300) on tqpair=0x2070720 00:29:42.988 [2024-10-13 01:40:28.362614] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:29:42.988 [2024-10-13 01:40:28.362623] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:29:42.988 [2024-10-13 01:40:28.362630] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:29:42.988 [2024-10-13 01:40:28.362637] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:29:42.988 [2024-10-13 01:40:28.362644] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:29:42.988 [2024-10-13 01:40:28.362652] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:29:42.988 [2024-10-13 01:40:28.362666] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:29:42.988 [2024-10-13 01:40:28.362678] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.988 [2024-10-13 01:40:28.362686] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.988 [2024-10-13 01:40:28.362692] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2070720) 00:29:42.988 [2024-10-13 01:40:28.362704] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:42.988 [2024-10-13 01:40:28.362728] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c9300, cid 0, qid 0 00:29:42.988 [2024-10-13 01:40:28.362810] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.988 [2024-10-13 01:40:28.362824] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.988 [2024-10-13 01:40:28.362831] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.988 [2024-10-13 01:40:28.362838] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c9300) on tqpair=0x2070720 00:29:42.988 [2024-10-13 01:40:28.362849] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.988 [2024-10-13 01:40:28.362856] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.988 [2024-10-13 01:40:28.362862] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2070720) 00:29:42.988 [2024-10-13 01:40:28.362872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.988 [2024-10-13 01:40:28.362882] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.988 [2024-10-13 01:40:28.362889] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.988 [2024-10-13 01:40:28.362895] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2070720) 00:29:42.988 
[2024-10-13 01:40:28.362904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.988 [2024-10-13 01:40:28.362914] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.988 [2024-10-13 01:40:28.362925] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.988 [2024-10-13 01:40:28.362932] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2070720) 00:29:42.988 [2024-10-13 01:40:28.362941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.988 [2024-10-13 01:40:28.362951] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.988 [2024-10-13 01:40:28.362957] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.988 [2024-10-13 01:40:28.362964] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2070720) 00:29:42.988 [2024-10-13 01:40:28.362972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.988 [2024-10-13 01:40:28.362981] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:42.988 [2024-10-13 01:40:28.363000] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:42.988 [2024-10-13 01:40:28.363013] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.988 [2024-10-13 01:40:28.363020] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2070720) 00:29:42.988 [2024-10-13 01:40:28.363046] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.988 [2024-10-13 01:40:28.363069] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c9300, cid 0, qid 0 00:29:42.988 [2024-10-13 01:40:28.363080] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c9480, cid 1, qid 0 00:29:42.988 [2024-10-13 01:40:28.363087] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c9600, cid 2, qid 0 00:29:42.988 [2024-10-13 01:40:28.363110] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c9780, cid 3, qid 0 00:29:42.988 [2024-10-13 01:40:28.363118] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c9900, cid 4, qid 0 00:29:42.988 [2024-10-13 01:40:28.363287] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.988 [2024-10-13 01:40:28.363301] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.988 [2024-10-13 01:40:28.363307] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.988 [2024-10-13 01:40:28.363314] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c9900) on tqpair=0x2070720 00:29:42.988 [2024-10-13 01:40:28.363322] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:29:42.988 [2024-10-13 01:40:28.363331] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:29:42.988 [2024-10-13 01:40:28.363349] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:29:42.988 [2024-10-13 01:40:28.363365] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:29:42.988 [2024-10-13 01:40:28.363377] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.988 [2024-10-13 01:40:28.363384] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.988 [2024-10-13 01:40:28.363391] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2070720) 00:29:42.988 [2024-10-13 01:40:28.363416] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:42.988 [2024-10-13 01:40:28.363439] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c9900, cid 4, qid 0 00:29:42.988 [2024-10-13 01:40:28.363606] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.988 [2024-10-13 01:40:28.363621] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.988 [2024-10-13 01:40:28.363632] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.988 [2024-10-13 01:40:28.363639] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c9900) on tqpair=0x2070720 00:29:42.988 [2024-10-13 01:40:28.363708] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:29:42.988 [2024-10-13 01:40:28.363727] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:29:42.988 [2024-10-13 01:40:28.363742] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.988 [2024-10-13 01:40:28.363749] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2070720) 00:29:42.988 [2024-10-13 01:40:28.363760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.988 [2024-10-13 01:40:28.363782] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c9900, cid 4, qid 0 00:29:42.988 [2024-10-13 01:40:28.363888] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:42.988 [2024-10-13 01:40:28.363901] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:42.988 [2024-10-13 01:40:28.363907] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:42.988 [2024-10-13 01:40:28.363914] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2070720): datao=0, datal=4096, cccid=4 00:29:42.988 [2024-10-13 01:40:28.363921] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20c9900) on tqpair(0x2070720): expected_datao=0, payload_size=4096 00:29:42.988 [2024-10-13 01:40:28.363929] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.988 [2024-10-13 01:40:28.363945] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:42.988 [2024-10-13 01:40:28.363955] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:42.988 [2024-10-13 01:40:28.404565] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.988 [2024-10-13 01:40:28.404584] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:29:42.988 [2024-10-13 01:40:28.404592] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.988 [2024-10-13 01:40:28.404599] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c9900) on tqpair=0x2070720 00:29:42.989 [2024-10-13 01:40:28.404616] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:29:42.989 [2024-10-13 01:40:28.404638] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:29:42.989 [2024-10-13 01:40:28.404657] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:29:42.989 [2024-10-13 01:40:28.404671] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.989 [2024-10-13 01:40:28.404678] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2070720) 00:29:42.989 [2024-10-13 01:40:28.404690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.989 [2024-10-13 01:40:28.404714] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c9900, cid 4, qid 0 00:29:42.989 [2024-10-13 01:40:28.404831] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:42.989 [2024-10-13 01:40:28.404844] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:42.989 [2024-10-13 01:40:28.404850] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:42.989 [2024-10-13 01:40:28.404857] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2070720): datao=0, datal=4096, cccid=4 00:29:42.989 [2024-10-13 01:40:28.404864] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20c9900) on tqpair(0x2070720): expected_datao=0, payload_size=4096 00:29:42.989 [2024-10-13 01:40:28.404872] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.989 [2024-10-13 01:40:28.404889] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:42.989 [2024-10-13 01:40:28.404903] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:42.989 [2024-10-13 01:40:28.446600] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.989 [2024-10-13 01:40:28.446619] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.989 [2024-10-13 01:40:28.446627] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.989 [2024-10-13 01:40:28.446634] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c9900) on tqpair=0x2070720 00:29:42.989 [2024-10-13 01:40:28.446657] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:29:42.989 [2024-10-13 01:40:28.446678] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:29:42.989 [2024-10-13 01:40:28.446692] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.989 [2024-10-13 01:40:28.446700] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2070720) 00:29:42.989 [2024-10-13 01:40:28.446711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 
cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.989 [2024-10-13 01:40:28.446735] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c9900, cid 4, qid 0 00:29:42.989 [2024-10-13 01:40:28.446840] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:42.989 [2024-10-13 01:40:28.446855] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:42.989 [2024-10-13 01:40:28.446862] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:42.989 [2024-10-13 01:40:28.446868] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2070720): datao=0, datal=4096, cccid=4 00:29:42.989 [2024-10-13 01:40:28.446876] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20c9900) on tqpair(0x2070720): expected_datao=0, payload_size=4096 00:29:42.989 [2024-10-13 01:40:28.446883] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.989 [2024-10-13 01:40:28.446893] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:42.989 [2024-10-13 01:40:28.446901] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:42.989 [2024-10-13 01:40:28.446912] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.989 [2024-10-13 01:40:28.446922] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.989 [2024-10-13 01:40:28.446928] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.989 [2024-10-13 01:40:28.446935] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c9900) on tqpair=0x2070720 00:29:42.989 [2024-10-13 01:40:28.446948] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:29:42.989 [2024-10-13 01:40:28.446963] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:29:42.989 [2024-10-13 01:40:28.446979] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:29:42.989 [2024-10-13 01:40:28.446991] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:29:42.989 [2024-10-13 01:40:28.447000] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:29:42.989 [2024-10-13 01:40:28.447008] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:29:42.989 [2024-10-13 01:40:28.447017] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:29:42.989 [2024-10-13 01:40:28.447025] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:29:42.989 [2024-10-13 01:40:28.447033] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:29:42.989 [2024-10-13 01:40:28.447056] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.989 [2024-10-13 01:40:28.447065] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2070720) 00:29:42.989 [2024-10-13 01:40:28.447076] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.989 [2024-10-13 01:40:28.447087] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.989 [2024-10-13 01:40:28.447094] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.989 [2024-10-13 01:40:28.447100] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2070720) 00:29:42.989 [2024-10-13 01:40:28.447109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:42.989 [2024-10-13 01:40:28.447131] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c9900, cid 4, qid 0 00:29:42.989 [2024-10-13 01:40:28.447159] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c9a80, cid 5, qid 0 00:29:42.989 [2024-10-13 01:40:28.447353] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.989 [2024-10-13 01:40:28.447368] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.989 [2024-10-13 01:40:28.447375] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.989 [2024-10-13 01:40:28.447381] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c9900) on tqpair=0x2070720 00:29:42.989 [2024-10-13 01:40:28.447391] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.989 [2024-10-13 01:40:28.447400] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.989 [2024-10-13 01:40:28.447407] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.989 [2024-10-13 01:40:28.447413] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c9a80) on tqpair=0x2070720 00:29:42.989 [2024-10-13 01:40:28.447429] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.989 [2024-10-13 01:40:28.447438] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2070720) 00:29:42.989 [2024-10-13 01:40:28.447449] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.989 [2024-10-13 01:40:28.447477] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c9a80, cid 5, qid 0 00:29:42.989 [2024-10-13 01:40:28.447562] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.989 [2024-10-13 01:40:28.447576] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.989 [2024-10-13 01:40:28.447583] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.989 [2024-10-13 01:40:28.447589] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c9a80) on tqpair=0x2070720 00:29:42.989 [2024-10-13 01:40:28.447605] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.989 [2024-10-13 01:40:28.447614] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2070720) 00:29:42.989 [2024-10-13 01:40:28.447625] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.989 [2024-10-13 01:40:28.447646] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c9a80, cid 5, qid 0 00:29:42.989 [2024-10-13 01:40:28.447742] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.989 
[2024-10-13 01:40:28.447754] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.989 [2024-10-13 01:40:28.447761] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.989 [2024-10-13 01:40:28.447768] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c9a80) on tqpair=0x2070720 00:29:42.989 [2024-10-13 01:40:28.447783] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.989 [2024-10-13 01:40:28.447792] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2070720) 00:29:42.989 [2024-10-13 01:40:28.447803] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.989 [2024-10-13 01:40:28.447827] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c9a80, cid 5, qid 0 00:29:42.989 [2024-10-13 01:40:28.447920] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.989 [2024-10-13 01:40:28.447934] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.989 [2024-10-13 01:40:28.447941] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.989 [2024-10-13 01:40:28.447948] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c9a80) on tqpair=0x2070720 00:29:42.989 [2024-10-13 01:40:28.447972] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.989 [2024-10-13 01:40:28.447984] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2070720) 00:29:42.989 [2024-10-13 01:40:28.447994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.990 [2024-10-13 01:40:28.448007] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.990 [2024-10-13 01:40:28.448014] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2070720) 00:29:42.990 [2024-10-13 01:40:28.448024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.990 [2024-10-13 01:40:28.448035] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.990 [2024-10-13 01:40:28.448043] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x2070720) 00:29:42.990 [2024-10-13 01:40:28.448052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.990 [2024-10-13 01:40:28.448067] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.990 [2024-10-13 01:40:28.448076] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2070720) 00:29:42.990 [2024-10-13 01:40:28.448086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.990 [2024-10-13 01:40:28.448108] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c9a80, cid 5, qid 0 00:29:42.990 [2024-10-13 01:40:28.448119] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c9900, cid 4, qid 0 00:29:42.990 [2024-10-13 01:40:28.448127] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c9c00, cid 6, qid 0 00:29:42.990 [2024-10-13 01:40:28.448135] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c9d80, cid 7, qid 0 00:29:42.990 [2024-10-13 01:40:28.448340] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:42.990 [2024-10-13 01:40:28.448355] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:42.990 [2024-10-13 01:40:28.448361] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:42.990 [2024-10-13 01:40:28.448367] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2070720): datao=0, datal=8192, cccid=5 00:29:42.990 [2024-10-13 01:40:28.448375] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20c9a80) on tqpair(0x2070720): expected_datao=0, payload_size=8192 00:29:42.990 [2024-10-13 01:40:28.448382] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.990 [2024-10-13 01:40:28.448400] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:42.990 [2024-10-13 01:40:28.448410] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:42.990 [2024-10-13 01:40:28.448422] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:42.990 [2024-10-13 01:40:28.448432] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:42.990 [2024-10-13 01:40:28.448439] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:42.990 [2024-10-13 01:40:28.448445] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2070720): datao=0, datal=512, cccid=4 00:29:42.990 [2024-10-13 01:40:28.448456] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20c9900) on tqpair(0x2070720): expected_datao=0, payload_size=512 00:29:42.990 [2024-10-13 01:40:28.448464] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.990 [2024-10-13 01:40:28.452484] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:42.990 [2024-10-13 01:40:28.452497] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:42.990 [2024-10-13 01:40:28.452506] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:42.990 [2024-10-13 01:40:28.452515] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:42.990 [2024-10-13 01:40:28.452521] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:42.990 [2024-10-13 01:40:28.452527] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2070720): datao=0, datal=512, cccid=6 00:29:42.990 [2024-10-13 01:40:28.452534] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20c9c00) on tqpair(0x2070720): expected_datao=0, payload_size=512 00:29:42.990 [2024-10-13 01:40:28.452541] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.990 [2024-10-13 01:40:28.452550] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:42.990 [2024-10-13 01:40:28.452556] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:42.990 [2024-10-13 01:40:28.452565] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:42.990 [2024-10-13 01:40:28.452573] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:42.990 [2024-10-13 01:40:28.452579] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:42.990 [2024-10-13 01:40:28.452585] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x2070720): datao=0, datal=4096, cccid=7 00:29:42.990 [2024-10-13 01:40:28.452592] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20c9d80) on tqpair(0x2070720): expected_datao=0, payload_size=4096 00:29:42.990 [2024-10-13 01:40:28.452599] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.990 [2024-10-13 01:40:28.452608] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:42.990 [2024-10-13 01:40:28.452615] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:42.990 [2024-10-13 01:40:28.452627] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.990 [2024-10-13 01:40:28.452636] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.990 [2024-10-13 01:40:28.452643] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.990 [2024-10-13 01:40:28.452649] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c9a80) on tqpair=0x2070720 00:29:42.990 [2024-10-13 01:40:28.452668] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.990 [2024-10-13 01:40:28.452679] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.990 [2024-10-13 01:40:28.452685] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.990 [2024-10-13 01:40:28.452691] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c9900) on tqpair=0x2070720 00:29:42.990 [2024-10-13 01:40:28.452706] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.990 [2024-10-13 01:40:28.452716] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.990 [2024-10-13 01:40:28.452722] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.990 [2024-10-13 01:40:28.452729] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c9c00) on tqpair=0x2070720 00:29:42.990 [2024-10-13 01:40:28.452739] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.990 [2024-10-13 01:40:28.452748] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.990 [2024-10-13 01:40:28.452768] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.990 [2024-10-13 01:40:28.452774] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c9d80) on tqpair=0x2070720 00:29:42.990 ===================================================== 00:29:42.990 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:42.990 ===================================================== 00:29:42.990 Controller Capabilities/Features 00:29:42.990 ================================ 00:29:42.990 Vendor ID: 8086 00:29:42.990 Subsystem Vendor ID: 8086 00:29:42.990 Serial Number: SPDK00000000000001 00:29:42.990 Model Number: SPDK bdev Controller 00:29:42.990 Firmware Version: 25.01 00:29:42.990 Recommended Arb Burst: 6 00:29:42.990 IEEE OUI Identifier: e4 d2 5c 00:29:42.990 Multi-path I/O 00:29:42.990 May have multiple subsystem ports: Yes 00:29:42.990 May have multiple controllers: Yes 00:29:42.990 Associated with SR-IOV VF: No 00:29:42.990 Max Data Transfer Size: 131072 00:29:42.990 Max Number of Namespaces: 32 00:29:42.990 Max Number of I/O Queues: 127 00:29:42.990 NVMe Specification Version (VS): 1.3 00:29:42.990 NVMe Specification Version (Identify): 1.3 00:29:42.990 Maximum Queue Entries: 128 00:29:42.990 Contiguous Queues Required: Yes 00:29:42.990 Arbitration Mechanisms Supported 00:29:42.990 Weighted Round Robin: Not Supported 
00:29:42.990 Vendor Specific: Not Supported 00:29:42.990 Reset Timeout: 15000 ms 00:29:42.990 Doorbell Stride: 4 bytes 00:29:42.990 NVM Subsystem Reset: Not Supported 00:29:42.990 Command Sets Supported 00:29:42.990 NVM Command Set: Supported 00:29:42.990 Boot Partition: Not Supported 00:29:42.990 Memory Page Size Minimum: 4096 bytes 00:29:42.990 Memory Page Size Maximum: 4096 bytes 00:29:42.990 Persistent Memory Region: Not Supported 00:29:42.990 Optional Asynchronous Events Supported 00:29:42.990 Namespace Attribute Notices: Supported 00:29:42.990 Firmware Activation Notices: Not Supported 00:29:42.990 ANA Change Notices: Not Supported 00:29:42.990 PLE Aggregate Log Change Notices: Not Supported 00:29:42.990 LBA Status Info Alert Notices: Not Supported 00:29:42.990 EGE Aggregate Log Change Notices: Not Supported 00:29:42.990 Normal NVM Subsystem Shutdown event: Not Supported 00:29:42.990 Zone Descriptor Change Notices: Not Supported 00:29:42.990 Discovery Log Change Notices: Not Supported 00:29:42.990 Controller Attributes 00:29:42.990 128-bit Host Identifier: Supported 00:29:42.990 Non-Operational Permissive Mode: Not Supported 00:29:42.990 NVM Sets: Not Supported 00:29:42.990 Read Recovery Levels: Not Supported 00:29:42.990 Endurance Groups: Not Supported 00:29:42.990 Predictable Latency Mode: Not Supported 00:29:42.990 Traffic Based Keep ALive: Not Supported 00:29:42.990 Namespace Granularity: Not Supported 00:29:42.990 SQ Associations: Not Supported 00:29:42.990 UUID List: Not Supported 00:29:42.990 Multi-Domain Subsystem: Not Supported 00:29:42.990 Fixed Capacity Management: Not Supported 00:29:42.990 Variable Capacity Management: Not Supported 00:29:42.990 Delete Endurance Group: Not Supported 00:29:42.990 Delete NVM Set: Not Supported 00:29:42.990 Extended LBA Formats Supported: Not Supported 00:29:42.990 Flexible Data Placement Supported: Not Supported 00:29:42.990 00:29:42.990 Controller Memory Buffer Support 00:29:42.990 ================================ 00:29:42.990 Supported: No 00:29:42.990 00:29:42.990 Persistent Memory Region Support 00:29:42.990 ================================ 00:29:42.990 Supported: No 00:29:42.990 00:29:42.990 Admin Command Set Attributes 00:29:42.990 ============================ 00:29:42.990 Security Send/Receive: Not Supported 00:29:42.990 Format NVM: Not Supported 00:29:42.990 Firmware Activate/Download: Not Supported 00:29:42.990 Namespace Management: Not Supported 00:29:42.990 Device Self-Test: Not Supported 00:29:42.990 Directives: Not Supported 00:29:42.990 NVMe-MI: Not Supported 00:29:42.991 Virtualization Management: Not Supported 00:29:42.991 Doorbell Buffer Config: Not Supported 00:29:42.991 Get LBA Status Capability: Not Supported 00:29:42.991 Command & Feature Lockdown Capability: Not Supported 00:29:42.991 Abort Command Limit: 4 00:29:42.991 Async Event Request Limit: 4 00:29:42.991 Number of Firmware Slots: N/A 00:29:42.991 Firmware Slot 1 Read-Only: N/A 00:29:42.991 Firmware Activation Without Reset: N/A 00:29:42.991 Multiple Update Detection Support: N/A 00:29:42.991 Firmware Update Granularity: No Information Provided 00:29:42.991 Per-Namespace SMART Log: No 00:29:42.991 Asymmetric Namespace Access Log Page: Not Supported 00:29:42.991 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:29:42.991 Command Effects Log Page: Supported 00:29:42.991 Get Log Page Extended Data: Supported 00:29:42.991 Telemetry Log Pages: Not Supported 00:29:42.991 Persistent Event Log Pages: Not Supported 00:29:42.991 Supported Log Pages Log Page: May Support 
00:29:42.991 Commands Supported & Effects Log Page: Not Supported 00:29:42.991 Feature Identifiers & Effects Log Page:May Support 00:29:42.991 NVMe-MI Commands & Effects Log Page: May Support 00:29:42.991 Data Area 4 for Telemetry Log: Not Supported 00:29:42.991 Error Log Page Entries Supported: 128 00:29:42.991 Keep Alive: Supported 00:29:42.991 Keep Alive Granularity: 10000 ms 00:29:42.991 00:29:42.991 NVM Command Set Attributes 00:29:42.991 ========================== 00:29:42.991 Submission Queue Entry Size 00:29:42.991 Max: 64 00:29:42.991 Min: 64 00:29:42.991 Completion Queue Entry Size 00:29:42.991 Max: 16 00:29:42.991 Min: 16 00:29:42.991 Number of Namespaces: 32 00:29:42.991 Compare Command: Supported 00:29:42.991 Write Uncorrectable Command: Not Supported 00:29:42.991 Dataset Management Command: Supported 00:29:42.991 Write Zeroes Command: Supported 00:29:42.991 Set Features Save Field: Not Supported 00:29:42.991 Reservations: Supported 00:29:42.991 Timestamp: Not Supported 00:29:42.991 Copy: Supported 00:29:42.991 Volatile Write Cache: Present 00:29:42.991 Atomic Write Unit (Normal): 1 00:29:42.991 Atomic Write Unit (PFail): 1 00:29:42.991 Atomic Compare & Write Unit: 1 00:29:42.991 Fused Compare & Write: Supported 00:29:42.991 Scatter-Gather List 00:29:42.991 SGL Command Set: Supported 00:29:42.991 SGL Keyed: Supported 00:29:42.991 SGL Bit Bucket Descriptor: Not Supported 00:29:42.991 SGL Metadata Pointer: Not Supported 00:29:42.991 Oversized SGL: Not Supported 00:29:42.991 SGL Metadata Address: Not Supported 00:29:42.991 SGL Offset: Supported 00:29:42.991 Transport SGL Data Block: Not Supported 00:29:42.991 Replay Protected Memory Block: Not Supported 00:29:42.991 00:29:42.991 Firmware Slot Information 00:29:42.991 ========================= 00:29:42.991 Active slot: 1 00:29:42.991 Slot 1 Firmware Revision: 25.01 00:29:42.991 00:29:42.991 00:29:42.991 Commands Supported and Effects 00:29:42.991 ============================== 00:29:42.991 Admin Commands 00:29:42.991 -------------- 00:29:42.991 Get Log Page (02h): Supported 00:29:42.991 Identify (06h): Supported 00:29:42.991 Abort (08h): Supported 00:29:42.991 Set Features (09h): Supported 00:29:42.991 Get Features (0Ah): Supported 00:29:42.991 Asynchronous Event Request (0Ch): Supported 00:29:42.991 Keep Alive (18h): Supported 00:29:42.991 I/O Commands 00:29:42.991 ------------ 00:29:42.991 Flush (00h): Supported LBA-Change 00:29:42.991 Write (01h): Supported LBA-Change 00:29:42.991 Read (02h): Supported 00:29:42.991 Compare (05h): Supported 00:29:42.991 Write Zeroes (08h): Supported LBA-Change 00:29:42.991 Dataset Management (09h): Supported LBA-Change 00:29:42.991 Copy (19h): Supported LBA-Change 00:29:42.991 00:29:42.991 Error Log 00:29:42.991 ========= 00:29:42.991 00:29:42.991 Arbitration 00:29:42.991 =========== 00:29:42.991 Arbitration Burst: 1 00:29:42.991 00:29:42.991 Power Management 00:29:42.991 ================ 00:29:42.991 Number of Power States: 1 00:29:42.991 Current Power State: Power State #0 00:29:42.991 Power State #0: 00:29:42.991 Max Power: 0.00 W 00:29:42.991 Non-Operational State: Operational 00:29:42.991 Entry Latency: Not Reported 00:29:42.991 Exit Latency: Not Reported 00:29:42.991 Relative Read Throughput: 0 00:29:42.991 Relative Read Latency: 0 00:29:42.991 Relative Write Throughput: 0 00:29:42.991 Relative Write Latency: 0 00:29:42.991 Idle Power: Not Reported 00:29:42.991 Active Power: Not Reported 00:29:42.991 Non-Operational Permissive Mode: Not Supported 00:29:42.991 00:29:42.991 Health 
Information 00:29:42.991 ================== 00:29:42.991 Critical Warnings: 00:29:42.991 Available Spare Space: OK 00:29:42.991 Temperature: OK 00:29:42.991 Device Reliability: OK 00:29:42.991 Read Only: No 00:29:42.991 Volatile Memory Backup: OK 00:29:42.991 Current Temperature: 0 Kelvin (-273 Celsius) 00:29:42.991 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:29:42.991 Available Spare: 0% 00:29:42.991 Available Spare Threshold: 0% 00:29:42.991 Life Percentage Used:[2024-10-13 01:40:28.452883] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.991 [2024-10-13 01:40:28.452894] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2070720) 00:29:42.991 [2024-10-13 01:40:28.452904] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.991 [2024-10-13 01:40:28.452929] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c9d80, cid 7, qid 0 00:29:42.991 [2024-10-13 01:40:28.453142] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.991 [2024-10-13 01:40:28.453155] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.991 [2024-10-13 01:40:28.453162] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.991 [2024-10-13 01:40:28.453168] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c9d80) on tqpair=0x2070720 00:29:42.991 [2024-10-13 01:40:28.453213] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:29:42.991 [2024-10-13 01:40:28.453233] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c9300) on tqpair=0x2070720 00:29:42.991 [2024-10-13 01:40:28.453244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.991 [2024-10-13 01:40:28.453253] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c9480) on tqpair=0x2070720 00:29:42.991 [2024-10-13 01:40:28.453261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.991 [2024-10-13 01:40:28.453269] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c9600) on tqpair=0x2070720 00:29:42.991 [2024-10-13 01:40:28.453276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.991 [2024-10-13 01:40:28.453284] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c9780) on tqpair=0x2070720 00:29:42.991 [2024-10-13 01:40:28.453292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:42.991 [2024-10-13 01:40:28.453304] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.991 [2024-10-13 01:40:28.453327] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.991 [2024-10-13 01:40:28.453333] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2070720) 00:29:42.991 [2024-10-13 01:40:28.453344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.991 [2024-10-13 01:40:28.453367] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c9780, cid 3, qid 0 00:29:42.991 [2024-10-13 
01:40:28.453498] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.991 [2024-10-13 01:40:28.453512] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.992 [2024-10-13 01:40:28.453519] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.992 [2024-10-13 01:40:28.453526] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c9780) on tqpair=0x2070720 00:29:42.992 [2024-10-13 01:40:28.453537] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.992 [2024-10-13 01:40:28.453544] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.992 [2024-10-13 01:40:28.453551] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2070720) 00:29:42.992 [2024-10-13 01:40:28.453561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.992 [2024-10-13 01:40:28.453588] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c9780, cid 3, qid 0 00:29:42.992 [2024-10-13 01:40:28.453697] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.992 [2024-10-13 01:40:28.453711] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.992 [2024-10-13 01:40:28.453718] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.992 [2024-10-13 01:40:28.453724] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c9780) on tqpair=0x2070720 00:29:42.992 [2024-10-13 01:40:28.453732] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:29:42.992 [2024-10-13 01:40:28.453739] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:29:42.992 [2024-10-13 01:40:28.453760] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.992 [2024-10-13 01:40:28.453769] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.992 [2024-10-13 01:40:28.453776] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2070720) 00:29:42.992 [2024-10-13 01:40:28.453786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.992 [2024-10-13 01:40:28.453807] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c9780, cid 3, qid 0 00:29:42.992 [2024-10-13 01:40:28.453892] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.992 [2024-10-13 01:40:28.453904] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.992 [2024-10-13 01:40:28.453911] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.992 [2024-10-13 01:40:28.453918] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c9780) on tqpair=0x2070720 00:29:42.992 [2024-10-13 01:40:28.453934] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.992 [2024-10-13 01:40:28.453943] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.992 [2024-10-13 01:40:28.453949] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2070720) 00:29:42.992 [2024-10-13 01:40:28.453959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.992 [2024-10-13 01:40:28.453980] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c9780, cid 3, qid 0 00:29:42.992 [2024-10-13 01:40:28.454073] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.992 [2024-10-13 01:40:28.454085] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.992 [2024-10-13 01:40:28.454091] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.992 [2024-10-13 01:40:28.454098] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c9780) on tqpair=0x2070720 00:29:42.992 [2024-10-13 01:40:28.454114] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.992 [2024-10-13 01:40:28.454123] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.992 [2024-10-13 01:40:28.454129] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2070720) 00:29:42.992 [2024-10-13 01:40:28.454139] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.992 [2024-10-13 01:40:28.454159] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c9780, cid 3, qid 0 00:29:42.992 [2024-10-13 01:40:28.454252] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.992 [2024-10-13 01:40:28.454264] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.992 [2024-10-13 01:40:28.454271] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.992 [2024-10-13 01:40:28.454277] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c9780) on tqpair=0x2070720 00:29:42.992 [2024-10-13 01:40:28.454293] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.992 [2024-10-13 01:40:28.454301] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.992 [2024-10-13 01:40:28.454308] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2070720) 00:29:42.992 [2024-10-13 01:40:28.454318] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.992 [2024-10-13 01:40:28.454338] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c9780, cid 3, qid 0 00:29:42.992 [2024-10-13 01:40:28.454422] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.992 [2024-10-13 01:40:28.454435] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.992 [2024-10-13 01:40:28.454442] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.992 [2024-10-13 01:40:28.454448] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c9780) on tqpair=0x2070720 00:29:42.992 [2024-10-13 01:40:28.454465] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.992 [2024-10-13 01:40:28.454487] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.992 [2024-10-13 01:40:28.454495] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2070720) 00:29:42.992 [2024-10-13 01:40:28.454506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.992 [2024-10-13 01:40:28.454527] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c9780, cid 3, qid 0 00:29:42.992 [2024-10-13 01:40:28.454631] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.992 [2024-10-13 
01:40:28.454645] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.992 [2024-10-13 01:40:28.454651] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.992 [2024-10-13 01:40:28.454658] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c9780) on tqpair=0x2070720 00:29:42.992 [2024-10-13 01:40:28.454674] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.992 [2024-10-13 01:40:28.454683] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.992 [2024-10-13 01:40:28.454689] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2070720) 00:29:42.992 [2024-10-13 01:40:28.454700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.992 [2024-10-13 01:40:28.454720] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c9780, cid 3, qid 0 00:29:42.992 [2024-10-13 01:40:28.454849] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.992 [2024-10-13 01:40:28.454863] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.992 [2024-10-13 01:40:28.454870] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.992 [2024-10-13 01:40:28.454876] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c9780) on tqpair=0x2070720 00:29:42.992 [2024-10-13 01:40:28.454892] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.992 [2024-10-13 01:40:28.454901] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.992 [2024-10-13 01:40:28.454907] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2070720) 00:29:42.992 [2024-10-13 01:40:28.454918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.992 [2024-10-13 01:40:28.454938] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c9780, cid 3, qid 0 00:29:42.992 [2024-10-13 01:40:28.455050] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.992 [2024-10-13 01:40:28.455062] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.992 [2024-10-13 01:40:28.455069] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.992 [2024-10-13 01:40:28.455075] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c9780) on tqpair=0x2070720 00:29:42.992 [2024-10-13 01:40:28.455091] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.992 [2024-10-13 01:40:28.455100] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.992 [2024-10-13 01:40:28.455106] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2070720) 00:29:42.992 [2024-10-13 01:40:28.455116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.992 [2024-10-13 01:40:28.455136] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c9780, cid 3, qid 0 00:29:42.992 [2024-10-13 01:40:28.455215] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.992 [2024-10-13 01:40:28.455227] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.992 [2024-10-13 01:40:28.455234] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.992 
[2024-10-13 01:40:28.455240] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c9780) on tqpair=0x2070720 00:29:42.992 [2024-10-13 01:40:28.455256] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.992 [2024-10-13 01:40:28.455265] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.992 [2024-10-13 01:40:28.455275] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2070720) 00:29:42.992 [2024-10-13 01:40:28.455286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.992 [2024-10-13 01:40:28.455306] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c9780, cid 3, qid 0 00:29:42.992 [2024-10-13 01:40:28.455434] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.992 [2024-10-13 01:40:28.455447] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.992 [2024-10-13 01:40:28.455454] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.992 [2024-10-13 01:40:28.455461] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c9780) on tqpair=0x2070720 00:29:42.992 [2024-10-13 01:40:28.455485] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.992 [2024-10-13 01:40:28.455496] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.992 [2024-10-13 01:40:28.455502] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2070720) 00:29:42.992 [2024-10-13 01:40:28.455512] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.992 [2024-10-13 01:40:28.455534] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c9780, cid 3, qid 0 00:29:42.992 [2024-10-13 01:40:28.455637] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.992 [2024-10-13 01:40:28.455649] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.992 [2024-10-13 01:40:28.455655] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.992 [2024-10-13 01:40:28.455662] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c9780) on tqpair=0x2070720 00:29:42.992 [2024-10-13 01:40:28.455677] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.992 [2024-10-13 01:40:28.455686] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.992 [2024-10-13 01:40:28.455693] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2070720) 00:29:42.993 [2024-10-13 01:40:28.455703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.993 [2024-10-13 01:40:28.455723] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c9780, cid 3, qid 0 00:29:42.993 [2024-10-13 01:40:28.455851] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.993 [2024-10-13 01:40:28.455865] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.993 [2024-10-13 01:40:28.455872] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.993 [2024-10-13 01:40:28.455879] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c9780) on tqpair=0x2070720 00:29:42.993 [2024-10-13 01:40:28.455895] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.993 [2024-10-13 01:40:28.455904] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.993 [2024-10-13 01:40:28.455910] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2070720) 00:29:42.993 [2024-10-13 01:40:28.455920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.993 [2024-10-13 01:40:28.455940] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c9780, cid 3, qid 0 00:29:42.993 [2024-10-13 01:40:28.456029] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.993 [2024-10-13 01:40:28.456043] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.993 [2024-10-13 01:40:28.456050] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.993 [2024-10-13 01:40:28.456056] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c9780) on tqpair=0x2070720 00:29:42.993 [2024-10-13 01:40:28.456073] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.993 [2024-10-13 01:40:28.456082] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.993 [2024-10-13 01:40:28.456088] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2070720) 00:29:42.993 [2024-10-13 01:40:28.456102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.993 [2024-10-13 01:40:28.456123] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c9780, cid 3, qid 0 00:29:42.993 [2024-10-13 01:40:28.456205] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.993 [2024-10-13 01:40:28.456217] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.993 [2024-10-13 01:40:28.456223] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.993 [2024-10-13 01:40:28.456230] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c9780) on tqpair=0x2070720 00:29:42.993 [2024-10-13 01:40:28.456245] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.993 [2024-10-13 01:40:28.456254] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.993 [2024-10-13 01:40:28.456261] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2070720) 00:29:42.993 [2024-10-13 01:40:28.456271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.993 [2024-10-13 01:40:28.456291] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c9780, cid 3, qid 0 00:29:42.993 [2024-10-13 01:40:28.456420] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.993 [2024-10-13 01:40:28.456432] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.993 [2024-10-13 01:40:28.456439] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.993 [2024-10-13 01:40:28.456445] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c9780) on tqpair=0x2070720 00:29:42.993 [2024-10-13 01:40:28.456461] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:42.993 [2024-10-13 01:40:28.460478] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:42.993 [2024-10-13 01:40:28.460491] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2070720) 00:29:42.993 [2024-10-13 01:40:28.460502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.993 [2024-10-13 01:40:28.460524] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20c9780, cid 3, qid 0 00:29:42.993 [2024-10-13 01:40:28.460671] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:42.993 [2024-10-13 01:40:28.460685] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:42.993 [2024-10-13 01:40:28.460692] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:42.993 [2024-10-13 01:40:28.460699] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20c9780) on tqpair=0x2070720 00:29:42.993 [2024-10-13 01:40:28.460711] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:29:42.993 0% 00:29:42.993 Data Units Read: 0 00:29:42.993 Data Units Written: 0 00:29:42.993 Host Read Commands: 0 00:29:42.993 Host Write Commands: 0 00:29:42.993 Controller Busy Time: 0 minutes 00:29:42.993 Power Cycles: 0 00:29:42.993 Power On Hours: 0 hours 00:29:42.993 Unsafe Shutdowns: 0 00:29:42.993 Unrecoverable Media Errors: 0 00:29:42.993 Lifetime Error Log Entries: 0 00:29:42.993 Warning Temperature Time: 0 minutes 00:29:42.993 Critical Temperature Time: 0 minutes 00:29:42.993 00:29:42.993 Number of Queues 00:29:42.993 ================ 00:29:42.993 Number of I/O Submission Queues: 127 00:29:42.993 Number of I/O Completion Queues: 127 00:29:42.993 00:29:42.993 Active Namespaces 00:29:42.993 ================= 00:29:42.993 Namespace ID:1 00:29:42.993 Error Recovery Timeout: Unlimited 00:29:42.993 Command Set Identifier: NVM (00h) 00:29:42.993 Deallocate: Supported 00:29:42.993 Deallocated/Unwritten Error: Not Supported 00:29:42.993 Deallocated Read Value: Unknown 00:29:42.993 Deallocate in Write Zeroes: Not Supported 00:29:42.993 Deallocated Guard Field: 0xFFFF 00:29:42.993 Flush: Supported 00:29:42.993 Reservation: Supported 00:29:42.993 Namespace Sharing Capabilities: Multiple Controllers 00:29:42.993 Size (in LBAs): 131072 (0GiB) 00:29:42.993 Capacity (in LBAs): 131072 (0GiB) 00:29:42.993 Utilization (in LBAs): 131072 (0GiB) 00:29:42.993 NGUID: ABCDEF0123456789ABCDEF0123456789 00:29:42.993 EUI64: ABCDEF0123456789 00:29:42.993 UUID: 2acb38d3-f0f4-4fba-89e9-d53484d03896 00:29:42.993 Thin Provisioning: Not Supported 00:29:42.993 Per-NS Atomic Units: Yes 00:29:42.993 Atomic Boundary Size (Normal): 0 00:29:42.993 Atomic Boundary Size (PFail): 0 00:29:42.993 Atomic Boundary Offset: 0 00:29:42.993 Maximum Single Source Range Length: 65535 00:29:42.993 Maximum Copy Length: 65535 00:29:42.993 Maximum Source Range Count: 1 00:29:42.993 NGUID/EUI64 Never Reused: No 00:29:42.993 Namespace Write Protected: No 00:29:42.993 Number of LBA Formats: 1 00:29:42.993 Current LBA Format: LBA Format #00 00:29:42.993 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:42.993 00:29:42.993 01:40:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:29:42.993 01:40:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:42.993 01:40:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.993 01:40:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set 
+x 00:29:42.993 01:40:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.993 01:40:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:29:42.993 01:40:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:29:42.993 01:40:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:42.993 01:40:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:29:42.993 01:40:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:42.993 01:40:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:29:42.993 01:40:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:42.993 01:40:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:42.993 rmmod nvme_tcp 00:29:42.993 rmmod nvme_fabrics 00:29:42.993 rmmod nvme_keyring 00:29:42.993 01:40:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:42.993 01:40:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:29:42.993 01:40:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:29:42.993 01:40:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@515 -- # '[' -n 1699567 ']' 00:29:42.993 01:40:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # killprocess 1699567 00:29:42.993 01:40:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 1699567 ']' 00:29:42.993 01:40:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 1699567 00:29:42.993 01:40:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:29:42.993 01:40:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:42.993 01:40:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1699567 00:29:43.252 01:40:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:43.252 01:40:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:43.252 01:40:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1699567' 00:29:43.252 killing process with pid 1699567 00:29:43.252 01:40:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 1699567 00:29:43.252 01:40:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 1699567 00:29:43.252 01:40:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:43.252 01:40:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:43.252 01:40:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:43.252 01:40:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:29:43.252 01:40:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-save 00:29:43.252 01:40:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:43.252 01:40:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-restore 00:29:43.252 01:40:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:43.252 01:40:28 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:29:43.252 01:40:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:43.252 01:40:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:43.252 01:40:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:45.782 01:40:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:45.782 00:29:45.782 real 0m5.836s 00:29:45.782 user 0m5.073s 00:29:45.782 sys 0m2.026s 00:29:45.782 01:40:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:45.782 01:40:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:45.782 ************************************ 00:29:45.782 END TEST nvmf_identify 00:29:45.782 ************************************ 00:29:45.782 01:40:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:45.782 01:40:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:45.782 01:40:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:45.782 01:40:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.782 ************************************ 00:29:45.782 START TEST nvmf_perf 00:29:45.782 ************************************ 00:29:45.782 01:40:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:45.782 * Looking for test storage... 00:29:45.782 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:45.782 01:40:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:45.782 01:40:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:29:45.782 01:40:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:45.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.782 --rc genhtml_branch_coverage=1 00:29:45.782 --rc genhtml_function_coverage=1 00:29:45.782 --rc genhtml_legend=1 00:29:45.782 --rc geninfo_all_blocks=1 00:29:45.782 --rc geninfo_unexecuted_blocks=1 00:29:45.782 00:29:45.782 ' 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:45.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.782 --rc genhtml_branch_coverage=1 00:29:45.782 --rc genhtml_function_coverage=1 00:29:45.782 --rc genhtml_legend=1 00:29:45.782 --rc geninfo_all_blocks=1 00:29:45.782 --rc geninfo_unexecuted_blocks=1 00:29:45.782 00:29:45.782 ' 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:45.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.782 --rc genhtml_branch_coverage=1 00:29:45.782 --rc genhtml_function_coverage=1 00:29:45.782 --rc genhtml_legend=1 00:29:45.782 --rc geninfo_all_blocks=1 00:29:45.782 --rc geninfo_unexecuted_blocks=1 00:29:45.782 00:29:45.782 ' 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:45.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.782 --rc genhtml_branch_coverage=1 00:29:45.782 --rc genhtml_function_coverage=1 00:29:45.782 --rc genhtml_legend=1 00:29:45.782 --rc geninfo_all_blocks=1 00:29:45.782 --rc geninfo_unexecuted_blocks=1 00:29:45.782 00:29:45.782 ' 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
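The xtrace entries above walk through the lcov version gate: scripts/common.sh splits each version string on '.', '-' and ':' into an array and compares the fields numerically until one side wins, which decides whether the legacy --rc lcov_* options are exported. A minimal standalone sketch of that comparison, assuming the same semantics as the lt/cmp_versions helpers traced here (function and variable names are illustrative, not the script's own):

    #!/usr/bin/env bash
    # Sketch of the version gate traced above: split on '.', '-', ':' and
    # compare numerically, field by field.
    ver_lt() {
      local -a v1 v2
      IFS='.-:' read -ra v1 <<< "$1"
      IFS='.-:' read -ra v2 <<< "$2"
      local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} )) i
      for ((i = 0; i < n; i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first differing field decides
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1   # equal versions are not "less than"
    }
    # Example mirroring the trace: lcov 1.15 vs 2
    ver_lt 1.15 2 && echo "lcov < 2: keep the legacy --rc lcov_* options"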
00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.782 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:29:45.783 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.783 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:29:45.783 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.783 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:29:45.783 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:45.783 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:45.783 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:45.783 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:45.783 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:45.783 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:45.783 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:45.783 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:45.783 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:45.783 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:45.783 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:45.783 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:45.783 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:45.783 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:29:45.783 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:45.783 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:45.783 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:45.783 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:45.783 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:45.783 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:29:45.783 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:45.783 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:45.783 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:45.783 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:45.783 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:29:45.783 01:40:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:47.685 
01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:47.685 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:47.685 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:47.685 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # 
for pci in "${pci_devs[@]}" 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:47.685 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # is_hw=yes 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:47.685 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:47.686 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
cvl_0_0 up 00:29:47.944 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:47.944 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:47.944 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:47.944 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:47.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:47.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:29:47.944 00:29:47.944 --- 10.0.0.2 ping statistics --- 00:29:47.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.944 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:29:47.944 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:47.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:47.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:29:47.944 00:29:47.944 --- 10.0.0.1 ping statistics --- 00:29:47.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.944 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:29:47.944 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:47.944 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # return 0 00:29:47.944 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:47.944 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:47.944 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:47.944 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:47.944 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:47.944 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:47.944 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:47.944 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:29:47.944 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:47.944 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:47.944 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:47.944 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # nvmfpid=1701650 00:29:47.944 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:47.944 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # waitforlisten 1701650 00:29:47.944 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 1701650 ']' 00:29:47.944 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:47.944 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:47.944 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:47.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:47.944 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:47.944 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:47.944 [2024-10-13 01:40:33.355867] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:29:47.944 [2024-10-13 01:40:33.355951] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:47.944 [2024-10-13 01:40:33.421658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:47.944 [2024-10-13 01:40:33.472667] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:47.944 [2024-10-13 01:40:33.472722] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:47.945 [2024-10-13 01:40:33.472738] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:47.945 [2024-10-13 01:40:33.472751] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:47.945 [2024-10-13 01:40:33.472769] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:47.945 [2024-10-13 01:40:33.474410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:47.945 [2024-10-13 01:40:33.474505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:47.945 [2024-10-13 01:40:33.474545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:47.945 [2024-10-13 01:40:33.474549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:48.203 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:48.203 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:29:48.203 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:48.203 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:48.203 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:48.203 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:48.203 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:48.203 01:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:29:51.482 01:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:29:51.482 01:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:29:51.482 01:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:29:51.482 01:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:51.740 01:40:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 
-- # bdevs=' Malloc0' 00:29:51.740 01:40:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:29:51.740 01:40:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:29:51.740 01:40:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:29:51.740 01:40:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:29:51.998 [2024-10-13 01:40:37.538831] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:51.998 01:40:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:52.563 01:40:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:52.563 01:40:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:52.563 01:40:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:52.563 01:40:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:52.820 01:40:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:53.078 [2024-10-13 01:40:38.646852] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:53.349 01:40:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:53.612 01:40:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:29:53.612 01:40:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:29:53.612 01:40:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:29:53.612 01:40:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:29:54.989 Initializing NVMe Controllers 00:29:54.989 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:29:54.989 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:29:54.989 Initialization complete. Launching workers. 
00:29:54.989 ======================================================== 00:29:54.989 Latency(us) 00:29:54.989 Device Information : IOPS MiB/s Average min max 00:29:54.989 PCIE (0000:88:00.0) NSID 1 from core 0: 84818.04 331.32 376.68 34.90 8290.21 00:29:54.989 ======================================================== 00:29:54.989 Total : 84818.04 331.32 376.68 34.90 8290.21 00:29:54.989 00:29:54.989 01:40:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:56.360 Initializing NVMe Controllers 00:29:56.360 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:56.360 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:56.360 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:56.360 Initialization complete. Launching workers. 00:29:56.360 ======================================================== 00:29:56.360 Latency(us) 00:29:56.360 Device Information : IOPS MiB/s Average min max 00:29:56.360 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 70.00 0.27 14493.91 139.09 44802.82 00:29:56.360 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 41.00 0.16 24499.18 5991.53 47911.94 00:29:56.360 ======================================================== 00:29:56.360 Total : 111.00 0.43 18189.55 139.09 47911.94 00:29:56.360 00:29:56.360 01:40:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:57.735 Initializing NVMe Controllers 00:29:57.735 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:57.735 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:57.735 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:57.735 Initialization complete. Launching workers. 00:29:57.735 ======================================================== 00:29:57.735 Latency(us) 00:29:57.735 Device Information : IOPS MiB/s Average min max 00:29:57.735 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8413.00 32.86 3815.30 655.15 7567.88 00:29:57.735 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3926.00 15.34 8188.63 4828.83 15900.49 00:29:57.735 ======================================================== 00:29:57.735 Total : 12339.00 48.20 5206.80 655.15 15900.49 00:29:57.735 00:29:57.735 01:40:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:29:57.735 01:40:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:29:57.735 01:40:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:00.265 Initializing NVMe Controllers 00:30:00.265 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:00.265 Controller IO queue size 128, less than required. 00:30:00.265 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:30:00.265 Controller IO queue size 128, less than required. 00:30:00.265 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:00.265 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:00.265 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:00.265 Initialization complete. Launching workers. 00:30:00.265 ======================================================== 00:30:00.265 Latency(us) 00:30:00.265 Device Information : IOPS MiB/s Average min max 00:30:00.265 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1750.12 437.53 74656.61 54881.52 121588.59 00:30:00.265 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 563.57 140.89 236155.72 117637.70 378586.84 00:30:00.265 ======================================================== 00:30:00.265 Total : 2313.69 578.42 113994.80 54881.52 378586.84 00:30:00.265 00:30:00.265 01:40:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:30:00.265 No valid NVMe controllers or AIO or URING devices found 00:30:00.265 Initializing NVMe Controllers 00:30:00.265 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:00.265 Controller IO queue size 128, less than required. 00:30:00.265 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:00.265 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:30:00.265 Controller IO queue size 128, less than required. 00:30:00.265 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:00.265 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:30:00.265 WARNING: Some requested NVMe devices were skipped 00:30:00.265 01:40:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:30:02.849 Initializing NVMe Controllers 00:30:02.849 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:02.849 Controller IO queue size 128, less than required. 00:30:02.849 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:02.849 Controller IO queue size 128, less than required. 00:30:02.849 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:02.849 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:02.849 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:02.849 Initialization complete. Launching workers. 
00:30:02.849 00:30:02.849 ==================== 00:30:02.849 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:30:02.849 TCP transport: 00:30:02.849 polls: 6829 00:30:02.849 idle_polls: 3425 00:30:02.849 sock_completions: 3404 00:30:02.849 nvme_completions: 6117 00:30:02.849 submitted_requests: 9130 00:30:02.849 queued_requests: 1 00:30:02.849 00:30:02.849 ==================== 00:30:02.849 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:30:02.849 TCP transport: 00:30:02.849 polls: 12550 00:30:02.849 idle_polls: 9238 00:30:02.849 sock_completions: 3312 00:30:02.849 nvme_completions: 5891 00:30:02.850 submitted_requests: 8774 00:30:02.850 queued_requests: 1 00:30:02.850 ======================================================== 00:30:02.850 Latency(us) 00:30:02.850 Device Information : IOPS MiB/s Average min max 00:30:02.850 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1528.80 382.20 86244.51 53808.56 129777.02 00:30:02.850 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1472.30 368.08 87604.81 42588.80 136989.80 00:30:02.850 ======================================================== 00:30:02.850 Total : 3001.10 750.28 86911.86 42588.80 136989.80 00:30:02.850 00:30:02.850 01:40:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:30:02.850 01:40:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:02.850 01:40:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:30:02.850 01:40:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:30:02.850 01:40:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:30:06.153 01:40:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=245e4e81-b2b1-42f9-abcd-d1bf40d771a0 00:30:06.153 01:40:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 245e4e81-b2b1-42f9-abcd-d1bf40d771a0 00:30:06.153 01:40:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=245e4e81-b2b1-42f9-abcd-d1bf40d771a0 00:30:06.153 01:40:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:30:06.153 01:40:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:30:06.153 01:40:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:30:06.153 01:40:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:06.410 01:40:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:30:06.410 { 00:30:06.410 "uuid": "245e4e81-b2b1-42f9-abcd-d1bf40d771a0", 00:30:06.410 "name": "lvs_0", 00:30:06.410 "base_bdev": "Nvme0n1", 00:30:06.410 "total_data_clusters": 238234, 00:30:06.410 "free_clusters": 238234, 00:30:06.410 "block_size": 512, 00:30:06.410 "cluster_size": 4194304 00:30:06.410 } 00:30:06.410 ]' 00:30:06.410 01:40:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="245e4e81-b2b1-42f9-abcd-d1bf40d771a0") .free_clusters' 00:30:06.668 01:40:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:30:06.668 01:40:51 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="245e4e81-b2b1-42f9-abcd-d1bf40d771a0") .cluster_size' 00:30:06.668 01:40:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:30:06.668 01:40:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:30:06.668 01:40:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 00:30:06.668 952936 00:30:06.668 01:40:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:30:06.668 01:40:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:30:06.668 01:40:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 245e4e81-b2b1-42f9-abcd-d1bf40d771a0 lbd_0 20480 00:30:07.232 01:40:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=28b46926-cd46-4d08-a10b-baf82a891460 00:30:07.233 01:40:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 28b46926-cd46-4d08-a10b-baf82a891460 lvs_n_0 00:30:08.165 01:40:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=94c95065-4769-4031-aff2-2bcfc0529a76 00:30:08.165 01:40:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 94c95065-4769-4031-aff2-2bcfc0529a76 00:30:08.165 01:40:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=94c95065-4769-4031-aff2-2bcfc0529a76 00:30:08.165 01:40:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:30:08.165 01:40:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:30:08.165 01:40:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:30:08.165 01:40:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:08.165 01:40:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:30:08.165 { 00:30:08.165 "uuid": "245e4e81-b2b1-42f9-abcd-d1bf40d771a0", 00:30:08.165 "name": "lvs_0", 00:30:08.165 "base_bdev": "Nvme0n1", 00:30:08.165 "total_data_clusters": 238234, 00:30:08.165 "free_clusters": 233114, 00:30:08.165 "block_size": 512, 00:30:08.165 "cluster_size": 4194304 00:30:08.165 }, 00:30:08.165 { 00:30:08.165 "uuid": "94c95065-4769-4031-aff2-2bcfc0529a76", 00:30:08.165 "name": "lvs_n_0", 00:30:08.165 "base_bdev": "28b46926-cd46-4d08-a10b-baf82a891460", 00:30:08.165 "total_data_clusters": 5114, 00:30:08.165 "free_clusters": 5114, 00:30:08.165 "block_size": 512, 00:30:08.165 "cluster_size": 4194304 00:30:08.165 } 00:30:08.165 ]' 00:30:08.165 01:40:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="94c95065-4769-4031-aff2-2bcfc0529a76") .free_clusters' 00:30:08.165 01:40:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:30:08.165 01:40:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="94c95065-4769-4031-aff2-2bcfc0529a76") .cluster_size' 00:30:08.423 01:40:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:30:08.423 01:40:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:30:08.423 01:40:53 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@1374 -- # echo 20456 00:30:08.423 20456 00:30:08.423 01:40:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:30:08.423 01:40:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 94c95065-4769-4031-aff2-2bcfc0529a76 lbd_nest_0 20456 00:30:08.680 01:40:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=5867126b-dac4-44ac-bbdf-ba14bcb03598 00:30:08.681 01:40:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:08.938 01:40:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:30:08.938 01:40:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 5867126b-dac4-44ac-bbdf-ba14bcb03598 00:30:09.196 01:40:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:09.453 01:40:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:30:09.453 01:40:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:30:09.453 01:40:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:09.453 01:40:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:09.453 01:40:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:21.659 Initializing NVMe Controllers 00:30:21.659 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:21.659 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:21.659 Initialization complete. Launching workers. 00:30:21.659 ======================================================== 00:30:21.659 Latency(us) 00:30:21.659 Device Information : IOPS MiB/s Average min max 00:30:21.659 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 47.10 0.02 21246.95 179.94 45896.73 00:30:21.659 ======================================================== 00:30:21.659 Total : 47.10 0.02 21246.95 179.94 45896.73 00:30:21.659 00:30:21.659 01:41:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:21.659 01:41:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:31.619 Initializing NVMe Controllers 00:30:31.619 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:31.619 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:31.619 Initialization complete. Launching workers. 
00:30:31.619 ======================================================== 00:30:31.619 Latency(us) 00:30:31.619 Device Information : IOPS MiB/s Average min max 00:30:31.619 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 80.10 10.01 12493.19 5060.32 50831.83 00:30:31.619 ======================================================== 00:30:31.619 Total : 80.10 10.01 12493.19 5060.32 50831.83 00:30:31.619 00:30:31.619 01:41:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:31.619 01:41:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:31.619 01:41:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:41.583 Initializing NVMe Controllers 00:30:41.583 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:41.583 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:41.583 Initialization complete. Launching workers. 00:30:41.583 ======================================================== 00:30:41.583 Latency(us) 00:30:41.583 Device Information : IOPS MiB/s Average min max 00:30:41.583 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7484.72 3.65 4282.04 286.10 47879.40 00:30:41.584 ======================================================== 00:30:41.584 Total : 7484.72 3.65 4282.04 286.10 47879.40 00:30:41.584 00:30:41.584 01:41:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:41.584 01:41:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:51.551 Initializing NVMe Controllers 00:30:51.551 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:51.551 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:51.551 Initialization complete. Launching workers. 00:30:51.551 ======================================================== 00:30:51.551 Latency(us) 00:30:51.551 Device Information : IOPS MiB/s Average min max 00:30:51.551 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3425.70 428.21 9341.27 1009.26 19328.06 00:30:51.551 ======================================================== 00:30:51.551 Total : 3425.70 428.21 9341.27 1009.26 19328.06 00:30:51.551 00:30:51.551 01:41:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:51.551 01:41:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:51.551 01:41:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:01.511 Initializing NVMe Controllers 00:31:01.511 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:01.511 Controller IO queue size 128, less than required. 00:31:01.511 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:31:01.511 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:01.511 Initialization complete. Launching workers. 00:31:01.511 ======================================================== 00:31:01.511 Latency(us) 00:31:01.511 Device Information : IOPS MiB/s Average min max 00:31:01.511 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11686.34 5.71 10956.15 1755.74 24259.37 00:31:01.511 ======================================================== 00:31:01.511 Total : 11686.34 5.71 10956.15 1755.74 24259.37 00:31:01.511 00:31:01.511 01:41:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:01.512 01:41:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:11.476 Initializing NVMe Controllers 00:31:11.476 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:11.476 Controller IO queue size 128, less than required. 00:31:11.476 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:11.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:11.476 Initialization complete. Launching workers. 00:31:11.476 ======================================================== 00:31:11.476 Latency(us) 00:31:11.476 Device Information : IOPS MiB/s Average min max 00:31:11.476 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1177.80 147.23 109100.10 23497.03 230776.49 00:31:11.476 ======================================================== 00:31:11.476 Total : 1177.80 147.23 109100.10 23497.03 230776.49 00:31:11.476 00:31:11.476 01:41:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:11.734 01:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5867126b-dac4-44ac-bbdf-ba14bcb03598 00:31:12.665 01:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:12.922 01:41:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 28b46926-cd46-4d08-a10b-baf82a891460 00:31:13.180 01:41:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:13.438 01:41:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:31:13.438 01:41:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:31:13.438 01:41:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:13.438 01:41:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:31:13.438 01:41:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:13.438 01:41:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:31:13.438 01:41:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:13.438 01:41:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:13.438 rmmod nvme_tcp 
00:31:13.438 rmmod nvme_fabrics 00:31:13.438 rmmod nvme_keyring 00:31:13.438 01:41:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:13.438 01:41:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:31:13.438 01:41:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:31:13.438 01:41:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@515 -- # '[' -n 1701650 ']' 00:31:13.438 01:41:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # killprocess 1701650 00:31:13.438 01:41:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 1701650 ']' 00:31:13.438 01:41:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 1701650 00:31:13.438 01:41:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:31:13.438 01:41:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:13.438 01:41:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1701650 00:31:13.438 01:41:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:13.438 01:41:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:13.438 01:41:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1701650' 00:31:13.438 killing process with pid 1701650 00:31:13.438 01:41:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 1701650 00:31:13.438 01:41:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 1701650 00:31:15.337 01:42:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:15.337 01:42:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:15.337 01:42:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:15.337 01:42:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:31:15.337 01:42:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-save 00:31:15.337 01:42:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:15.337 01:42:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-restore 00:31:15.337 01:42:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:15.337 01:42:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:15.337 01:42:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:15.337 01:42:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:15.337 01:42:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:17.239 00:31:17.239 real 1m31.710s 00:31:17.239 user 5m38.500s 00:31:17.239 sys 0m15.500s 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:17.239 ************************************ 00:31:17.239 END TEST nvmf_perf 00:31:17.239 ************************************ 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:17.239 ************************************ 00:31:17.239 START TEST nvmf_fio_host 00:31:17.239 ************************************ 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:17.239 * Looking for test storage... 00:31:17.239 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:17.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:17.239 --rc genhtml_branch_coverage=1 00:31:17.239 --rc genhtml_function_coverage=1 00:31:17.239 --rc genhtml_legend=1 00:31:17.239 --rc geninfo_all_blocks=1 00:31:17.239 --rc geninfo_unexecuted_blocks=1 00:31:17.239 00:31:17.239 ' 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:17.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:17.239 --rc genhtml_branch_coverage=1 00:31:17.239 --rc genhtml_function_coverage=1 00:31:17.239 --rc genhtml_legend=1 00:31:17.239 --rc geninfo_all_blocks=1 00:31:17.239 --rc geninfo_unexecuted_blocks=1 00:31:17.239 00:31:17.239 ' 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:17.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:17.239 --rc genhtml_branch_coverage=1 00:31:17.239 --rc genhtml_function_coverage=1 00:31:17.239 --rc genhtml_legend=1 00:31:17.239 --rc geninfo_all_blocks=1 00:31:17.239 --rc geninfo_unexecuted_blocks=1 00:31:17.239 00:31:17.239 ' 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:17.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:17.239 --rc genhtml_branch_coverage=1 00:31:17.239 --rc genhtml_function_coverage=1 00:31:17.239 --rc genhtml_legend=1 00:31:17.239 --rc geninfo_all_blocks=1 00:31:17.239 --rc geninfo_unexecuted_blocks=1 00:31:17.239 00:31:17.239 ' 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:17.239 01:42:02 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.239 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:17.240 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.240 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:17.240 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:31:17.240 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:17.240 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:17.240 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:31:17.240 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:17.240 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:17.240 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:17.240 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:17.240 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:17.240 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:17.240 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:17.240 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:17.240 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:17.240 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:17.240 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:17.240 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:17.240 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:17.240 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:17.240 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:17.240 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:17.240 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:17.240 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:17.240 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.240 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.240 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.240 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:17.240 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.240 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:31:17.240 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:17.240 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:17.240 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:17.240 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:17.240 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:17.240 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:17.240 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:17.240 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:17.240 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:17.240 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:17.499 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:17.499 
01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:31:17.499 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:17.499 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:17.499 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:17.499 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:17.499 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:17.499 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:17.499 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:17.499 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:17.499 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:17.499 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:17.499 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:31:17.499 01:42:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:19.478 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:19.478 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:19.478 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:19.479 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:19.479 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:19.479 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:19.479 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:19.479 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:19.479 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:19.479 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:19.479 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:19.479 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:19.479 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:19.479 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:19.479 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:19.479 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:19.479 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:19.479 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:19.479 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:19.479 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:19.479 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:19.479 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # is_hw=yes 00:31:19.479 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:19.479 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:19.479 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:19.479 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:19.479 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:19.479 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:19.479 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:19.479 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:19.479 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:19.479 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:19.479 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:19.479 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:19.479 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:19.479 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:19.479 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:19.479 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:19.479 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:19.479 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:19.479 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:19.479 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:19.479 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:19.479 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:19.479 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:19.479 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:19.479 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:19.479 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:19.479 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:19.479 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:31:19.479 00:31:19.479 --- 10.0.0.2 ping statistics --- 00:31:19.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:19.479 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:31:19.479 01:42:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:19.479 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:19.479 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:31:19.479 00:31:19.479 --- 10.0.0.1 ping statistics --- 00:31:19.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:19.479 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:31:19.479 01:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:19.479 01:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # return 0 00:31:19.479 01:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:19.479 01:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:19.479 01:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:19.479 01:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:19.479 01:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:19.479 01:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:19.479 01:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:19.479 01:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:31:19.479 01:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:31:19.479 01:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:19.479 01:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.479 01:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1713747 00:31:19.479 01:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:19.479 01:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:19.479 01:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1713747 00:31:19.479 01:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 1713747 ']' 00:31:19.479 01:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:19.479 01:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:19.479 01:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:19.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:19.479 01:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:19.479 01:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:19.738 [2024-10-13 01:42:05.080356] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
00:31:19.738 [2024-10-13 01:42:05.080446] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:19.738 [2024-10-13 01:42:05.144263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:19.738 [2024-10-13 01:42:05.192595] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:19.738 [2024-10-13 01:42:05.192652] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:19.738 [2024-10-13 01:42:05.192682] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:19.738 [2024-10-13 01:42:05.192693] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:19.738 [2024-10-13 01:42:05.192704] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:19.738 [2024-10-13 01:42:05.194386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:19.738 [2024-10-13 01:42:05.194550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:19.738 [2024-10-13 01:42:05.194579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:19.738 [2024-10-13 01:42:05.194583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:19.995 01:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:19.995 01:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:31:19.995 01:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:20.253 [2024-10-13 01:42:05.620962] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:20.253 01:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:31:20.253 01:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:20.253 01:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:20.253 01:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:31:20.511 Malloc1 00:31:20.511 01:42:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:20.771 01:42:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:21.029 01:42:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:21.287 [2024-10-13 01:42:06.821190] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:21.287 01:42:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:21.545 01:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:21.545 01:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:21.545 01:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:21.545 01:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:21.545 01:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:21.545 01:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:21.545 01:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:21.545 01:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:21.545 01:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:21.545 01:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:21.802 01:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:21.802 01:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:21.802 01:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:21.802 01:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:21.802 01:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:21.802 01:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:21.802 01:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:21.802 01:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:21.802 01:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:21.802 01:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:21.802 01:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:21.802 01:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:21.802 01:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:21.802 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:21.802 fio-3.35 00:31:21.802 Starting 1 thread 00:31:24.330 00:31:24.330 test: (groupid=0, jobs=1): 
err= 0: pid=1714322: Sun Oct 13 01:42:09 2024 00:31:24.330 read: IOPS=8771, BW=34.3MiB/s (35.9MB/s)(68.8MiB/2007msec) 00:31:24.330 slat (nsec): min=1929, max=300239, avg=2668.74, stdev=2858.46 00:31:24.330 clat (usec): min=2531, max=14394, avg=7927.12, stdev=657.52 00:31:24.330 lat (usec): min=2557, max=14396, avg=7929.79, stdev=657.40 00:31:24.330 clat percentiles (usec): 00:31:24.330 | 1.00th=[ 6521], 5.00th=[ 6915], 10.00th=[ 7111], 20.00th=[ 7439], 00:31:24.330 | 30.00th=[ 7635], 40.00th=[ 7767], 50.00th=[ 7963], 60.00th=[ 8094], 00:31:24.330 | 70.00th=[ 8225], 80.00th=[ 8455], 90.00th=[ 8717], 95.00th=[ 8979], 00:31:24.330 | 99.00th=[ 9372], 99.50th=[ 9634], 99.90th=[12125], 99.95th=[13566], 00:31:24.330 | 99.99th=[14353] 00:31:24.330 bw ( KiB/s): min=33656, max=35712, per=100.00%, avg=35084.00, stdev=969.26, samples=4 00:31:24.330 iops : min= 8414, max= 8928, avg=8771.00, stdev=242.31, samples=4 00:31:24.330 write: IOPS=8778, BW=34.3MiB/s (36.0MB/s)(68.8MiB/2007msec); 0 zone resets 00:31:24.330 slat (usec): min=2, max=125, avg= 2.77, stdev= 1.64 00:31:24.330 clat (usec): min=1408, max=12933, avg=6559.75, stdev=557.20 00:31:24.330 lat (usec): min=1416, max=12936, avg=6562.52, stdev=557.16 00:31:24.330 clat percentiles (usec): 00:31:24.330 | 1.00th=[ 5342], 5.00th=[ 5735], 10.00th=[ 5932], 20.00th=[ 6128], 00:31:24.330 | 30.00th=[ 6325], 40.00th=[ 6456], 50.00th=[ 6587], 60.00th=[ 6652], 00:31:24.330 | 70.00th=[ 6783], 80.00th=[ 6980], 90.00th=[ 7177], 95.00th=[ 7373], 00:31:24.330 | 99.00th=[ 7701], 99.50th=[ 7898], 99.90th=[11207], 99.95th=[11863], 00:31:24.330 | 99.99th=[12911] 00:31:24.330 bw ( KiB/s): min=34512, max=35520, per=99.98%, avg=35108.00, stdev=473.62, samples=4 00:31:24.330 iops : min= 8628, max= 8880, avg=8777.00, stdev=118.41, samples=4 00:31:24.330 lat (msec) : 2=0.03%, 4=0.08%, 10=99.69%, 20=0.20% 00:31:24.330 cpu : usr=64.71%, sys=33.45%, ctx=86, majf=0, minf=41 00:31:24.330 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:24.330 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:24.330 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:24.330 issued rwts: total=17604,17619,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:24.330 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:24.330 00:31:24.330 Run status group 0 (all jobs): 00:31:24.330 READ: bw=34.3MiB/s (35.9MB/s), 34.3MiB/s-34.3MiB/s (35.9MB/s-35.9MB/s), io=68.8MiB (72.1MB), run=2007-2007msec 00:31:24.330 WRITE: bw=34.3MiB/s (36.0MB/s), 34.3MiB/s-34.3MiB/s (36.0MB/s-36.0MB/s), io=68.8MiB (72.2MB), run=2007-2007msec 00:31:24.330 01:42:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:24.330 01:42:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:24.330 01:42:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:24.330 01:42:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:24.330 01:42:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # 
local sanitizers 00:31:24.330 01:42:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:24.330 01:42:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:24.330 01:42:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:24.330 01:42:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:24.330 01:42:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:24.330 01:42:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:24.330 01:42:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:24.330 01:42:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:24.330 01:42:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:24.330 01:42:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:24.330 01:42:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:24.330 01:42:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:24.330 01:42:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:24.330 01:42:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:24.330 01:42:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:24.330 01:42:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:24.330 01:42:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:24.588 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:31:24.588 fio-3.35 00:31:24.588 Starting 1 thread 00:31:27.118 00:31:27.118 test: (groupid=0, jobs=1): err= 0: pid=1715067: Sun Oct 13 01:42:12 2024 00:31:27.118 read: IOPS=8248, BW=129MiB/s (135MB/s)(259MiB/2010msec) 00:31:27.118 slat (nsec): min=2788, max=93632, avg=3817.75, stdev=1917.78 00:31:27.118 clat (usec): min=2461, max=16759, avg=8827.20, stdev=1926.12 00:31:27.118 lat (usec): min=2464, max=16764, avg=8831.02, stdev=1926.09 00:31:27.118 clat percentiles (usec): 00:31:27.118 | 1.00th=[ 4817], 5.00th=[ 5735], 10.00th=[ 6390], 20.00th=[ 7242], 00:31:27.118 | 30.00th=[ 7832], 40.00th=[ 8356], 50.00th=[ 8848], 60.00th=[ 9241], 00:31:27.118 | 70.00th=[ 9634], 80.00th=[10290], 90.00th=[11469], 95.00th=[12256], 00:31:27.119 | 99.00th=[13829], 99.50th=[14353], 99.90th=[15401], 99.95th=[15664], 00:31:27.119 | 99.99th=[15926] 00:31:27.119 bw ( KiB/s): min=59456, max=75424, per=51.20%, avg=67576.00, stdev=8024.60, samples=4 00:31:27.119 iops : min= 3716, max= 4714, avg=4223.50, stdev=501.54, samples=4 00:31:27.119 write: IOPS=4918, BW=76.8MiB/s (80.6MB/s)(139MiB/1807msec); 0 zone resets 
00:31:27.119 slat (usec): min=30, max=127, avg=34.37, stdev= 5.82 00:31:27.119 clat (usec): min=3381, max=19524, avg=11784.78, stdev=2014.72 00:31:27.119 lat (usec): min=3412, max=19559, avg=11819.15, stdev=2014.80 00:31:27.119 clat percentiles (usec): 00:31:27.119 | 1.00th=[ 7635], 5.00th=[ 8717], 10.00th=[ 9372], 20.00th=[10028], 00:31:27.119 | 30.00th=[10683], 40.00th=[11207], 50.00th=[11600], 60.00th=[12125], 00:31:27.119 | 70.00th=[12780], 80.00th=[13304], 90.00th=[14484], 95.00th=[15401], 00:31:27.119 | 99.00th=[16909], 99.50th=[17433], 99.90th=[19006], 99.95th=[19268], 00:31:27.119 | 99.99th=[19530] 00:31:27.119 bw ( KiB/s): min=63520, max=78400, per=89.52%, avg=70440.00, stdev=7967.95, samples=4 00:31:27.119 iops : min= 3970, max= 4900, avg=4402.50, stdev=498.00, samples=4 00:31:27.119 lat (msec) : 4=0.18%, 10=56.12%, 20=43.70% 00:31:27.119 cpu : usr=77.30%, sys=21.35%, ctx=49, majf=0, minf=71 00:31:27.119 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:31:27.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:27.119 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:27.119 issued rwts: total=16580,8887,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:27.119 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:27.119 00:31:27.119 Run status group 0 (all jobs): 00:31:27.119 READ: bw=129MiB/s (135MB/s), 129MiB/s-129MiB/s (135MB/s-135MB/s), io=259MiB (272MB), run=2010-2010msec 00:31:27.119 WRITE: bw=76.8MiB/s (80.6MB/s), 76.8MiB/s-76.8MiB/s (80.6MB/s-80.6MB/s), io=139MiB (146MB), run=1807-1807msec 00:31:27.119 01:42:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:27.119 01:42:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:31:27.119 01:42:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:31:27.119 01:42:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:31:27.119 01:42:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # bdfs=() 00:31:27.119 01:42:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # local bdfs 00:31:27.119 01:42:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:27.119 01:42:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:27.119 01:42:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:31:27.119 01:42:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:31:27.119 01:42:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:88:00.0 00:31:27.119 01:42:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:31:30.399 Nvme0n1 00:31:30.399 01:42:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:31:33.679 01:42:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=12a9e325-3fa2-4436-98c8-53c6ef1274cc 
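The free-space figure derived just below comes from bdev_lvol_get_lvstores plus two jq lookups. A minimal bash sketch of that arithmetic, assuming the rpc.py path and the lvs_0 UUID (12a9e325-3fa2-4436-98c8-53c6ef1274cc) seen in this trace:

#!/usr/bin/env bash
# Sketch of the get_lvs_free_mb step: query the lvol stores, pull free_clusters and
# cluster_size for the target UUID, then convert the free space to MiB.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
uuid=12a9e325-3fa2-4436-98c8-53c6ef1274cc

lvs_info=$("$rpc" bdev_lvol_get_lvstores)
fc=$(jq ".[] | select(.uuid==\"$uuid\") .free_clusters" <<<"$lvs_info")
cs=$(jq ".[] | select(.uuid==\"$uuid\") .cluster_size" <<<"$lvs_info")

# free MiB = free_clusters * (cluster_size / 1 MiB); with the values in this run
# that is 930 * (1073741824 / 1048576) = 930 * 1024 = 952320.
echo $(( fc * (cs / 1048576) ))

With the 930 free clusters of 1 GiB reported here this works out to 952320 MiB, which is the size passed to bdev_lvol_create for lvs_0/lbd_0 a few lines further on.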
00:31:33.679 01:42:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 12a9e325-3fa2-4436-98c8-53c6ef1274cc 00:31:33.679 01:42:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=12a9e325-3fa2-4436-98c8-53c6ef1274cc 00:31:33.679 01:42:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:31:33.679 01:42:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:31:33.679 01:42:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:31:33.680 01:42:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:33.680 01:42:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:31:33.680 { 00:31:33.680 "uuid": "12a9e325-3fa2-4436-98c8-53c6ef1274cc", 00:31:33.680 "name": "lvs_0", 00:31:33.680 "base_bdev": "Nvme0n1", 00:31:33.680 "total_data_clusters": 930, 00:31:33.680 "free_clusters": 930, 00:31:33.680 "block_size": 512, 00:31:33.680 "cluster_size": 1073741824 00:31:33.680 } 00:31:33.680 ]' 00:31:33.680 01:42:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="12a9e325-3fa2-4436-98c8-53c6ef1274cc") .free_clusters' 00:31:33.680 01:42:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:31:33.680 01:42:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="12a9e325-3fa2-4436-98c8-53c6ef1274cc") .cluster_size' 00:31:33.680 01:42:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:31:33.680 01:42:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=952320 00:31:33.680 01:42:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:31:33.680 952320 00:31:33.680 01:42:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:31:33.937 f1541931-3809-4de4-bd9b-75cf4a482b6d 00:31:33.937 01:42:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:31:34.195 01:42:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:31:34.453 01:42:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:34.710 01:42:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:34.710 01:42:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:34.710 01:42:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # 
local fio_dir=/usr/src/fio 00:31:34.968 01:42:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:34.968 01:42:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:34.968 01:42:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:34.968 01:42:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:34.968 01:42:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:34.968 01:42:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:34.968 01:42:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:34.968 01:42:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:34.968 01:42:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:34.968 01:42:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:34.968 01:42:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:34.968 01:42:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:34.968 01:42:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:34.968 01:42:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:34.968 01:42:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:34.968 01:42:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:34.968 01:42:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:34.968 01:42:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:34.968 01:42:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:34.968 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:34.968 fio-3.35 00:31:34.968 Starting 1 thread 00:31:37.495 00:31:37.495 test: (groupid=0, jobs=1): err= 0: pid=1716345: Sun Oct 13 01:42:23 2024 00:31:37.495 read: IOPS=5372, BW=21.0MiB/s (22.0MB/s)(43.0MiB/2049msec) 00:31:37.495 slat (usec): min=2, max=167, avg= 2.80, stdev= 2.62 00:31:37.495 clat (usec): min=1018, max=171723, avg=13083.00, stdev=12649.25 00:31:37.495 lat (usec): min=1021, max=171765, avg=13085.80, stdev=12649.58 00:31:37.495 clat percentiles (msec): 00:31:37.495 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 11], 00:31:37.495 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 13], 00:31:37.495 | 70.00th=[ 13], 80.00th=[ 13], 90.00th=[ 14], 95.00th=[ 14], 00:31:37.495 | 99.00th=[ 56], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:31:37.495 | 99.99th=[ 171] 00:31:37.495 bw ( KiB/s): min=15352, max=24032, 
per=100.00%, avg=21852.00, stdev=4333.34, samples=4 00:31:37.495 iops : min= 3838, max= 6008, avg=5463.00, stdev=1083.34, samples=4 00:31:37.495 write: IOPS=5341, BW=20.9MiB/s (21.9MB/s)(42.8MiB/2049msec); 0 zone resets 00:31:37.495 slat (usec): min=2, max=144, avg= 2.88, stdev= 2.15 00:31:37.495 clat (usec): min=349, max=169628, avg=10610.80, stdev=11702.88 00:31:37.495 lat (usec): min=352, max=169635, avg=10613.68, stdev=11703.24 00:31:37.495 clat percentiles (msec): 00:31:37.495 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 9], 00:31:37.495 | 30.00th=[ 10], 40.00th=[ 10], 50.00th=[ 10], 60.00th=[ 10], 00:31:37.495 | 70.00th=[ 11], 80.00th=[ 11], 90.00th=[ 11], 95.00th=[ 12], 00:31:37.495 | 99.00th=[ 13], 99.50th=[ 155], 99.90th=[ 169], 99.95th=[ 169], 00:31:37.495 | 99.99th=[ 169] 00:31:37.495 bw ( KiB/s): min=16296, max=23808, per=100.00%, avg=21802.00, stdev=3673.02, samples=4 00:31:37.495 iops : min= 4074, max= 5952, avg=5450.50, stdev=918.26, samples=4 00:31:37.495 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:31:37.495 lat (msec) : 2=0.03%, 4=0.10%, 10=35.37%, 20=63.32%, 50=0.07% 00:31:37.495 lat (msec) : 100=0.51%, 250=0.58% 00:31:37.495 cpu : usr=53.76%, sys=44.73%, ctx=98, majf=0, minf=41 00:31:37.495 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:31:37.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.495 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:37.495 issued rwts: total=11009,10945,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.495 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:37.495 00:31:37.495 Run status group 0 (all jobs): 00:31:37.495 READ: bw=21.0MiB/s (22.0MB/s), 21.0MiB/s-21.0MiB/s (22.0MB/s-22.0MB/s), io=43.0MiB (45.1MB), run=2049-2049msec 00:31:37.495 WRITE: bw=20.9MiB/s (21.9MB/s), 20.9MiB/s-20.9MiB/s (21.9MB/s-21.9MB/s), io=42.8MiB (44.8MB), run=2049-2049msec 00:31:37.753 01:42:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:38.011 01:42:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:31:38.943 01:42:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=af798041-a685-45cc-a1eb-1211bf94e453 00:31:38.943 01:42:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb af798041-a685-45cc-a1eb-1211bf94e453 00:31:38.943 01:42:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=af798041-a685-45cc-a1eb-1211bf94e453 00:31:38.943 01:42:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:31:38.943 01:42:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:31:38.943 01:42:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:31:38.943 01:42:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:39.508 01:42:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:31:39.508 { 00:31:39.508 "uuid": "12a9e325-3fa2-4436-98c8-53c6ef1274cc", 00:31:39.508 "name": "lvs_0", 00:31:39.508 "base_bdev": "Nvme0n1", 00:31:39.508 "total_data_clusters": 930, 00:31:39.508 
"free_clusters": 0, 00:31:39.508 "block_size": 512, 00:31:39.508 "cluster_size": 1073741824 00:31:39.508 }, 00:31:39.508 { 00:31:39.508 "uuid": "af798041-a685-45cc-a1eb-1211bf94e453", 00:31:39.508 "name": "lvs_n_0", 00:31:39.508 "base_bdev": "f1541931-3809-4de4-bd9b-75cf4a482b6d", 00:31:39.508 "total_data_clusters": 237847, 00:31:39.508 "free_clusters": 237847, 00:31:39.508 "block_size": 512, 00:31:39.508 "cluster_size": 4194304 00:31:39.508 } 00:31:39.508 ]' 00:31:39.509 01:42:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="af798041-a685-45cc-a1eb-1211bf94e453") .free_clusters' 00:31:39.509 01:42:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:31:39.509 01:42:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="af798041-a685-45cc-a1eb-1211bf94e453") .cluster_size' 00:31:39.509 01:42:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:31:39.509 01:42:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:31:39.509 01:42:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:31:39.509 951388 00:31:39.509 01:42:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:31:40.074 e49f28a9-8c84-47ae-8aa9-6c8c4637dc4c 00:31:40.074 01:42:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:31:40.332 01:42:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:31:40.589 01:42:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:31:40.847 01:42:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:40.848 01:42:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:40.848 01:42:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:40.848 01:42:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:40.848 01:42:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:40.848 01:42:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:40.848 01:42:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:31:40.848 01:42:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:40.848 01:42:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:40.848 01:42:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:40.848 01:42:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:31:40.848 01:42:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:40.848 01:42:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:40.848 01:42:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:40.848 01:42:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:40.848 01:42:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:40.848 01:42:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:40.848 01:42:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:40.848 01:42:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:40.848 01:42:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:40.848 01:42:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:40.848 01:42:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:41.106 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:41.106 fio-3.35 00:31:41.106 Starting 1 thread 00:31:43.642 00:31:43.642 test: (groupid=0, jobs=1): err= 0: pid=1717082: Sun Oct 13 01:42:28 2024 00:31:43.642 read: IOPS=5702, BW=22.3MiB/s (23.4MB/s)(44.8MiB/2011msec) 00:31:43.642 slat (usec): min=2, max=153, avg= 2.64, stdev= 2.07 00:31:43.642 clat (usec): min=4582, max=19841, avg=12230.15, stdev=1110.22 00:31:43.642 lat (usec): min=4610, max=19843, avg=12232.79, stdev=1110.11 00:31:43.642 clat percentiles (usec): 00:31:43.642 | 1.00th=[ 9765], 5.00th=[10421], 10.00th=[10945], 20.00th=[11338], 00:31:43.642 | 30.00th=[11600], 40.00th=[11994], 50.00th=[12256], 60.00th=[12518], 00:31:43.642 | 70.00th=[12780], 80.00th=[13173], 90.00th=[13566], 95.00th=[14091], 00:31:43.642 | 99.00th=[14746], 99.50th=[15008], 99.90th=[18220], 99.95th=[18220], 00:31:43.642 | 99.99th=[18482] 00:31:43.642 bw ( KiB/s): min=21320, max=23552, per=100.00%, avg=22818.00, stdev=1024.42, samples=4 00:31:43.642 iops : min= 5330, max= 5888, avg=5704.50, stdev=256.11, samples=4 00:31:43.642 write: IOPS=5684, BW=22.2MiB/s (23.3MB/s)(44.7MiB/2011msec); 0 zone resets 00:31:43.642 slat (usec): min=2, max=119, avg= 2.74, stdev= 1.59 00:31:43.642 clat (usec): min=2204, max=18611, avg=10031.76, stdev=961.54 00:31:43.642 lat (usec): min=2211, max=18614, avg=10034.50, stdev=961.51 00:31:43.642 clat percentiles (usec): 00:31:43.642 | 1.00th=[ 7898], 5.00th=[ 8586], 10.00th=[ 8979], 20.00th=[ 9372], 00:31:43.642 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10159], 00:31:43.642 | 70.00th=[10421], 80.00th=[10683], 
90.00th=[11076], 95.00th=[11338], 00:31:43.642 | 99.00th=[12125], 99.50th=[12518], 99.90th=[17957], 99.95th=[18482], 00:31:43.642 | 99.99th=[18482] 00:31:43.642 bw ( KiB/s): min=22360, max=22976, per=99.94%, avg=22726.00, stdev=280.45, samples=4 00:31:43.642 iops : min= 5590, max= 5744, avg=5681.50, stdev=70.11, samples=4 00:31:43.642 lat (msec) : 4=0.05%, 10=25.38%, 20=74.58% 00:31:43.642 cpu : usr=62.49%, sys=36.07%, ctx=108, majf=0, minf=41 00:31:43.642 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:31:43.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.642 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:43.642 issued rwts: total=11468,11432,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.642 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:43.642 00:31:43.642 Run status group 0 (all jobs): 00:31:43.642 READ: bw=22.3MiB/s (23.4MB/s), 22.3MiB/s-22.3MiB/s (23.4MB/s-23.4MB/s), io=44.8MiB (47.0MB), run=2011-2011msec 00:31:43.642 WRITE: bw=22.2MiB/s (23.3MB/s), 22.2MiB/s-22.2MiB/s (23.3MB/s-23.3MB/s), io=44.7MiB (46.8MB), run=2011-2011msec 00:31:43.642 01:42:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:31:43.900 01:42:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:31:43.900 01:42:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:31:48.078 01:42:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:48.078 01:42:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:31:51.357 01:42:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:51.357 01:42:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:31:53.255 01:42:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:53.255 01:42:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:31:53.255 01:42:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:31:53.255 01:42:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:53.255 01:42:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:31:53.255 01:42:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:53.255 01:42:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:31:53.255 01:42:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:53.255 01:42:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:53.255 rmmod nvme_tcp 00:31:53.255 rmmod nvme_fabrics 00:31:53.255 rmmod nvme_keyring 00:31:53.255 01:42:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:53.255 01:42:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:31:53.255 01:42:38 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:31:53.255 01:42:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@515 -- # '[' -n 1713747 ']' 00:31:53.255 01:42:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # killprocess 1713747 00:31:53.255 01:42:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 1713747 ']' 00:31:53.255 01:42:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 1713747 00:31:53.255 01:42:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:31:53.255 01:42:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:53.255 01:42:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1713747 00:31:53.255 01:42:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:53.255 01:42:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:53.255 01:42:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1713747' 00:31:53.255 killing process with pid 1713747 00:31:53.255 01:42:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 1713747 00:31:53.255 01:42:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 1713747 00:31:53.255 01:42:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:53.255 01:42:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:53.255 01:42:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:53.255 01:42:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:31:53.255 01:42:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-save 00:31:53.255 01:42:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:53.255 01:42:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-restore 00:31:53.255 01:42:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:53.255 01:42:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:53.255 01:42:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:53.255 01:42:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:53.255 01:42:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:55.787 01:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:55.787 00:31:55.787 real 0m38.183s 00:31:55.787 user 2m27.072s 00:31:55.787 sys 0m7.141s 00:31:55.787 01:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:55.787 01:42:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.787 ************************************ 00:31:55.787 END TEST nvmf_fio_host 00:31:55.787 ************************************ 00:31:55.787 01:42:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:55.787 01:42:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 
1 ']' 00:31:55.787 01:42:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:55.787 01:42:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.787 ************************************ 00:31:55.787 START TEST nvmf_failover 00:31:55.787 ************************************ 00:31:55.787 01:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:55.787 * Looking for test storage... 00:31:55.787 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:55.787 01:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:55.787 01:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:31:55.787 01:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:55.787 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:55.787 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:55.787 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:55.787 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:55.787 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:31:55.787 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:31:55.787 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:31:55.787 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:31:55.787 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:31:55.787 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:31:55.787 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:31:55.787 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:55.787 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:31:55.787 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:31:55.787 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:55.787 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:55.787 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:31:55.787 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:31:55.787 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:55.787 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:31:55.787 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:31:55.787 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:31:55.787 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:31:55.787 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:55.787 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:31:55.787 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:31:55.787 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:55.787 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:55.787 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:31:55.787 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:55.787 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:55.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:55.787 --rc genhtml_branch_coverage=1 00:31:55.787 --rc genhtml_function_coverage=1 00:31:55.787 --rc genhtml_legend=1 00:31:55.787 --rc geninfo_all_blocks=1 00:31:55.787 --rc geninfo_unexecuted_blocks=1 00:31:55.787 00:31:55.787 ' 00:31:55.787 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:55.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:55.787 --rc genhtml_branch_coverage=1 00:31:55.787 --rc genhtml_function_coverage=1 00:31:55.787 --rc genhtml_legend=1 00:31:55.787 --rc geninfo_all_blocks=1 00:31:55.787 --rc geninfo_unexecuted_blocks=1 00:31:55.787 00:31:55.787 ' 00:31:55.787 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:55.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:55.787 --rc genhtml_branch_coverage=1 00:31:55.787 --rc genhtml_function_coverage=1 00:31:55.787 --rc genhtml_legend=1 00:31:55.787 --rc geninfo_all_blocks=1 00:31:55.787 --rc geninfo_unexecuted_blocks=1 00:31:55.787 00:31:55.787 ' 00:31:55.787 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:55.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:55.787 --rc genhtml_branch_coverage=1 00:31:55.787 --rc genhtml_function_coverage=1 00:31:55.787 --rc genhtml_legend=1 00:31:55.787 --rc geninfo_all_blocks=1 00:31:55.787 --rc geninfo_unexecuted_blocks=1 00:31:55.787 00:31:55.787 ' 00:31:55.787 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:55.787 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:31:55.787 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:55.787 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:55.787 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:55.787 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:55.787 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:55.787 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:55.787 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:55.787 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:55.787 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:55.787 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:55.787 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:55.787 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:55.787 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:55.787 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:55.787 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:55.787 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:55.788 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:55.788 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:31:55.788 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:55.788 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:55.788 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:55.788 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.788 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.788 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.788 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:31:55.788 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.788 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:31:55.788 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:55.788 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:55.788 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:55.788 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:55.788 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:55.788 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:55.788 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:55.788 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:55.788 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:55.788 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:55.788 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:55.788 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:55.788 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
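Condensed from the trace that follows, the target-side setup this failover test drives through $rpc_py is roughly the sequence below; a sketch only, using the same bdev name, NQN, and ports that appear later in the log:

    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512 -b Malloc0        # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do
        # one listener per port so the initiator has multiple paths to fail over between
        $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
    done
    # bdevperf then attaches with "-x failover", and the test removes listeners
    # (port 4420 first) to force path switches while verify I/O is running.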
00:31:55.788 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:55.788 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:31:55.788 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:55.788 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:55.788 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:55.788 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:55.788 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:55.788 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:55.788 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:55.788 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:55.788 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:55.788 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:55.788 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:31:55.788 01:42:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:57.689 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:57.689 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci 
in "${pci_devs[@]}" 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:57.689 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:57.689 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:57.690 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:57.690 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:57.690 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:57.690 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:57.690 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:57.690 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:57.690 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:57.690 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:57.690 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:57.690 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:57.690 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:57.690 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # is_hw=yes 00:31:57.690 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:57.690 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:57.690 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:57.690 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:57.690 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:57.690 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:57.690 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:57.690 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:57.690 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:57.690 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:57.690 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:57.690 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:31:57.690 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:57.690 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:57.690 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:57.690 01:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:57.690 01:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:57.690 01:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:57.690 01:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:57.690 01:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:57.690 01:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:57.690 01:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:57.690 01:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:57.690 01:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:57.690 01:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:57.690 01:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:57.690 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:57.690 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:31:57.690 00:31:57.690 --- 10.0.0.2 ping statistics --- 00:31:57.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:57.690 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:31:57.690 01:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:57.690 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:57.690 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:31:57.690 00:31:57.690 --- 10.0.0.1 ping statistics --- 00:31:57.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:57.690 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:31:57.690 01:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:57.690 01:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # return 0 00:31:57.690 01:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:57.690 01:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:57.690 01:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:57.690 01:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:57.690 01:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:57.690 01:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:57.690 01:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:57.690 01:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:31:57.690 01:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:57.690 01:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:57.690 01:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:57.690 01:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # nvmfpid=1720457 00:31:57.690 01:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:57.690 01:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # waitforlisten 1720457 00:31:57.690 01:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1720457 ']' 00:31:57.690 01:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:57.690 01:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:57.690 01:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:57.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:57.690 01:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:57.690 01:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:57.690 [2024-10-13 01:42:43.195876] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:31:57.690 [2024-10-13 01:42:43.195953] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:57.690 [2024-10-13 01:42:43.261655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:57.976 [2024-10-13 01:42:43.308998] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:31:57.976 [2024-10-13 01:42:43.309051] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:57.976 [2024-10-13 01:42:43.309079] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:57.976 [2024-10-13 01:42:43.309092] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:57.976 [2024-10-13 01:42:43.309109] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:57.976 [2024-10-13 01:42:43.310589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:57.976 [2024-10-13 01:42:43.310642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:57.976 [2024-10-13 01:42:43.310646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:57.976 01:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:57.976 01:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:31:57.976 01:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:57.976 01:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:57.976 01:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:57.976 01:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:57.976 01:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:58.263 [2024-10-13 01:42:43.752754] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:58.263 01:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:58.829 Malloc0 00:31:58.829 01:42:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:59.086 01:42:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:59.344 01:42:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:59.602 [2024-10-13 01:42:45.043238] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:59.602 01:42:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:59.860 [2024-10-13 01:42:45.332284] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:59.860 01:42:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:00.118 [2024-10-13 01:42:45.597103] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:32:00.118 01:42:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1720755 00:32:00.118 01:42:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:32:00.118 01:42:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:00.118 01:42:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1720755 /var/tmp/bdevperf.sock 00:32:00.118 01:42:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1720755 ']' 00:32:00.118 01:42:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:00.118 01:42:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:00.118 01:42:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:00.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:00.118 01:42:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:00.118 01:42:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:00.376 01:42:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:00.376 01:42:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:32:00.376 01:42:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:00.945 NVMe0n1 00:32:00.945 01:42:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:01.202 00:32:01.202 01:42:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1720887 00:32:01.202 01:42:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:01.202 01:42:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:32:02.136 01:42:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:02.394 [2024-10-13 01:42:47.956746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f0120 is same with the state(6) to be set 00:32:02.394 [2024-10-13 01:42:47.956864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f0120 is same with the state(6) to be set 00:32:02.394 [2024-10-13 01:42:47.956881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f0120 is same with the state(6) to be set 00:32:02.394 
[2024-10-13 01:42:47.956895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f0120 is same with the state(6) to be set 00:32:02.394 [2024-10-13 01:42:47.956908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f0120 is same with the state(6) to be set 00:32:02.394 [2024-10-13 01:42:47.956921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f0120 is same with the state(6) to be set 00:32:02.394 [2024-10-13 01:42:47.956933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f0120 is same with the state(6) to be set 00:32:02.394 [2024-10-13 01:42:47.956946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f0120 is same with the state(6) to be set 00:32:02.651 01:42:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:32:05.930 01:42:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:05.930 00:32:06.189 01:42:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:06.189 [2024-10-13 01:42:51.765029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f1660 is same with the state(6) to be set 00:32:06.189 [2024-10-13 01:42:51.765094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f1660 is same with the state(6) to be set 00:32:06.189 [2024-10-13 01:42:51.765110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f1660 is same with the state(6) to be set 00:32:06.189 [2024-10-13 01:42:51.765123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f1660 is same with the state(6) to be set 00:32:06.189 [2024-10-13 01:42:51.765136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f1660 is same with the state(6) to be set 00:32:06.189 [2024-10-13 01:42:51.765161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f1660 is same with the state(6) to be set 00:32:06.189 [2024-10-13 01:42:51.765174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f1660 is same with the state(6) to be set 00:32:06.189 [2024-10-13 01:42:51.765187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f1660 is same with the state(6) to be set 00:32:06.189 [2024-10-13 01:42:51.765200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f1660 is same with the state(6) to be set 00:32:06.189 [2024-10-13 01:42:51.765212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f1660 is same with the state(6) to be set 00:32:06.189 [2024-10-13 01:42:51.765224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f1660 is same with the state(6) to be set 00:32:06.189 [2024-10-13 01:42:51.765236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f1660 is same with the state(6) to be set 00:32:06.189 [2024-10-13 01:42:51.765249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f1660 is same with the state(6) 
to be set 00:32:06.189 [2024-10-13 01:42:51.765261] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f1660 is same with the state(6) to be set 00:32:06.189 [2024-10-13 01:42:51.765273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f1660 is same with the state(6) to be set 00:32:06.189 [2024-10-13 01:42:51.765285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f1660 is same with the state(6) to be set 00:32:06.189 [2024-10-13 01:42:51.765298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f1660 is same with the state(6) to be set 00:32:06.189 [2024-10-13 01:42:51.765310] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f1660 is same with the state(6) to be set 00:32:06.189 [2024-10-13 01:42:51.765323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f1660 is same with the state(6) to be set 00:32:06.189 [2024-10-13 01:42:51.765335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f1660 is same with the state(6) to be set 00:32:06.189 [2024-10-13 01:42:51.765347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f1660 is same with the state(6) to be set 00:32:06.189 [2024-10-13 01:42:51.765359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f1660 is same with the state(6) to be set 00:32:06.189 [2024-10-13 01:42:51.765371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f1660 is same with the state(6) to be set 00:32:06.189 [2024-10-13 01:42:51.765383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f1660 is same with the state(6) to be set 00:32:06.189 [2024-10-13 01:42:51.765395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f1660 is same with the state(6) to be set 00:32:06.189 [2024-10-13 01:42:51.765407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f1660 is same with the state(6) to be set 00:32:06.189 [2024-10-13 01:42:51.765419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f1660 is same with the state(6) to be set 00:32:06.189 [2024-10-13 01:42:51.765431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f1660 is same with the state(6) to be set 00:32:06.189 [2024-10-13 01:42:51.765443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f1660 is same with the state(6) to be set 00:32:06.189 [2024-10-13 01:42:51.765455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f1660 is same with the state(6) to be set 00:32:06.189 [2024-10-13 01:42:51.765467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f1660 is same with the state(6) to be set 00:32:06.189 [2024-10-13 01:42:51.765490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f1660 is same with the state(6) to be set 00:32:06.189 [2024-10-13 01:42:51.765507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f1660 is same with the state(6) to be set 00:32:06.189 [2024-10-13 01:42:51.765521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f1660 is same with the state(6) to be set 00:32:06.189 [2024-10-13 01:42:51.765535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x22f1660 is same with the state(6) to be set 00:32:06.189 [2024-10-13 01:42:51.765548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f1660 is same with the state(6) to be set 00:32:06.189 [2024-10-13 01:42:51.765561] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22f1660 is same with the state(6) to be set 00:32:06.447 01:42:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:32:09.730 01:42:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:09.730 [2024-10-13 01:42:55.096104] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:09.730 01:42:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:32:10.663 01:42:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:10.921 01:42:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1720887 00:32:17.483 { 00:32:17.483 "results": [ 00:32:17.483 { 00:32:17.483 "job": "NVMe0n1", 00:32:17.483 "core_mask": "0x1", 00:32:17.483 "workload": "verify", 00:32:17.483 "status": "finished", 00:32:17.483 "verify_range": { 00:32:17.483 "start": 0, 00:32:17.483 "length": 16384 00:32:17.483 }, 00:32:17.483 "queue_depth": 128, 00:32:17.483 "io_size": 4096, 00:32:17.483 "runtime": 15.00447, 00:32:17.483 "iops": 8338.515122493496, 00:32:17.483 "mibps": 32.57232469724022, 00:32:17.483 "io_failed": 8821, 00:32:17.483 "io_timeout": 0, 00:32:17.483 "avg_latency_us": 14308.143801572447, 00:32:17.483 "min_latency_us": 555.2355555555556, 00:32:17.483 "max_latency_us": 22330.785185185185 00:32:17.483 } 00:32:17.483 ], 00:32:17.483 "core_count": 1 00:32:17.483 } 00:32:17.483 01:43:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1720755 00:32:17.483 01:43:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1720755 ']' 00:32:17.483 01:43:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1720755 00:32:17.483 01:43:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:32:17.483 01:43:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:17.483 01:43:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1720755 00:32:17.483 01:43:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:17.483 01:43:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:17.483 01:43:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1720755' 00:32:17.483 killing process with pid 1720755 00:32:17.483 01:43:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1720755 00:32:17.483 01:43:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1720755 00:32:17.483 01:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:17.483 [2024-10-13 01:42:45.662427] Starting SPDK v25.01-pre git sha1 
bbce7a874 / DPDK 22.11.4 initialization... 00:32:17.483 [2024-10-13 01:42:45.662534] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1720755 ] 00:32:17.483 [2024-10-13 01:42:45.721900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:17.483 [2024-10-13 01:42:45.769323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:17.483 Running I/O for 15 seconds... 00:32:17.483 7982.00 IOPS, 31.18 MiB/s [2024-10-12T23:43:03.061Z] [2024-10-13 01:42:47.959303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:71968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.483 [2024-10-13 01:42:47.959341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.483 [2024-10-13 01:42:47.959383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:71976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.483 [2024-10-13 01:42:47.959398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.483 [2024-10-13 01:42:47.959415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:71984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.483 [2024-10-13 01:42:47.959429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.483 [2024-10-13 01:42:47.959444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:71992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.483 [2024-10-13 01:42:47.959457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.483 [2024-10-13 01:42:47.959482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:72000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.483 [2024-10-13 01:42:47.959498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.483 [2024-10-13 01:42:47.959513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:72008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.483 [2024-10-13 01:42:47.959526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.483 [2024-10-13 01:42:47.959541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:72016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.483 [2024-10-13 01:42:47.959554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.483 [2024-10-13 01:42:47.959569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:72024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.483 [2024-10-13 01:42:47.959582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.483 [2024-10-13 01:42:47.959596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:57 nsid:1 lba:72032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.483 [2024-10-13 01:42:47.959609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.483 [2024-10-13 01:42:47.959624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:72040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.483 [2024-10-13 01:42:47.959637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.483 [2024-10-13 01:42:47.959652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:72048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.483 [2024-10-13 01:42:47.959665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.483 [2024-10-13 01:42:47.959696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:72056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.483 [2024-10-13 01:42:47.959711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.484 [2024-10-13 01:42:47.959725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:72064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.484 [2024-10-13 01:42:47.959738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.484 [2024-10-13 01:42:47.959753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:72072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.484 [2024-10-13 01:42:47.959766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.484 [2024-10-13 01:42:47.959780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:72080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.484 [2024-10-13 01:42:47.959794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.484 [2024-10-13 01:42:47.959808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:72088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.484 [2024-10-13 01:42:47.959821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.484 [2024-10-13 01:42:47.959835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:72096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.484 [2024-10-13 01:42:47.959848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.484 [2024-10-13 01:42:47.959863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:72104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.484 [2024-10-13 01:42:47.959876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.484 [2024-10-13 01:42:47.959890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:72112 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.484 [2024-10-13 01:42:47.959903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.484 [2024-10-13 01:42:47.959917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:72120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.484 [2024-10-13 01:42:47.959930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.484 [2024-10-13 01:42:47.959944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:72128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.484 [2024-10-13 01:42:47.959957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.484 [2024-10-13 01:42:47.959971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:72136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.484 [2024-10-13 01:42:47.959984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.484 [2024-10-13 01:42:47.959998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:72144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.484 [2024-10-13 01:42:47.960011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.484 [2024-10-13 01:42:47.960025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:72152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.484 [2024-10-13 01:42:47.960042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.484 [2024-10-13 01:42:47.960058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:72160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.484 [2024-10-13 01:42:47.960072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.484 [2024-10-13 01:42:47.960086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.484 [2024-10-13 01:42:47.960099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.484 [2024-10-13 01:42:47.960113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:72176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.484 [2024-10-13 01:42:47.960127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.484 [2024-10-13 01:42:47.960141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:72184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.484 [2024-10-13 01:42:47.960154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.484 [2024-10-13 01:42:47.960168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:72192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.484 
[2024-10-13 01:42:47.960180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.484 [2024-10-13 01:42:47.960195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:72200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.484 [2024-10-13 01:42:47.960207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.484 [2024-10-13 01:42:47.960222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:72208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.484 [2024-10-13 01:42:47.960234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.484 [2024-10-13 01:42:47.960249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:72216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.484 [2024-10-13 01:42:47.960262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.484 [2024-10-13 01:42:47.960276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:72224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.484 [2024-10-13 01:42:47.960289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.484 [2024-10-13 01:42:47.960303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:72232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.484 [2024-10-13 01:42:47.960316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.484 [2024-10-13 01:42:47.960330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:72240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.484 [2024-10-13 01:42:47.960343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.484 [2024-10-13 01:42:47.960357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:72248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.484 [2024-10-13 01:42:47.960370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.484 [2024-10-13 01:42:47.960387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:72256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.484 [2024-10-13 01:42:47.960401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.484 [2024-10-13 01:42:47.960415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:72264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.484 [2024-10-13 01:42:47.960428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.484 [2024-10-13 01:42:47.960443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:72272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.484 [2024-10-13 01:42:47.960455] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.484 [2024-10-13 01:42:47.960476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:72280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.484 [2024-10-13 01:42:47.960492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.484 [2024-10-13 01:42:47.960521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:72288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.484 [2024-10-13 01:42:47.960536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.484 [2024-10-13 01:42:47.960551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:72296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.484 [2024-10-13 01:42:47.960564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.484 [2024-10-13 01:42:47.960579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:72304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.484 [2024-10-13 01:42:47.960593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.484 [2024-10-13 01:42:47.960607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:72312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.484 [2024-10-13 01:42:47.960620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.484 [2024-10-13 01:42:47.960635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:72320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.484 [2024-10-13 01:42:47.960649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.484 [2024-10-13 01:42:47.960663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:72328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.484 [2024-10-13 01:42:47.960677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.484 [2024-10-13 01:42:47.960692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:72336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.484 [2024-10-13 01:42:47.960705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.484 [2024-10-13 01:42:47.960719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:72344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.484 [2024-10-13 01:42:47.960733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.484 [2024-10-13 01:42:47.960755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.484 [2024-10-13 01:42:47.960768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.484 [2024-10-13 01:42:47.960787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:72360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.484 [2024-10-13 01:42:47.960801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.484 [2024-10-13 01:42:47.960816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:72368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.484 [2024-10-13 01:42:47.960830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.485 [2024-10-13 01:42:47.960844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:72376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.485 [2024-10-13 01:42:47.960857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.485 [2024-10-13 01:42:47.960872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.485 [2024-10-13 01:42:47.960886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.485 [2024-10-13 01:42:47.960901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:72392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.485 [2024-10-13 01:42:47.960914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.485 [2024-10-13 01:42:47.960929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:72400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.485 [2024-10-13 01:42:47.960942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.485 [2024-10-13 01:42:47.960957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:72408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.485 [2024-10-13 01:42:47.960970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.485 [2024-10-13 01:42:47.960986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:72416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.485 [2024-10-13 01:42:47.961000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.485 [2024-10-13 01:42:47.961015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:72424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.485 [2024-10-13 01:42:47.961028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.485 [2024-10-13 01:42:47.961043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:72432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.485 [2024-10-13 01:42:47.961056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:32:17.485 [2024-10-13 01:42:47.961071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:72440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.485 [2024-10-13 01:42:47.961084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.485 [2024-10-13 01:42:47.961099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:72448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.485 [2024-10-13 01:42:47.961112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.485 [2024-10-13 01:42:47.961127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:72456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.485 [2024-10-13 01:42:47.961144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.485 [2024-10-13 01:42:47.961159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:72464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.485 [2024-10-13 01:42:47.961172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.485 [2024-10-13 01:42:47.961188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:72472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.485 [2024-10-13 01:42:47.961201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.485 [2024-10-13 01:42:47.961216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:72480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.485 [2024-10-13 01:42:47.961229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.485 [2024-10-13 01:42:47.961244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:72488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.485 [2024-10-13 01:42:47.961258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.485 [2024-10-13 01:42:47.961273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:72496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.485 [2024-10-13 01:42:47.961286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.485 [2024-10-13 01:42:47.961300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.485 [2024-10-13 01:42:47.961313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.485 [2024-10-13 01:42:47.961329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:72512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.485 [2024-10-13 01:42:47.961342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.485 [2024-10-13 01:42:47.961356] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:72520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.485 [2024-10-13 01:42:47.961369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.485 [2024-10-13 01:42:47.961384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:72528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.485 [2024-10-13 01:42:47.961397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.485 [2024-10-13 01:42:47.961412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:72536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.485 [2024-10-13 01:42:47.961426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.485 [2024-10-13 01:42:47.961441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:72544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.485 [2024-10-13 01:42:47.961454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.485 [2024-10-13 01:42:47.961475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:72552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.485 [2024-10-13 01:42:47.961491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.485 [2024-10-13 01:42:47.961510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:72560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.485 [2024-10-13 01:42:47.961524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.485 [2024-10-13 01:42:47.961538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:72568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.485 [2024-10-13 01:42:47.961551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.485 [2024-10-13 01:42:47.961566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:72576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.485 [2024-10-13 01:42:47.961579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.485 [2024-10-13 01:42:47.961594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:72584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.485 [2024-10-13 01:42:47.961606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.485 [2024-10-13 01:42:47.961621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:72592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.485 [2024-10-13 01:42:47.961634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.485 [2024-10-13 01:42:47.961648] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:72600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.485 [2024-10-13 01:42:47.961661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.485 [2024-10-13 01:42:47.961676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:72608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.485 [2024-10-13 01:42:47.961689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.485 [2024-10-13 01:42:47.961704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:72616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.485 [2024-10-13 01:42:47.961717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.485 [2024-10-13 01:42:47.961732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:72624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.485 [2024-10-13 01:42:47.961745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.485 [2024-10-13 01:42:47.961760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:72632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.485 [2024-10-13 01:42:47.961773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.485 [2024-10-13 01:42:47.961788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:72640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.485 [2024-10-13 01:42:47.961801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.485 [2024-10-13 01:42:47.961816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:72648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.485 [2024-10-13 01:42:47.961829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.485 [2024-10-13 01:42:47.961844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:72656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.485 [2024-10-13 01:42:47.961861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.485 [2024-10-13 01:42:47.961877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:72664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.486 [2024-10-13 01:42:47.961890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.486 [2024-10-13 01:42:47.961905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:72672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.486 [2024-10-13 01:42:47.961918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.486 [2024-10-13 01:42:47.961933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:72680 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.486 [2024-10-13 01:42:47.961946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.486 [2024-10-13 01:42:47.961961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:72688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.486 [2024-10-13 01:42:47.961974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.486 [2024-10-13 01:42:47.961989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:72696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.486 [2024-10-13 01:42:47.962002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.486 [2024-10-13 01:42:47.962017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.486 [2024-10-13 01:42:47.962030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.486 [2024-10-13 01:42:47.962045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:72712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.486 [2024-10-13 01:42:47.962058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.486 [2024-10-13 01:42:47.962073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:72720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.486 [2024-10-13 01:42:47.962086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.486 [2024-10-13 01:42:47.962100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:72728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.486 [2024-10-13 01:42:47.962114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.486 [2024-10-13 01:42:47.962128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.486 [2024-10-13 01:42:47.962142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.486 [2024-10-13 01:42:47.962156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:72744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.486 [2024-10-13 01:42:47.962170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.486 [2024-10-13 01:42:47.962185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:72752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.486 [2024-10-13 01:42:47.962199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.486 [2024-10-13 01:42:47.962213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:72760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:17.486 [2024-10-13 01:42:47.962230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.486 [2024-10-13 01:42:47.962245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:72768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.486 [2024-10-13 01:42:47.962258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.486 [2024-10-13 01:42:47.962273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:72776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.486 [2024-10-13 01:42:47.962286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.486 [2024-10-13 01:42:47.962301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:72784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.486 [2024-10-13 01:42:47.962314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.486 [2024-10-13 01:42:47.962329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:72792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.486 [2024-10-13 01:42:47.962342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.486 [2024-10-13 01:42:47.962358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:72800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.486 [2024-10-13 01:42:47.962371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.486 [2024-10-13 01:42:47.962403] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.486 [2024-10-13 01:42:47.962420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72808 len:8 PRP1 0x0 PRP2 0x0 00:32:17.486 [2024-10-13 01:42:47.962434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.486 [2024-10-13 01:42:47.962451] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.486 [2024-10-13 01:42:47.962476] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.486 [2024-10-13 01:42:47.962489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72816 len:8 PRP1 0x0 PRP2 0x0 00:32:17.486 [2024-10-13 01:42:47.962502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.486 [2024-10-13 01:42:47.962515] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.486 [2024-10-13 01:42:47.962526] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.486 [2024-10-13 01:42:47.962537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72824 len:8 PRP1 0x0 PRP2 0x0 00:32:17.486 [2024-10-13 01:42:47.962549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:32:17.486 [2024-10-13 01:42:47.962562] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.486 [2024-10-13 01:42:47.962573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.486 [2024-10-13 01:42:47.962583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72832 len:8 PRP1 0x0 PRP2 0x0 00:32:17.486 [2024-10-13 01:42:47.962596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.486 [2024-10-13 01:42:47.962609] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.486 [2024-10-13 01:42:47.962619] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.486 [2024-10-13 01:42:47.962635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72840 len:8 PRP1 0x0 PRP2 0x0 00:32:17.486 [2024-10-13 01:42:47.962648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.486 [2024-10-13 01:42:47.962661] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.486 [2024-10-13 01:42:47.962672] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.486 [2024-10-13 01:42:47.962682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72848 len:8 PRP1 0x0 PRP2 0x0 00:32:17.486 [2024-10-13 01:42:47.962695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.486 [2024-10-13 01:42:47.962707] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.486 [2024-10-13 01:42:47.962718] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.486 [2024-10-13 01:42:47.962729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72856 len:8 PRP1 0x0 PRP2 0x0 00:32:17.486 [2024-10-13 01:42:47.962741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.486 [2024-10-13 01:42:47.962754] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.486 [2024-10-13 01:42:47.962764] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.486 [2024-10-13 01:42:47.962777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72864 len:8 PRP1 0x0 PRP2 0x0 00:32:17.486 [2024-10-13 01:42:47.962790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.486 [2024-10-13 01:42:47.962803] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.486 [2024-10-13 01:42:47.962813] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.486 [2024-10-13 01:42:47.962824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72872 len:8 PRP1 0x0 PRP2 0x0 00:32:17.486 [2024-10-13 01:42:47.962836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.486 [2024-10-13 01:42:47.962848] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.486 [2024-10-13 01:42:47.962859] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.486 [2024-10-13 01:42:47.962869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72880 len:8 PRP1 0x0 PRP2 0x0 00:32:17.486 [2024-10-13 01:42:47.962881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.486 [2024-10-13 01:42:47.962894] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.486 [2024-10-13 01:42:47.962904] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.486 [2024-10-13 01:42:47.962915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72888 len:8 PRP1 0x0 PRP2 0x0 00:32:17.486 [2024-10-13 01:42:47.962927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.486 [2024-10-13 01:42:47.962939] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.486 [2024-10-13 01:42:47.962949] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.486 [2024-10-13 01:42:47.962960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72896 len:8 PRP1 0x0 PRP2 0x0 00:32:17.486 [2024-10-13 01:42:47.962973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.486 [2024-10-13 01:42:47.962985] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.486 [2024-10-13 01:42:47.962999] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.486 [2024-10-13 01:42:47.963011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72904 len:8 PRP1 0x0 PRP2 0x0 00:32:17.486 [2024-10-13 01:42:47.963023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.486 [2024-10-13 01:42:47.963036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.486 [2024-10-13 01:42:47.963046] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.486 [2024-10-13 01:42:47.963057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72912 len:8 PRP1 0x0 PRP2 0x0 00:32:17.486 [2024-10-13 01:42:47.963069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.486 [2024-10-13 01:42:47.963082] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.487 [2024-10-13 01:42:47.963092] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.487 [2024-10-13 01:42:47.963103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72920 len:8 PRP1 0x0 PRP2 0x0 00:32:17.487 [2024-10-13 01:42:47.963115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.487 [2024-10-13 01:42:47.963127] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:32:17.487 [2024-10-13 01:42:47.963137] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.487 [2024-10-13 01:42:47.963148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72928 len:8 PRP1 0x0 PRP2 0x0 00:32:17.487 [2024-10-13 01:42:47.963160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.487 [2024-10-13 01:42:47.963173] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.487 [2024-10-13 01:42:47.963183] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.487 [2024-10-13 01:42:47.963194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72936 len:8 PRP1 0x0 PRP2 0x0 00:32:17.487 [2024-10-13 01:42:47.963206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.487 [2024-10-13 01:42:47.963218] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.487 [2024-10-13 01:42:47.963228] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.487 [2024-10-13 01:42:47.963238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72944 len:8 PRP1 0x0 PRP2 0x0 00:32:17.487 [2024-10-13 01:42:47.963250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.487 [2024-10-13 01:42:47.963263] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.487 [2024-10-13 01:42:47.963273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.487 [2024-10-13 01:42:47.963284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72952 len:8 PRP1 0x0 PRP2 0x0 00:32:17.487 [2024-10-13 01:42:47.963296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.487 [2024-10-13 01:42:47.963308] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.487 [2024-10-13 01:42:47.963319] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.487 [2024-10-13 01:42:47.963330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72960 len:8 PRP1 0x0 PRP2 0x0 00:32:17.487 [2024-10-13 01:42:47.963342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.487 [2024-10-13 01:42:47.963358] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.487 [2024-10-13 01:42:47.963369] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.487 [2024-10-13 01:42:47.963379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72968 len:8 PRP1 0x0 PRP2 0x0 00:32:17.487 [2024-10-13 01:42:47.963392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.487 [2024-10-13 01:42:47.963404] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.487 [2024-10-13 01:42:47.963415] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.487 [2024-10-13 01:42:47.963425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72976 len:8 PRP1 0x0 PRP2 0x0 00:32:17.487 [2024-10-13 01:42:47.963437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.487 [2024-10-13 01:42:47.963450] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.487 [2024-10-13 01:42:47.963460] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.487 [2024-10-13 01:42:47.963479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72984 len:8 PRP1 0x0 PRP2 0x0 00:32:17.487 [2024-10-13 01:42:47.963493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.487 [2024-10-13 01:42:47.963551] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1db61e0 was disconnected and freed. reset controller. 00:32:17.487 [2024-10-13 01:42:47.963570] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:17.487 [2024-10-13 01:42:47.963605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.487 [2024-10-13 01:42:47.963624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.487 [2024-10-13 01:42:47.963647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.487 [2024-10-13 01:42:47.963667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.487 [2024-10-13 01:42:47.963682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.487 [2024-10-13 01:42:47.963695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.487 [2024-10-13 01:42:47.963708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.487 [2024-10-13 01:42:47.963721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.487 [2024-10-13 01:42:47.963734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.487 [2024-10-13 01:42:47.963782] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d99470 (9): Bad file descriptor 00:32:17.487 [2024-10-13 01:42:47.966993] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.487 [2024-10-13 01:42:48.086605] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
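The block above is one full failover iteration as seen from the initiator: while the TCP qpair to 10.0.0.2:4420 is torn down, each queued WRITE on qid:1 is aborted and completed manually with ABORTED - SQ DELETION (00/08), bdev_nvme then fails the trid over to 10.0.0.2:4421, the four outstanding ASYNC EVENT REQUESTs on the admin queue (qid:0, cid 0-3) are aborted, the controller briefly reports a failed state and a Bad file descriptor on the old socket, and the reset completes successfully. A minimal sketch for summarizing these events from a saved copy of this console output follows; the file name console.log and the regexes are illustrative assumptions, not part of the test itself:

import re
from collections import Counter

ABORT_RE    = re.compile(r"nvme_qpair_abort_queued_reqs: \*ERROR\*: aborting queued i/o")
FAILOVER_RE = re.compile(r"bdev_nvme_failover_trid: \*NOTICE\*: Start failover from (\S+) to (\S+)")
RESET_OK_RE = re.compile(r"Resetting controller successful")

counts, failovers = Counter(), []
with open("console.log", encoding="utf-8", errors="replace") as fh:
    for line in fh:
        # Several entries may share one physical line in a saved copy, so count matches, not lines.
        counts["aborted queued i/o"] += len(ABORT_RE.findall(line))
        counts["successful controller resets"] += len(RESET_OK_RE.findall(line))
        failovers += FAILOVER_RE.findall(line)

print(dict(counts))
for src, dst in failovers:
    print(f"failover: {src} -> {dst}")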
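The throughput samples interleaved just below (7626.00 IOPS, 29.79 MiB/s and so on) are consistent with 4 KiB I/Os, i.e. the len:8 blocks of 512 bytes seen in the WRITE entries: MiB/s is simply IOPS times 4096 bytes divided by 2^20. A quick check, assuming that 4 KiB I/O size:

# Assumption: 4 KiB per I/O (len:8 logical blocks of 512 bytes each).
io_size_bytes = 8 * 512
for iops in (7626.00, 7977.00, 8137.75):
    print(f"{iops:.2f} IOPS -> {iops * io_size_bytes / 2**20:.2f} MiB/s")
# Prints 29.79, 31.16 and 31.79 MiB/s, matching the samples reported below.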
00:32:17.487 7626.00 IOPS, 29.79 MiB/s [2024-10-12T23:43:03.065Z] 7977.00 IOPS, 31.16 MiB/s [2024-10-12T23:43:03.065Z] 8137.75 IOPS, 31.79 MiB/s [2024-10-12T23:43:03.065Z] [2024-10-13 01:42:51.767223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:98936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.487 [2024-10-13 01:42:51.767268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.487 [2024-10-13 01:42:51.767302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:98944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.487 [2024-10-13 01:42:51.767318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.487 [2024-10-13 01:42:51.767334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:98952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.487 [2024-10-13 01:42:51.767348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.487 [2024-10-13 01:42:51.767363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.487 [2024-10-13 01:42:51.767377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.487 [2024-10-13 01:42:51.767392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:98968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.487 [2024-10-13 01:42:51.767406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.487 [2024-10-13 01:42:51.767421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:98976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.487 [2024-10-13 01:42:51.767434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.487 [2024-10-13 01:42:51.767449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:98984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.487 [2024-10-13 01:42:51.767465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.487 [2024-10-13 01:42:51.767489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:98992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.487 [2024-10-13 01:42:51.767504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.487 [2024-10-13 01:42:51.767519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.487 [2024-10-13 01:42:51.767532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.487 [2024-10-13 01:42:51.767547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:99008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.487 [2024-10-13 01:42:51.767560] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.487 [2024-10-13 01:42:51.767575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.487 [2024-10-13 01:42:51.767588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.487 [2024-10-13 01:42:51.767603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:99024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.487 [2024-10-13 01:42:51.767617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.487 [2024-10-13 01:42:51.767632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:99032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.487 [2024-10-13 01:42:51.767645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.487 [2024-10-13 01:42:51.767660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:99040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.488 [2024-10-13 01:42:51.767678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.488 [2024-10-13 01:42:51.767693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:99048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.488 [2024-10-13 01:42:51.767707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.488 [2024-10-13 01:42:51.767722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:99056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.488 [2024-10-13 01:42:51.767735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.488 [2024-10-13 01:42:51.767750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.488 [2024-10-13 01:42:51.767763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.488 [2024-10-13 01:42:51.767780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:99072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.488 [2024-10-13 01:42:51.767793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.488 [2024-10-13 01:42:51.767808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:99080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.488 [2024-10-13 01:42:51.767821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.488 [2024-10-13 01:42:51.767835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:99088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.488 [2024-10-13 01:42:51.767849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.488 [2024-10-13 01:42:51.767863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.488 [2024-10-13 01:42:51.767876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.488 [2024-10-13 01:42:51.767892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:99104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.488 [2024-10-13 01:42:51.767904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.488 [2024-10-13 01:42:51.767919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:99112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.488 [2024-10-13 01:42:51.767933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.488 [2024-10-13 01:42:51.767948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:99120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.488 [2024-10-13 01:42:51.767961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.488 [2024-10-13 01:42:51.767976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:99128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.488 [2024-10-13 01:42:51.767990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.488 [2024-10-13 01:42:51.768006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:99136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.488 [2024-10-13 01:42:51.768020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.488 [2024-10-13 01:42:51.768043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:99144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.488 [2024-10-13 01:42:51.768058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.488 [2024-10-13 01:42:51.768073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:99152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.488 [2024-10-13 01:42:51.768086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.488 [2024-10-13 01:42:51.768101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:99160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.488 [2024-10-13 01:42:51.768114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.488 [2024-10-13 01:42:51.768129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:99168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.488 [2024-10-13 01:42:51.768142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:32:17.488 [2024-10-13 01:42:51.768157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:99176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.488 [2024-10-13 01:42:51.768170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.488 [2024-10-13 01:42:51.768185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:99184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.488 [2024-10-13 01:42:51.768198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.488 [2024-10-13 01:42:51.768213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:99192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.488 [2024-10-13 01:42:51.768226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.488 [2024-10-13 01:42:51.768241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:99200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.488 [2024-10-13 01:42:51.768254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.488 [2024-10-13 01:42:51.768269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:99208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.488 [2024-10-13 01:42:51.768282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.488 [2024-10-13 01:42:51.768297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:99216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.488 [2024-10-13 01:42:51.768309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.488 [2024-10-13 01:42:51.768324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:99224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.488 [2024-10-13 01:42:51.768337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.488 [2024-10-13 01:42:51.768352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:99232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.488 [2024-10-13 01:42:51.768365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.488 [2024-10-13 01:42:51.768380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.488 [2024-10-13 01:42:51.768397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.488 [2024-10-13 01:42:51.768412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.488 [2024-10-13 01:42:51.768427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.488 [2024-10-13 01:42:51.768441] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.488 [2024-10-13 01:42:51.768454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.488 [2024-10-13 01:42:51.768475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:99264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.488 [2024-10-13 01:42:51.768500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.488 [2024-10-13 01:42:51.768515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:99272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.488 [2024-10-13 01:42:51.768529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.488 [2024-10-13 01:42:51.768544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:99280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.488 [2024-10-13 01:42:51.768557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.488 [2024-10-13 01:42:51.768572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:99288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.488 [2024-10-13 01:42:51.768585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.488 [2024-10-13 01:42:51.768600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:99296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.488 [2024-10-13 01:42:51.768613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.488 [2024-10-13 01:42:51.768628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:99304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.488 [2024-10-13 01:42:51.768641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.488 [2024-10-13 01:42:51.768656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:99312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.488 [2024-10-13 01:42:51.768669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.488 [2024-10-13 01:42:51.768683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:99320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.488 [2024-10-13 01:42:51.768697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.488 [2024-10-13 01:42:51.768711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:99328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.489 [2024-10-13 01:42:51.768724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.489 [2024-10-13 01:42:51.768739] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:99336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.489 [2024-10-13 01:42:51.768754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.489 [2024-10-13 01:42:51.768769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.489 [2024-10-13 01:42:51.768786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.489 [2024-10-13 01:42:51.768801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:99352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.489 [2024-10-13 01:42:51.768819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.489 [2024-10-13 01:42:51.768834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:99360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.489 [2024-10-13 01:42:51.768847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.489 [2024-10-13 01:42:51.768862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:99368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.489 [2024-10-13 01:42:51.768875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.489 [2024-10-13 01:42:51.768890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:99376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.489 [2024-10-13 01:42:51.768910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.489 [2024-10-13 01:42:51.768925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:99384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.489 [2024-10-13 01:42:51.768939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.489 [2024-10-13 01:42:51.768953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:99392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.489 [2024-10-13 01:42:51.768967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.489 [2024-10-13 01:42:51.768982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:99400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.489 [2024-10-13 01:42:51.768995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.489 [2024-10-13 01:42:51.769010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:99408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.489 [2024-10-13 01:42:51.769023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.489 [2024-10-13 01:42:51.769037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:99416 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.489 [2024-10-13 01:42:51.769051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.489 [2024-10-13 01:42:51.769066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:99424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.489 [2024-10-13 01:42:51.769079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.489 [2024-10-13 01:42:51.769094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:99432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.489 [2024-10-13 01:42:51.769107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.489 [2024-10-13 01:42:51.769122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:99440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.489 [2024-10-13 01:42:51.769134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.489 [2024-10-13 01:42:51.769154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:99448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.489 [2024-10-13 01:42:51.769168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.489 [2024-10-13 01:42:51.769183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:99456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.489 [2024-10-13 01:42:51.769196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.489 [2024-10-13 01:42:51.769211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:99464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.489 [2024-10-13 01:42:51.769224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.489 [2024-10-13 01:42:51.769239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:99472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.489 [2024-10-13 01:42:51.769252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.489 [2024-10-13 01:42:51.769267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:99480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.489 [2024-10-13 01:42:51.769281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.489 [2024-10-13 01:42:51.769295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:99488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.489 [2024-10-13 01:42:51.769309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.489 [2024-10-13 01:42:51.769323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:99496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.489 
[2024-10-13 01:42:51.769337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.489 [2024-10-13 01:42:51.769352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:99504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.489 [2024-10-13 01:42:51.769365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.489 [2024-10-13 01:42:51.769380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:99512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.489 [2024-10-13 01:42:51.769393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.489 [2024-10-13 01:42:51.769408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:99520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.489 [2024-10-13 01:42:51.769421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.489 [2024-10-13 01:42:51.769436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:99528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.489 [2024-10-13 01:42:51.769449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.489 [2024-10-13 01:42:51.769463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:99536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.489 [2024-10-13 01:42:51.769486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.489 [2024-10-13 01:42:51.769502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:99544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.489 [2024-10-13 01:42:51.769520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.489 [2024-10-13 01:42:51.769535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:99552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.489 [2024-10-13 01:42:51.769548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.489 [2024-10-13 01:42:51.769563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:99560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.489 [2024-10-13 01:42:51.769576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.489 [2024-10-13 01:42:51.769591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:99568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.489 [2024-10-13 01:42:51.769604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.489 [2024-10-13 01:42:51.769618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:99576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.489 [2024-10-13 01:42:51.769631] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.489 [2024-10-13 01:42:51.769645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:99584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.489 [2024-10-13 01:42:51.769658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.489 [2024-10-13 01:42:51.769673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:99592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.489 [2024-10-13 01:42:51.769686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.489 [2024-10-13 01:42:51.769701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:98816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.489 [2024-10-13 01:42:51.769714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.489 [2024-10-13 01:42:51.769728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:98824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.489 [2024-10-13 01:42:51.769741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.489 [2024-10-13 01:42:51.769756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:98832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.489 [2024-10-13 01:42:51.769769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.489 [2024-10-13 01:42:51.769783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:98840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.489 [2024-10-13 01:42:51.769796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.489 [2024-10-13 01:42:51.769811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:98848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.489 [2024-10-13 01:42:51.769825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.489 [2024-10-13 01:42:51.769840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:98856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.489 [2024-10-13 01:42:51.769853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.489 [2024-10-13 01:42:51.769872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:98864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.490 [2024-10-13 01:42:51.769886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.490 [2024-10-13 01:42:51.769901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:99600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.490 [2024-10-13 01:42:51.769914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.490 [2024-10-13 01:42:51.769929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:99608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.490 [2024-10-13 01:42:51.769942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.490 [2024-10-13 01:42:51.769957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:99616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.490 [2024-10-13 01:42:51.769970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.490 [2024-10-13 01:42:51.769985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:99624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.490 [2024-10-13 01:42:51.769998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.490 [2024-10-13 01:42:51.770013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:99632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.490 [2024-10-13 01:42:51.770026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.490 [2024-10-13 01:42:51.770040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:99640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.490 [2024-10-13 01:42:51.770054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.490 [2024-10-13 01:42:51.770068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:99648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.490 [2024-10-13 01:42:51.770081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.490 [2024-10-13 01:42:51.770096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:99656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.490 [2024-10-13 01:42:51.770109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.490 [2024-10-13 01:42:51.770124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:99664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.490 [2024-10-13 01:42:51.770137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.490 [2024-10-13 01:42:51.770153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:99672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.490 [2024-10-13 01:42:51.770166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.490 [2024-10-13 01:42:51.770181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:99680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.490 [2024-10-13 01:42:51.770195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:32:17.490 [2024-10-13 01:42:51.770226] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.490 [2024-10-13 01:42:51.770243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99688 len:8 PRP1 0x0 PRP2 0x0 00:32:17.490 [2024-10-13 01:42:51.770261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.490 [2024-10-13 01:42:51.770308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.490 [2024-10-13 01:42:51.770329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.490 [2024-10-13 01:42:51.770344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.490 [2024-10-13 01:42:51.770358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.490 [2024-10-13 01:42:51.770372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.490 [2024-10-13 01:42:51.770384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.490 [2024-10-13 01:42:51.770398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.490 [2024-10-13 01:42:51.770411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.490 [2024-10-13 01:42:51.770424] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d99470 is same with the state(6) to be set 00:32:17.490 [2024-10-13 01:42:51.770673] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.490 [2024-10-13 01:42:51.770694] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.490 [2024-10-13 01:42:51.770707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99696 len:8 PRP1 0x0 PRP2 0x0 00:32:17.490 [2024-10-13 01:42:51.770720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.490 [2024-10-13 01:42:51.770736] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.490 [2024-10-13 01:42:51.770748] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.490 [2024-10-13 01:42:51.770760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99704 len:8 PRP1 0x0 PRP2 0x0 00:32:17.490 [2024-10-13 01:42:51.770773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.490 [2024-10-13 01:42:51.770786] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.490 [2024-10-13 01:42:51.770796] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.490 [2024-10-13 01:42:51.770808] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99712 len:8 PRP1 0x0 PRP2 0x0 00:32:17.490 [2024-10-13 01:42:51.770820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.490 [2024-10-13 01:42:51.770833] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.490 [2024-10-13 01:42:51.770844] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.490 [2024-10-13 01:42:51.770854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99720 len:8 PRP1 0x0 PRP2 0x0 00:32:17.490 [2024-10-13 01:42:51.770867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.490 [2024-10-13 01:42:51.770879] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.490 [2024-10-13 01:42:51.770890] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.490 [2024-10-13 01:42:51.770901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99728 len:8 PRP1 0x0 PRP2 0x0 00:32:17.490 [2024-10-13 01:42:51.770918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.490 [2024-10-13 01:42:51.770931] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.490 [2024-10-13 01:42:51.770943] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.490 [2024-10-13 01:42:51.770954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99736 len:8 PRP1 0x0 PRP2 0x0 00:32:17.490 [2024-10-13 01:42:51.770966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.490 [2024-10-13 01:42:51.770980] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.490 [2024-10-13 01:42:51.770991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.490 [2024-10-13 01:42:51.771002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99744 len:8 PRP1 0x0 PRP2 0x0 00:32:17.490 [2024-10-13 01:42:51.771013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.490 [2024-10-13 01:42:51.771026] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.490 [2024-10-13 01:42:51.771036] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.490 [2024-10-13 01:42:51.771047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99752 len:8 PRP1 0x0 PRP2 0x0 00:32:17.490 [2024-10-13 01:42:51.771059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.490 [2024-10-13 01:42:51.771072] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.490 [2024-10-13 01:42:51.771083] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.490 [2024-10-13 01:42:51.771093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99760 len:8 PRP1 
0x0 PRP2 0x0 00:32:17.490 [2024-10-13 01:42:51.771105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.490 [2024-10-13 01:42:51.771118] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.490 [2024-10-13 01:42:51.771129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.490 [2024-10-13 01:42:51.771139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99768 len:8 PRP1 0x0 PRP2 0x0 00:32:17.490 [2024-10-13 01:42:51.771152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.490 [2024-10-13 01:42:51.771164] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.490 [2024-10-13 01:42:51.771175] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.490 [2024-10-13 01:42:51.771186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99776 len:8 PRP1 0x0 PRP2 0x0 00:32:17.490 [2024-10-13 01:42:51.771198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.490 [2024-10-13 01:42:51.771211] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.490 [2024-10-13 01:42:51.771222] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.490 [2024-10-13 01:42:51.771232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99784 len:8 PRP1 0x0 PRP2 0x0 00:32:17.490 [2024-10-13 01:42:51.771245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.490 [2024-10-13 01:42:51.771257] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.490 [2024-10-13 01:42:51.771268] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.490 [2024-10-13 01:42:51.771283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99792 len:8 PRP1 0x0 PRP2 0x0 00:32:17.490 [2024-10-13 01:42:51.771295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.490 [2024-10-13 01:42:51.771309] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.490 [2024-10-13 01:42:51.771320] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.491 [2024-10-13 01:42:51.771330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99800 len:8 PRP1 0x0 PRP2 0x0 00:32:17.491 [2024-10-13 01:42:51.771342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.491 [2024-10-13 01:42:51.771355] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.491 [2024-10-13 01:42:51.771371] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.491 [2024-10-13 01:42:51.771383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99808 len:8 PRP1 0x0 PRP2 0x0 00:32:17.491 [2024-10-13 01:42:51.771395] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.491 [2024-10-13 01:42:51.771408] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.491 [2024-10-13 01:42:51.771418] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.491 [2024-10-13 01:42:51.771429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99816 len:8 PRP1 0x0 PRP2 0x0 00:32:17.491 [2024-10-13 01:42:51.771441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.491 [2024-10-13 01:42:51.771454] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.491 [2024-10-13 01:42:51.771464] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.491 [2024-10-13 01:42:51.771483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99824 len:8 PRP1 0x0 PRP2 0x0 00:32:17.491 [2024-10-13 01:42:51.771497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.491 [2024-10-13 01:42:51.771510] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.491 [2024-10-13 01:42:51.771521] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.491 [2024-10-13 01:42:51.771531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99832 len:8 PRP1 0x0 PRP2 0x0 00:32:17.491 [2024-10-13 01:42:51.771544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.491 [2024-10-13 01:42:51.771556] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.491 [2024-10-13 01:42:51.771567] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.491 [2024-10-13 01:42:51.771577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98872 len:8 PRP1 0x0 PRP2 0x0 00:32:17.491 [2024-10-13 01:42:51.771590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.491 [2024-10-13 01:42:51.771603] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.491 [2024-10-13 01:42:51.771613] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.491 [2024-10-13 01:42:51.771625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98880 len:8 PRP1 0x0 PRP2 0x0 00:32:17.491 [2024-10-13 01:42:51.771637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.491 [2024-10-13 01:42:51.771653] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.491 [2024-10-13 01:42:51.771664] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.491 [2024-10-13 01:42:51.771676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98888 len:8 PRP1 0x0 PRP2 0x0 00:32:17.491 [2024-10-13 01:42:51.771689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.491 [2024-10-13 01:42:51.771701] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.491 [2024-10-13 01:42:51.771712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.491 [2024-10-13 01:42:51.771723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98896 len:8 PRP1 0x0 PRP2 0x0 00:32:17.491 [2024-10-13 01:42:51.771735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.491 [2024-10-13 01:42:51.771748] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.491 [2024-10-13 01:42:51.771764] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.491 [2024-10-13 01:42:51.771775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98904 len:8 PRP1 0x0 PRP2 0x0 00:32:17.491 [2024-10-13 01:42:51.771788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.491 [2024-10-13 01:42:51.771801] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.491 [2024-10-13 01:42:51.771811] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.491 [2024-10-13 01:42:51.771822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98912 len:8 PRP1 0x0 PRP2 0x0 00:32:17.491 [2024-10-13 01:42:51.771835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.491 [2024-10-13 01:42:51.771848] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.491 [2024-10-13 01:42:51.771858] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.491 [2024-10-13 01:42:51.771869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98920 len:8 PRP1 0x0 PRP2 0x0 00:32:17.491 [2024-10-13 01:42:51.771881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.491 [2024-10-13 01:42:51.771894] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.491 [2024-10-13 01:42:51.771904] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.491 [2024-10-13 01:42:51.771915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98928 len:8 PRP1 0x0 PRP2 0x0 00:32:17.491 [2024-10-13 01:42:51.771927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.491 [2024-10-13 01:42:51.771940] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.491 [2024-10-13 01:42:51.771951] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.491 [2024-10-13 01:42:51.771961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98936 len:8 PRP1 0x0 PRP2 0x0 00:32:17.491 [2024-10-13 01:42:51.771974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:32:17.491 [2024-10-13 01:42:51.771986] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.491 [2024-10-13 01:42:51.771997] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.491 [2024-10-13 01:42:51.772008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98944 len:8 PRP1 0x0 PRP2 0x0 00:32:17.491 [2024-10-13 01:42:51.772020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.491 [2024-10-13 01:42:51.772036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.491 [2024-10-13 01:42:51.772048] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.491 [2024-10-13 01:42:51.772059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98952 len:8 PRP1 0x0 PRP2 0x0 00:32:17.491 [2024-10-13 01:42:51.772072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.491 [2024-10-13 01:42:51.772085] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.491 [2024-10-13 01:42:51.772095] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.491 [2024-10-13 01:42:51.772106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98960 len:8 PRP1 0x0 PRP2 0x0 00:32:17.491 [2024-10-13 01:42:51.772118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.491 [2024-10-13 01:42:51.772131] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.491 [2024-10-13 01:42:51.772146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.491 [2024-10-13 01:42:51.772158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98968 len:8 PRP1 0x0 PRP2 0x0 00:32:17.491 [2024-10-13 01:42:51.772170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.491 [2024-10-13 01:42:51.772183] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.491 [2024-10-13 01:42:51.772193] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.491 [2024-10-13 01:42:51.772204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98976 len:8 PRP1 0x0 PRP2 0x0 00:32:17.491 [2024-10-13 01:42:51.772216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.491 [2024-10-13 01:42:51.772229] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.491 [2024-10-13 01:42:51.772239] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.491 [2024-10-13 01:42:51.772250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98984 len:8 PRP1 0x0 PRP2 0x0 00:32:17.491 [2024-10-13 01:42:51.772262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.491 [2024-10-13 01:42:51.772275] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.491 [2024-10-13 01:42:51.772285] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.491 [2024-10-13 01:42:51.772296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98992 len:8 PRP1 0x0 PRP2 0x0 00:32:17.491 [2024-10-13 01:42:51.772309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.491 [2024-10-13 01:42:51.772322] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.491 [2024-10-13 01:42:51.772332] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.491 [2024-10-13 01:42:51.772344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99000 len:8 PRP1 0x0 PRP2 0x0 00:32:17.491 [2024-10-13 01:42:51.772356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.491 [2024-10-13 01:42:51.772369] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.491 [2024-10-13 01:42:51.772380] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.491 [2024-10-13 01:42:51.772394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99008 len:8 PRP1 0x0 PRP2 0x0 00:32:17.491 [2024-10-13 01:42:51.772408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.491 [2024-10-13 01:42:51.772421] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.491 [2024-10-13 01:42:51.772431] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.491 [2024-10-13 01:42:51.772443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99016 len:8 PRP1 0x0 PRP2 0x0 00:32:17.491 [2024-10-13 01:42:51.772461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.491 [2024-10-13 01:42:51.772481] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.492 [2024-10-13 01:42:51.772493] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.492 [2024-10-13 01:42:51.772504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99024 len:8 PRP1 0x0 PRP2 0x0 00:32:17.492 [2024-10-13 01:42:51.772516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.492 [2024-10-13 01:42:51.772529] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.492 [2024-10-13 01:42:51.772544] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.492 [2024-10-13 01:42:51.772556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99032 len:8 PRP1 0x0 PRP2 0x0 00:32:17.492 [2024-10-13 01:42:51.772568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.492 [2024-10-13 01:42:51.772581] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:32:17.492 [2024-10-13 01:42:51.772592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.492 [2024-10-13 01:42:51.772603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99040 len:8 PRP1 0x0 PRP2 0x0 00:32:17.492 [2024-10-13 01:42:51.772615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.492 [2024-10-13 01:42:51.772628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.492 [2024-10-13 01:42:51.772638] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.492 [2024-10-13 01:42:51.772649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99048 len:8 PRP1 0x0 PRP2 0x0 00:32:17.492 [2024-10-13 01:42:51.772661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.492 [2024-10-13 01:42:51.772674] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.492 [2024-10-13 01:42:51.772685] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.492 [2024-10-13 01:42:51.772696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99056 len:8 PRP1 0x0 PRP2 0x0 00:32:17.492 [2024-10-13 01:42:51.772708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.492 [2024-10-13 01:42:51.772720] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.492 [2024-10-13 01:42:51.772731] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.492 [2024-10-13 01:42:51.772742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99064 len:8 PRP1 0x0 PRP2 0x0 00:32:17.492 [2024-10-13 01:42:51.772754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.492 [2024-10-13 01:42:51.772771] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.492 [2024-10-13 01:42:51.772785] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.492 [2024-10-13 01:42:51.772797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99072 len:8 PRP1 0x0 PRP2 0x0 00:32:17.492 [2024-10-13 01:42:51.772809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.492 [2024-10-13 01:42:51.772822] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.492 [2024-10-13 01:42:51.772833] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.492 [2024-10-13 01:42:51.772844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99080 len:8 PRP1 0x0 PRP2 0x0 00:32:17.492 [2024-10-13 01:42:51.772857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.492 [2024-10-13 01:42:51.772869] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.492 [2024-10-13 
01:42:51.772879] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.492 [2024-10-13 01:42:51.772890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99088 len:8 PRP1 0x0 PRP2 0x0 00:32:17.492 [2024-10-13 01:42:51.772902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.492 [2024-10-13 01:42:51.772915] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.492 [2024-10-13 01:42:51.772926] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.492 [2024-10-13 01:42:51.772937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99096 len:8 PRP1 0x0 PRP2 0x0 00:32:17.492 [2024-10-13 01:42:51.772949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.492 [2024-10-13 01:42:51.772961] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.492 [2024-10-13 01:42:51.772972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.492 [2024-10-13 01:42:51.772982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99104 len:8 PRP1 0x0 PRP2 0x0 00:32:17.492 [2024-10-13 01:42:51.772994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.492 [2024-10-13 01:42:51.773007] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.492 [2024-10-13 01:42:51.773017] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.492 [2024-10-13 01:42:51.773028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99112 len:8 PRP1 0x0 PRP2 0x0 00:32:17.492 [2024-10-13 01:42:51.773040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.492 [2024-10-13 01:42:51.773052] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.492 [2024-10-13 01:42:51.773063] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.492 [2024-10-13 01:42:51.773073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99120 len:8 PRP1 0x0 PRP2 0x0 00:32:17.492 [2024-10-13 01:42:51.773086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.492 [2024-10-13 01:42:51.773098] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.492 [2024-10-13 01:42:51.773108] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.492 [2024-10-13 01:42:51.773119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99128 len:8 PRP1 0x0 PRP2 0x0 00:32:17.492 [2024-10-13 01:42:51.773131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.492 [2024-10-13 01:42:51.773147] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.492 [2024-10-13 01:42:51.773158] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.492 [2024-10-13 01:42:51.773169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99136 len:8 PRP1 0x0 PRP2 0x0 00:32:17.492 [2024-10-13 01:42:51.773181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.492 [2024-10-13 01:42:51.773193] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.492 [2024-10-13 01:42:51.773204] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.492 [2024-10-13 01:42:51.773215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99144 len:8 PRP1 0x0 PRP2 0x0 00:32:17.492 [2024-10-13 01:42:51.773227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.492 [2024-10-13 01:42:51.773239] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.492 [2024-10-13 01:42:51.773250] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.492 [2024-10-13 01:42:51.773261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99152 len:8 PRP1 0x0 PRP2 0x0 00:32:17.492 [2024-10-13 01:42:51.773273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.492 [2024-10-13 01:42:51.773285] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.492 [2024-10-13 01:42:51.773296] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.492 [2024-10-13 01:42:51.773307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99160 len:8 PRP1 0x0 PRP2 0x0 00:32:17.492 [2024-10-13 01:42:51.773319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.492 [2024-10-13 01:42:51.773331] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.492 [2024-10-13 01:42:51.773341] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.492 [2024-10-13 01:42:51.773352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99168 len:8 PRP1 0x0 PRP2 0x0 00:32:17.493 [2024-10-13 01:42:51.773366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.493 [2024-10-13 01:42:51.773380] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.493 [2024-10-13 01:42:51.773391] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.493 [2024-10-13 01:42:51.773402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99176 len:8 PRP1 0x0 PRP2 0x0 00:32:17.493 [2024-10-13 01:42:51.773415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.493 [2024-10-13 01:42:51.773428] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.493 [2024-10-13 01:42:51.773438] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:32:17.493 [2024-10-13 01:42:51.773450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99184 len:8 PRP1 0x0 PRP2 0x0 00:32:17.493 [2024-10-13 01:42:51.773462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.493 [2024-10-13 01:42:51.773483] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.493 [2024-10-13 01:42:51.773496] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.493 [2024-10-13 01:42:51.773507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99192 len:8 PRP1 0x0 PRP2 0x0 00:32:17.493 [2024-10-13 01:42:51.773523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.493 [2024-10-13 01:42:51.773537] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.493 [2024-10-13 01:42:51.773549] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.493 [2024-10-13 01:42:51.773560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99200 len:8 PRP1 0x0 PRP2 0x0 00:32:17.493 [2024-10-13 01:42:51.773572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.493 [2024-10-13 01:42:51.773585] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.493 [2024-10-13 01:42:51.773596] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.493 [2024-10-13 01:42:51.773607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99208 len:8 PRP1 0x0 PRP2 0x0 00:32:17.493 [2024-10-13 01:42:51.773620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.493 [2024-10-13 01:42:51.773633] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.493 [2024-10-13 01:42:51.773644] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.493 [2024-10-13 01:42:51.773655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99216 len:8 PRP1 0x0 PRP2 0x0 00:32:17.493 [2024-10-13 01:42:51.773668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.493 [2024-10-13 01:42:51.773680] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.493 [2024-10-13 01:42:51.773691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.493 [2024-10-13 01:42:51.773702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99224 len:8 PRP1 0x0 PRP2 0x0 00:32:17.493 [2024-10-13 01:42:51.773715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.493 [2024-10-13 01:42:51.773728] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.493 [2024-10-13 01:42:51.773739] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.493 [2024-10-13 
01:42:51.773750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99232 len:8 PRP1 0x0 PRP2 0x0 00:32:17.493 [2024-10-13 01:42:51.773762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.493 [2024-10-13 01:42:51.773775] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.493 [2024-10-13 01:42:51.773786] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.493 [2024-10-13 01:42:51.773797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99240 len:8 PRP1 0x0 PRP2 0x0 00:32:17.493 [2024-10-13 01:42:51.773810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.493 [2024-10-13 01:42:51.773823] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.493 [2024-10-13 01:42:51.773834] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.493 [2024-10-13 01:42:51.773845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99248 len:8 PRP1 0x0 PRP2 0x0 00:32:17.493 [2024-10-13 01:42:51.773857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.493 [2024-10-13 01:42:51.773870] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.493 [2024-10-13 01:42:51.773881] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.493 [2024-10-13 01:42:51.773898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99256 len:8 PRP1 0x0 PRP2 0x0 00:32:17.493 [2024-10-13 01:42:51.773911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.493 [2024-10-13 01:42:51.773924] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.493 [2024-10-13 01:42:51.773935] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.493 [2024-10-13 01:42:51.773946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99264 len:8 PRP1 0x0 PRP2 0x0 00:32:17.493 [2024-10-13 01:42:51.773958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.493 [2024-10-13 01:42:51.773971] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.493 [2024-10-13 01:42:51.773983] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.493 [2024-10-13 01:42:51.773994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99272 len:8 PRP1 0x0 PRP2 0x0 00:32:17.493 [2024-10-13 01:42:51.774006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.493 [2024-10-13 01:42:51.774019] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.493 [2024-10-13 01:42:51.774029] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.493 [2024-10-13 01:42:51.774040] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99280 len:8 PRP1 0x0 PRP2 0x0 00:32:17.493 [2024-10-13 01:42:51.774052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.493 [2024-10-13 01:42:51.774064] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.493 [2024-10-13 01:42:51.774075] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.493 [2024-10-13 01:42:51.774085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99288 len:8 PRP1 0x0 PRP2 0x0 00:32:17.493 [2024-10-13 01:42:51.774098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.493 [2024-10-13 01:42:51.780332] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.493 [2024-10-13 01:42:51.780360] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.493 [2024-10-13 01:42:51.780373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99296 len:8 PRP1 0x0 PRP2 0x0 00:32:17.493 [2024-10-13 01:42:51.780387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.493 [2024-10-13 01:42:51.780401] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.493 [2024-10-13 01:42:51.780412] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.493 [2024-10-13 01:42:51.780422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99304 len:8 PRP1 0x0 PRP2 0x0 00:32:17.493 [2024-10-13 01:42:51.780435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.493 [2024-10-13 01:42:51.780447] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.493 [2024-10-13 01:42:51.780457] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.493 [2024-10-13 01:42:51.780468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99312 len:8 PRP1 0x0 PRP2 0x0 00:32:17.493 [2024-10-13 01:42:51.780490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.493 [2024-10-13 01:42:51.780509] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.493 [2024-10-13 01:42:51.780521] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.493 [2024-10-13 01:42:51.780532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99320 len:8 PRP1 0x0 PRP2 0x0 00:32:17.493 [2024-10-13 01:42:51.780544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.493 [2024-10-13 01:42:51.780557] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.493 [2024-10-13 01:42:51.780568] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.493 [2024-10-13 01:42:51.780578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:99328 len:8 PRP1 0x0 PRP2 0x0 00:32:17.493 [2024-10-13 01:42:51.780590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.493 [2024-10-13 01:42:51.780603] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.493 [2024-10-13 01:42:51.780614] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.493 [2024-10-13 01:42:51.780625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99336 len:8 PRP1 0x0 PRP2 0x0 00:32:17.493 [2024-10-13 01:42:51.780637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.493 [2024-10-13 01:42:51.780649] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.493 [2024-10-13 01:42:51.780660] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.493 [2024-10-13 01:42:51.780670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99344 len:8 PRP1 0x0 PRP2 0x0 00:32:17.493 [2024-10-13 01:42:51.780682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.493 [2024-10-13 01:42:51.780694] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.494 [2024-10-13 01:42:51.780706] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.494 [2024-10-13 01:42:51.780717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99352 len:8 PRP1 0x0 PRP2 0x0 00:32:17.494 [2024-10-13 01:42:51.780729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.494 [2024-10-13 01:42:51.780742] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.494 [2024-10-13 01:42:51.780752] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.494 [2024-10-13 01:42:51.780763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99360 len:8 PRP1 0x0 PRP2 0x0 00:32:17.494 [2024-10-13 01:42:51.780775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.494 [2024-10-13 01:42:51.780788] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.494 [2024-10-13 01:42:51.780798] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.494 [2024-10-13 01:42:51.780808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99368 len:8 PRP1 0x0 PRP2 0x0 00:32:17.494 [2024-10-13 01:42:51.780821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.494 [2024-10-13 01:42:51.780833] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.494 [2024-10-13 01:42:51.780843] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.494 [2024-10-13 01:42:51.780854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99376 len:8 PRP1 0x0 PRP2 0x0 
00:32:17.494 [2024-10-13 01:42:51.780866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.494 [2024-10-13 01:42:51.780882] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.494 [2024-10-13 01:42:51.780893] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.494 [2024-10-13 01:42:51.780904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99384 len:8 PRP1 0x0 PRP2 0x0 00:32:17.494 [2024-10-13 01:42:51.780916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.494 [2024-10-13 01:42:51.780928] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.494 [2024-10-13 01:42:51.780939] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.494 [2024-10-13 01:42:51.780949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99392 len:8 PRP1 0x0 PRP2 0x0 00:32:17.494 [2024-10-13 01:42:51.780961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.494 [2024-10-13 01:42:51.780974] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.494 [2024-10-13 01:42:51.780984] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.494 [2024-10-13 01:42:51.780994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99400 len:8 PRP1 0x0 PRP2 0x0 00:32:17.494 [2024-10-13 01:42:51.781006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.494 [2024-10-13 01:42:51.781020] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.494 [2024-10-13 01:42:51.781030] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.494 [2024-10-13 01:42:51.781041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99408 len:8 PRP1 0x0 PRP2 0x0 00:32:17.494 [2024-10-13 01:42:51.781053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.494 [2024-10-13 01:42:51.781065] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.494 [2024-10-13 01:42:51.781076] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.494 [2024-10-13 01:42:51.781087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99416 len:8 PRP1 0x0 PRP2 0x0 00:32:17.494 [2024-10-13 01:42:51.781099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.494 [2024-10-13 01:42:51.781111] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.494 [2024-10-13 01:42:51.781121] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.494 [2024-10-13 01:42:51.781132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99424 len:8 PRP1 0x0 PRP2 0x0 00:32:17.494 [2024-10-13 01:42:51.781144] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.494 [2024-10-13 01:42:51.781157] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.494 [2024-10-13 01:42:51.781167] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.494 [2024-10-13 01:42:51.781178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99432 len:8 PRP1 0x0 PRP2 0x0 00:32:17.494 [2024-10-13 01:42:51.781190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.494 [2024-10-13 01:42:51.781202] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.494 [2024-10-13 01:42:51.781213] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.494 [2024-10-13 01:42:51.781227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99440 len:8 PRP1 0x0 PRP2 0x0 00:32:17.494 [2024-10-13 01:42:51.781240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.494 [2024-10-13 01:42:51.781252] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.494 [2024-10-13 01:42:51.781263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.494 [2024-10-13 01:42:51.781273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99448 len:8 PRP1 0x0 PRP2 0x0 00:32:17.494 [2024-10-13 01:42:51.781285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.494 [2024-10-13 01:42:51.781297] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.494 [2024-10-13 01:42:51.781308] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.494 [2024-10-13 01:42:51.781319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99456 len:8 PRP1 0x0 PRP2 0x0 00:32:17.494 [2024-10-13 01:42:51.781330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.494 [2024-10-13 01:42:51.781343] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.494 [2024-10-13 01:42:51.781353] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.494 [2024-10-13 01:42:51.781364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99464 len:8 PRP1 0x0 PRP2 0x0 00:32:17.494 [2024-10-13 01:42:51.781376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.494 [2024-10-13 01:42:51.781389] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.494 [2024-10-13 01:42:51.781399] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.494 [2024-10-13 01:42:51.781409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99472 len:8 PRP1 0x0 PRP2 0x0 00:32:17.494 [2024-10-13 01:42:51.781421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.494 [2024-10-13 01:42:51.781433] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.494 [2024-10-13 01:42:51.781444] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.494 [2024-10-13 01:42:51.781455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99480 len:8 PRP1 0x0 PRP2 0x0 00:32:17.494 [2024-10-13 01:42:51.781466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.494 [2024-10-13 01:42:51.781492] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.494 [2024-10-13 01:42:51.781503] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.494 [2024-10-13 01:42:51.781514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99488 len:8 PRP1 0x0 PRP2 0x0 00:32:17.494 [2024-10-13 01:42:51.781526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.494 [2024-10-13 01:42:51.781538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.494 [2024-10-13 01:42:51.781549] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.494 [2024-10-13 01:42:51.781559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99496 len:8 PRP1 0x0 PRP2 0x0 00:32:17.494 [2024-10-13 01:42:51.781571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.494 [2024-10-13 01:42:51.781583] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.494 [2024-10-13 01:42:51.781598] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.494 [2024-10-13 01:42:51.781610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99504 len:8 PRP1 0x0 PRP2 0x0 00:32:17.494 [2024-10-13 01:42:51.781622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.494 [2024-10-13 01:42:51.781635] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.494 [2024-10-13 01:42:51.781645] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.494 [2024-10-13 01:42:51.781656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99512 len:8 PRP1 0x0 PRP2 0x0 00:32:17.494 [2024-10-13 01:42:51.781668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.494 [2024-10-13 01:42:51.781681] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.494 [2024-10-13 01:42:51.781691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.494 [2024-10-13 01:42:51.781702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99520 len:8 PRP1 0x0 PRP2 0x0 00:32:17.494 [2024-10-13 01:42:51.781714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:32:17.494 [2024-10-13 01:42:51.781726] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.494 [2024-10-13 01:42:51.781737] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.494 [2024-10-13 01:42:51.781747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99528 len:8 PRP1 0x0 PRP2 0x0 00:32:17.494 [2024-10-13 01:42:51.781759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.494 [2024-10-13 01:42:51.781772] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.494 [2024-10-13 01:42:51.781782] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.494 [2024-10-13 01:42:51.781793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99536 len:8 PRP1 0x0 PRP2 0x0 00:32:17.494 [2024-10-13 01:42:51.781805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.494 [2024-10-13 01:42:51.781818] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.495 [2024-10-13 01:42:51.781828] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.495 [2024-10-13 01:42:51.781839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99544 len:8 PRP1 0x0 PRP2 0x0 00:32:17.495 [2024-10-13 01:42:51.781851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.495 [2024-10-13 01:42:51.781864] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.495 [2024-10-13 01:42:51.781875] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.495 [2024-10-13 01:42:51.781885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99552 len:8 PRP1 0x0 PRP2 0x0 00:32:17.495 [2024-10-13 01:42:51.781897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.495 [2024-10-13 01:42:51.781909] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.495 [2024-10-13 01:42:51.781920] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.495 [2024-10-13 01:42:51.781930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99560 len:8 PRP1 0x0 PRP2 0x0 00:32:17.495 [2024-10-13 01:42:51.781942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.495 [2024-10-13 01:42:51.781959] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.495 [2024-10-13 01:42:51.781969] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.495 [2024-10-13 01:42:51.781980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99568 len:8 PRP1 0x0 PRP2 0x0 00:32:17.495 [2024-10-13 01:42:51.781993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.495 [2024-10-13 01:42:51.782005] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.495 [2024-10-13 01:42:51.782015] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.495 [2024-10-13 01:42:51.782026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99576 len:8 PRP1 0x0 PRP2 0x0 00:32:17.495 [2024-10-13 01:42:51.782038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.495 [2024-10-13 01:42:51.782050] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.495 [2024-10-13 01:42:51.782060] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.495 [2024-10-13 01:42:51.782070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99584 len:8 PRP1 0x0 PRP2 0x0 00:32:17.495 [2024-10-13 01:42:51.782082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.495 [2024-10-13 01:42:51.782094] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.495 [2024-10-13 01:42:51.782105] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.495 [2024-10-13 01:42:51.782115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99592 len:8 PRP1 0x0 PRP2 0x0 00:32:17.495 [2024-10-13 01:42:51.782127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.495 [2024-10-13 01:42:51.782140] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.495 [2024-10-13 01:42:51.782150] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.495 [2024-10-13 01:42:51.782161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98816 len:8 PRP1 0x0 PRP2 0x0 00:32:17.495 [2024-10-13 01:42:51.782173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.495 [2024-10-13 01:42:51.782185] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.495 [2024-10-13 01:42:51.782196] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.495 [2024-10-13 01:42:51.782206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98824 len:8 PRP1 0x0 PRP2 0x0 00:32:17.495 [2024-10-13 01:42:51.782219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.495 [2024-10-13 01:42:51.782231] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.495 [2024-10-13 01:42:51.782242] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.495 [2024-10-13 01:42:51.782252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98832 len:8 PRP1 0x0 PRP2 0x0 00:32:17.495 [2024-10-13 01:42:51.782264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.495 [2024-10-13 01:42:51.782276] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:32:17.495 [2024-10-13 01:42:51.782287] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.495 [2024-10-13 01:42:51.782298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98840 len:8 PRP1 0x0 PRP2 0x0 00:32:17.495 [2024-10-13 01:42:51.782313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.495 [2024-10-13 01:42:51.782326] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.495 [2024-10-13 01:42:51.782337] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.495 [2024-10-13 01:42:51.782348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98848 len:8 PRP1 0x0 PRP2 0x0 00:32:17.495 [2024-10-13 01:42:51.782360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.495 [2024-10-13 01:42:51.782372] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.495 [2024-10-13 01:42:51.782383] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.495 [2024-10-13 01:42:51.782393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98856 len:8 PRP1 0x0 PRP2 0x0 00:32:17.495 [2024-10-13 01:42:51.782405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.495 [2024-10-13 01:42:51.782418] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.495 [2024-10-13 01:42:51.782428] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.495 [2024-10-13 01:42:51.782439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98864 len:8 PRP1 0x0 PRP2 0x0 00:32:17.495 [2024-10-13 01:42:51.782450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.495 [2024-10-13 01:42:51.782463] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.495 [2024-10-13 01:42:51.782483] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.495 [2024-10-13 01:42:51.782495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99600 len:8 PRP1 0x0 PRP2 0x0 00:32:17.495 [2024-10-13 01:42:51.782507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.495 [2024-10-13 01:42:51.782519] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.495 [2024-10-13 01:42:51.782530] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.495 [2024-10-13 01:42:51.782541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99608 len:8 PRP1 0x0 PRP2 0x0 00:32:17.495 [2024-10-13 01:42:51.782553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.495 [2024-10-13 01:42:51.782566] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.495 [2024-10-13 
01:42:51.782582] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.495 [2024-10-13 01:42:51.782594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99616 len:8 PRP1 0x0 PRP2 0x0 00:32:17.495 [2024-10-13 01:42:51.782607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.495 [2024-10-13 01:42:51.782619] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.495 [2024-10-13 01:42:51.782630] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.495 [2024-10-13 01:42:51.782640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99624 len:8 PRP1 0x0 PRP2 0x0 00:32:17.495 [2024-10-13 01:42:51.782653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.495 [2024-10-13 01:42:51.782666] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.495 [2024-10-13 01:42:51.782676] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.495 [2024-10-13 01:42:51.782698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99632 len:8 PRP1 0x0 PRP2 0x0 00:32:17.495 [2024-10-13 01:42:51.782711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.495 [2024-10-13 01:42:51.782724] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.495 [2024-10-13 01:42:51.782735] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.495 [2024-10-13 01:42:51.782746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99640 len:8 PRP1 0x0 PRP2 0x0 00:32:17.495 [2024-10-13 01:42:51.782758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.495 [2024-10-13 01:42:51.782771] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.495 [2024-10-13 01:42:51.782781] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.495 [2024-10-13 01:42:51.782792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99648 len:8 PRP1 0x0 PRP2 0x0 00:32:17.495 [2024-10-13 01:42:51.782804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.495 [2024-10-13 01:42:51.782817] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.495 [2024-10-13 01:42:51.782828] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.495 [2024-10-13 01:42:51.782838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99656 len:8 PRP1 0x0 PRP2 0x0 00:32:17.495 [2024-10-13 01:42:51.782851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.495 [2024-10-13 01:42:51.782864] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.495 [2024-10-13 01:42:51.782874] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.495 [2024-10-13 01:42:51.782884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99664 len:8 PRP1 0x0 PRP2 0x0 00:32:17.495 [2024-10-13 01:42:51.782897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.495 [2024-10-13 01:42:51.782910] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.495 [2024-10-13 01:42:51.782920] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.495 [2024-10-13 01:42:51.782930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99672 len:8 PRP1 0x0 PRP2 0x0 00:32:17.495 [2024-10-13 01:42:51.782942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.495 [2024-10-13 01:42:51.782964] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.495 [2024-10-13 01:42:51.782979] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.495 [2024-10-13 01:42:51.782991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99680 len:8 PRP1 0x0 PRP2 0x0 00:32:17.496 [2024-10-13 01:42:51.783003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.496 [2024-10-13 01:42:51.783016] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.496 [2024-10-13 01:42:51.783026] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.496 [2024-10-13 01:42:51.783037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99688 len:8 PRP1 0x0 PRP2 0x0 00:32:17.496 [2024-10-13 01:42:51.783050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.496 [2024-10-13 01:42:51.783110] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1dd1510 was disconnected and freed. reset controller. 00:32:17.496 [2024-10-13 01:42:51.783133] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:32:17.496 [2024-10-13 01:42:51.783149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.496 [2024-10-13 01:42:51.783207] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d99470 (9): Bad file descriptor 00:32:17.496 [2024-10-13 01:42:51.786490] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.496 [2024-10-13 01:42:51.818793] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:32:17.496 8129.40 IOPS, 31.76 MiB/s [2024-10-12T23:43:03.074Z] 8186.67 IOPS, 31.98 MiB/s [2024-10-12T23:43:03.074Z] 8233.14 IOPS, 32.16 MiB/s [2024-10-12T23:43:03.074Z] 8269.50 IOPS, 32.30 MiB/s [2024-10-12T23:43:03.074Z] 8307.56 IOPS, 32.45 MiB/s [2024-10-12T23:43:03.074Z] [2024-10-13 01:42:56.362684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:25064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.496 [2024-10-13 01:42:56.362745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.496 [2024-10-13 01:42:56.362785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:25072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.496 [2024-10-13 01:42:56.362801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.496 [2024-10-13 01:42:56.362818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:25080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.496 [2024-10-13 01:42:56.362832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.496 [2024-10-13 01:42:56.362847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:25088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.496 [2024-10-13 01:42:56.362860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.496 [2024-10-13 01:42:56.362875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:25096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.496 [2024-10-13 01:42:56.362890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.496 [2024-10-13 01:42:56.362905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.496 [2024-10-13 01:42:56.362934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.496 [2024-10-13 01:42:56.362949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:25112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.496 [2024-10-13 01:42:56.362962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.496 [2024-10-13 01:42:56.362977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.496 [2024-10-13 01:42:56.362990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.496 [2024-10-13 01:42:56.363005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:25128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.496 [2024-10-13 01:42:56.363018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.496 [2024-10-13 01:42:56.363033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25136 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.496 [2024-10-13 01:42:56.363046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.496 [2024-10-13 01:42:56.363072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:25144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.496 [2024-10-13 01:42:56.363086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.496 [2024-10-13 01:42:56.363100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:25152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.496 [2024-10-13 01:42:56.363113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.496 [2024-10-13 01:42:56.363128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:25160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.496 [2024-10-13 01:42:56.363141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.496 [2024-10-13 01:42:56.363155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:25168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.496 [2024-10-13 01:42:56.363168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.496 [2024-10-13 01:42:56.363183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:25176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.496 [2024-10-13 01:42:56.363196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.496 [2024-10-13 01:42:56.363210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:25184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.496 [2024-10-13 01:42:56.363223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.496 [2024-10-13 01:42:56.363238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:25192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.496 [2024-10-13 01:42:56.363251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.496 [2024-10-13 01:42:56.363265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:25200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.496 [2024-10-13 01:42:56.363278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.496 [2024-10-13 01:42:56.363292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:25208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.496 [2024-10-13 01:42:56.363306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.496 [2024-10-13 01:42:56.363321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:25216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:17.496 [2024-10-13 01:42:56.363334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.496 [2024-10-13 01:42:56.363348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:25224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.496 [2024-10-13 01:42:56.363361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.496 [2024-10-13 01:42:56.363375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:25232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.496 [2024-10-13 01:42:56.363388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.496 [2024-10-13 01:42:56.363402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.496 [2024-10-13 01:42:56.363418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.496 [2024-10-13 01:42:56.363433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:24408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.496 [2024-10-13 01:42:56.363447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.496 [2024-10-13 01:42:56.363461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:24416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.496 [2024-10-13 01:42:56.363483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.496 [2024-10-13 01:42:56.363499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:24424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.496 [2024-10-13 01:42:56.363512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.496 [2024-10-13 01:42:56.363527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.496 [2024-10-13 01:42:56.363540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.496 [2024-10-13 01:42:56.363554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:24440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.496 [2024-10-13 01:42:56.363567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.496 [2024-10-13 01:42:56.363581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.496 [2024-10-13 01:42:56.363595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.496 [2024-10-13 01:42:56.363609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:25240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.496 [2024-10-13 01:42:56.363622] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.496 [2024-10-13 01:42:56.363636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:25248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.496 [2024-10-13 01:42:56.363649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.496 [2024-10-13 01:42:56.363664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:25256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.496 [2024-10-13 01:42:56.363677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.496 [2024-10-13 01:42:56.363691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:25264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.496 [2024-10-13 01:42:56.363704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.496 [2024-10-13 01:42:56.363718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:25272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.497 [2024-10-13 01:42:56.363731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.497 [2024-10-13 01:42:56.363746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:25280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.497 [2024-10-13 01:42:56.363759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.497 [2024-10-13 01:42:56.363783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.497 [2024-10-13 01:42:56.363797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.497 [2024-10-13 01:42:56.363812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:25296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.497 [2024-10-13 01:42:56.363825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.497 [2024-10-13 01:42:56.363840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:25304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.497 [2024-10-13 01:42:56.363853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.497 [2024-10-13 01:42:56.363867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:25312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.497 [2024-10-13 01:42:56.363880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.497 [2024-10-13 01:42:56.363895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:25320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.497 [2024-10-13 01:42:56.363908] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.497 [2024-10-13 01:42:56.363922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:25328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.497 [2024-10-13 01:42:56.363935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.497 [2024-10-13 01:42:56.363949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.497 [2024-10-13 01:42:56.363963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.497 [2024-10-13 01:42:56.363977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.497 [2024-10-13 01:42:56.363990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.497 [2024-10-13 01:42:56.364005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:25352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.497 [2024-10-13 01:42:56.364017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.497 [2024-10-13 01:42:56.364032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:25360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.497 [2024-10-13 01:42:56.364045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.497 [2024-10-13 01:42:56.364059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:25368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.497 [2024-10-13 01:42:56.364072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.497 [2024-10-13 01:42:56.364086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:25376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.497 [2024-10-13 01:42:56.364099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.497 [2024-10-13 01:42:56.364113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:25384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.497 [2024-10-13 01:42:56.364126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.497 [2024-10-13 01:42:56.364144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:25392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.497 [2024-10-13 01:42:56.364158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.497 [2024-10-13 01:42:56.364172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:25400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.497 [2024-10-13 01:42:56.364185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.497 [2024-10-13 01:42:56.364200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:24456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.497 [2024-10-13 01:42:56.364213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.497 [2024-10-13 01:42:56.364227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:24464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.497 [2024-10-13 01:42:56.364240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.497 [2024-10-13 01:42:56.364255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:24472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.497 [2024-10-13 01:42:56.364268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.497 [2024-10-13 01:42:56.364282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:24480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.497 [2024-10-13 01:42:56.364295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.497 [2024-10-13 01:42:56.364309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:24488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.497 [2024-10-13 01:42:56.364323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.497 [2024-10-13 01:42:56.364337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.497 [2024-10-13 01:42:56.364350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.497 [2024-10-13 01:42:56.364364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:24504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.497 [2024-10-13 01:42:56.364377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.497 [2024-10-13 01:42:56.364391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:24512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.497 [2024-10-13 01:42:56.364405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.497 [2024-10-13 01:42:56.364419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:24520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.497 [2024-10-13 01:42:56.364432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.497 [2024-10-13 01:42:56.364447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:24528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.497 [2024-10-13 01:42:56.364460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:32:17.497 [2024-10-13 01:42:56.364481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:24536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.497 [2024-10-13 01:42:56.364500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.497 [2024-10-13 01:42:56.364515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:24544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.497 [2024-10-13 01:42:56.364529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.497 [2024-10-13 01:42:56.364543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:24552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.497 [2024-10-13 01:42:56.364556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.497 [2024-10-13 01:42:56.364571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.497 [2024-10-13 01:42:56.364584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.497 [2024-10-13 01:42:56.364598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:24568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.497 [2024-10-13 01:42:56.364611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.497 [2024-10-13 01:42:56.364626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.497 [2024-10-13 01:42:56.364639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.497 [2024-10-13 01:42:56.364653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:24584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.497 [2024-10-13 01:42:56.364666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.497 [2024-10-13 01:42:56.364681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.497 [2024-10-13 01:42:56.364693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.497 [2024-10-13 01:42:56.364708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:24600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.498 [2024-10-13 01:42:56.364721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.498 [2024-10-13 01:42:56.364736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.498 [2024-10-13 01:42:56.364749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.498 [2024-10-13 01:42:56.364763] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.498 [2024-10-13 01:42:56.364777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.498 [2024-10-13 01:42:56.364791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:25408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.498 [2024-10-13 01:42:56.364803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.498 [2024-10-13 01:42:56.364818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:24624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.498 [2024-10-13 01:42:56.364831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.498 [2024-10-13 01:42:56.364849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:24632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.498 [2024-10-13 01:42:56.364863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.498 [2024-10-13 01:42:56.364877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:24640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.498 [2024-10-13 01:42:56.364890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.498 [2024-10-13 01:42:56.364904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:24648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.498 [2024-10-13 01:42:56.364917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.498 [2024-10-13 01:42:56.364932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.498 [2024-10-13 01:42:56.364945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.498 [2024-10-13 01:42:56.364959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.498 [2024-10-13 01:42:56.364972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.498 [2024-10-13 01:42:56.364986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.498 [2024-10-13 01:42:56.364999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.498 [2024-10-13 01:42:56.365029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.498 [2024-10-13 01:42:56.365043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.498 [2024-10-13 01:42:56.365058] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:24688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.498 [2024-10-13 01:42:56.365071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.498 [2024-10-13 01:42:56.365086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:24696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.498 [2024-10-13 01:42:56.365100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.498 [2024-10-13 01:42:56.365115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.498 [2024-10-13 01:42:56.365129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.498 [2024-10-13 01:42:56.365144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:24712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.498 [2024-10-13 01:42:56.365158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.498 [2024-10-13 01:42:56.365173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:24720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.498 [2024-10-13 01:42:56.365187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.498 [2024-10-13 01:42:56.365201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.498 [2024-10-13 01:42:56.365219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.498 [2024-10-13 01:42:56.365235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:24736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.498 [2024-10-13 01:42:56.365249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.498 [2024-10-13 01:42:56.365264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:25416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.498 [2024-10-13 01:42:56.365278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.498 [2024-10-13 01:42:56.365292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:24744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.498 [2024-10-13 01:42:56.365306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.498 [2024-10-13 01:42:56.365321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:24752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.498 [2024-10-13 01:42:56.365335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.498 [2024-10-13 01:42:56.365350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:88 nsid:1 lba:24760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.498 [2024-10-13 01:42:56.365363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.498 [2024-10-13 01:42:56.365378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.498 [2024-10-13 01:42:56.365392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.498 [2024-10-13 01:42:56.365407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:24776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.498 [2024-10-13 01:42:56.365420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.498 [2024-10-13 01:42:56.365435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:24784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.498 [2024-10-13 01:42:56.365448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.498 [2024-10-13 01:42:56.365463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:24792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.498 [2024-10-13 01:42:56.365484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.498 [2024-10-13 01:42:56.365500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:24800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.498 [2024-10-13 01:42:56.365514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.498 [2024-10-13 01:42:56.365529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.498 [2024-10-13 01:42:56.365543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.498 [2024-10-13 01:42:56.365558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:24816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.498 [2024-10-13 01:42:56.365572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.498 [2024-10-13 01:42:56.365591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:24824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.498 [2024-10-13 01:42:56.365612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.498 [2024-10-13 01:42:56.365628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:24832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.498 [2024-10-13 01:42:56.365642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.498 [2024-10-13 01:42:56.365657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24840 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.498 [2024-10-13 01:42:56.365670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.498 [2024-10-13 01:42:56.365685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:24848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.498 [2024-10-13 01:42:56.365699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.498 [2024-10-13 01:42:56.365714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:24856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.498 [2024-10-13 01:42:56.365727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.498 [2024-10-13 01:42:56.365742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:24864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.498 [2024-10-13 01:42:56.365755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.498 [2024-10-13 01:42:56.365770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.498 [2024-10-13 01:42:56.365784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.498 [2024-10-13 01:42:56.365799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:24880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.498 [2024-10-13 01:42:56.365812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.498 [2024-10-13 01:42:56.365827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:24888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.498 [2024-10-13 01:42:56.365840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.498 [2024-10-13 01:42:56.365856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:24896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.498 [2024-10-13 01:42:56.365870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.498 [2024-10-13 01:42:56.365884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:24904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.498 [2024-10-13 01:42:56.365898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.498 [2024-10-13 01:42:56.365913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.498 [2024-10-13 01:42:56.365927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.499 [2024-10-13 01:42:56.365942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:17.499 [2024-10-13 01:42:56.365956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.499 [2024-10-13 01:42:56.365974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:24928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.499 [2024-10-13 01:42:56.365989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.499 [2024-10-13 01:42:56.366004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:24936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.499 [2024-10-13 01:42:56.366018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.499 [2024-10-13 01:42:56.366033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.499 [2024-10-13 01:42:56.366046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.499 [2024-10-13 01:42:56.366061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:24952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.499 [2024-10-13 01:42:56.366079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.499 [2024-10-13 01:42:56.366095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:24960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.499 [2024-10-13 01:42:56.366108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.499 [2024-10-13 01:42:56.366123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:24968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.499 [2024-10-13 01:42:56.366137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.499 [2024-10-13 01:42:56.366151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.499 [2024-10-13 01:42:56.366165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.499 [2024-10-13 01:42:56.366180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:24984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.499 [2024-10-13 01:42:56.366194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.499 [2024-10-13 01:42:56.366209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:24992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.499 [2024-10-13 01:42:56.366222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.499 [2024-10-13 01:42:56.366237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:25000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.499 [2024-10-13 01:42:56.366251] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.499 [2024-10-13 01:42:56.366266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.499 [2024-10-13 01:42:56.366279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.499 [2024-10-13 01:42:56.366294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:25016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.499 [2024-10-13 01:42:56.366308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.499 [2024-10-13 01:42:56.366323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:25024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.499 [2024-10-13 01:42:56.366340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.499 [2024-10-13 01:42:56.366355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:25032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.499 [2024-10-13 01:42:56.366369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.499 [2024-10-13 01:42:56.366383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:25040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.499 [2024-10-13 01:42:56.366398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.499 [2024-10-13 01:42:56.366412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:25048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.499 [2024-10-13 01:42:56.366425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.499 [2024-10-13 01:42:56.366439] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbb750 is same with the state(6) to be set 00:32:17.499 [2024-10-13 01:42:56.366467] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.499 [2024-10-13 01:42:56.366486] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.499 [2024-10-13 01:42:56.366498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25056 len:8 PRP1 0x0 PRP2 0x0 00:32:17.499 [2024-10-13 01:42:56.366511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.499 [2024-10-13 01:42:56.366571] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1dbb750 was disconnected and freed. reset controller. 
00:32:17.499 [2024-10-13 01:42:56.366591] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:32:17.499 [2024-10-13 01:42:56.366626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.499 [2024-10-13 01:42:56.366645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.499 [2024-10-13 01:42:56.366659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.499 [2024-10-13 01:42:56.366672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.499 [2024-10-13 01:42:56.366685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.499 [2024-10-13 01:42:56.366698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.499 [2024-10-13 01:42:56.366711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.499 [2024-10-13 01:42:56.366723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.499 [2024-10-13 01:42:56.366736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:17.499 [2024-10-13 01:42:56.366796] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d99470 (9): Bad file descriptor 00:32:17.499 [2024-10-13 01:42:56.370031] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:17.499 [2024-10-13 01:42:56.446061] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
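The same abort-and-failover sequence repeats here for the 10.0.0.2:4422 to 10.0.0.2:4420 hop; this time the four outstanding admin ASYNC EVENT REQUESTs on qid:0 (cid 0-3) are also aborted before the controller is marked failed, disconnected, and successfully reset. A hedged way to confirm the controller survived such a reset, mirroring the bdev_nvme_get_controllers RPCs traced further down in this log; the rpc.py path, socket, and controller name are taken from the captured run:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0 && echo 'NVMe0 controller still attached'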
00:32:17.499 8250.90 IOPS, 32.23 MiB/s [2024-10-12T23:43:03.077Z] 8288.55 IOPS, 32.38 MiB/s [2024-10-12T23:43:03.077Z] 8309.83 IOPS, 32.46 MiB/s [2024-10-12T23:43:03.077Z] 8314.62 IOPS, 32.48 MiB/s [2024-10-12T23:43:03.077Z] 8330.00 IOPS, 32.54 MiB/s 00:32:17.499 Latency(us) 00:32:17.499 [2024-10-12T23:43:03.077Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:17.499 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:17.499 Verification LBA range: start 0x0 length 0x4000 00:32:17.499 NVMe0n1 : 15.00 8338.52 32.57 587.89 0.00 14308.14 555.24 22330.79 00:32:17.499 [2024-10-12T23:43:03.077Z] =================================================================================================================== 00:32:17.499 [2024-10-12T23:43:03.077Z] Total : 8338.52 32.57 587.89 0.00 14308.14 555.24 22330.79 00:32:17.499 Received shutdown signal, test time was about 15.000000 seconds 00:32:17.499 00:32:17.499 Latency(us) 00:32:17.499 [2024-10-12T23:43:03.077Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:17.499 [2024-10-12T23:43:03.077Z] =================================================================================================================== 00:32:17.499 [2024-10-12T23:43:03.077Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:17.499 01:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:32:17.499 01:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:32:17.499 01:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:32:17.499 01:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1722603 00:32:17.499 01:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:32:17.499 01:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1722603 /var/tmp/bdevperf.sock 00:32:17.499 01:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1722603 ']' 00:32:17.499 01:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:17.499 01:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:17.499 01:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:17.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
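The 15-second verify run above settles at 8338.52 IOPS with a 4096-byte I/O size, the harness confirms exactly three 'Resetting controller successful' messages, and it then launches a second bdevperf (pid 1722603) in RPC-wait mode on /var/tmp/bdevperf.sock with the same 128-deep 4 KiB verify workload for 1 second. A hedged restatement of the pass criterion traced above at host/failover.sh@65-67; the LOG variable is illustrative, the real script greps its own output file:

  LOG=/tmp/nvmf_failover_console.log                           # illustrative, as in the earlier sketch
  count=$(grep -c 'Resetting controller successful' "$LOG")
  if (( count != 3 )); then echo "unexpected reset count: $count"; exit 1; fi   # one successful reset per failover hop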
00:32:17.499 01:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:17.499 01:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:17.499 01:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:17.499 01:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:32:17.499 01:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:17.499 [2024-10-13 01:43:02.641672] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:17.499 01:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:17.499 [2024-10-13 01:43:02.906338] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:17.500 01:43:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:18.065 NVMe0n1 00:32:18.065 01:43:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:18.323 00:32:18.323 01:43:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:18.888 00:32:18.888 01:43:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:18.888 01:43:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:32:19.146 01:43:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:19.404 01:43:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:32:22.683 01:43:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:22.683 01:43:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:32:22.683 01:43:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1723389 00:32:22.683 01:43:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:22.683 01:43:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1723389 00:32:24.056 { 00:32:24.056 "results": [ 00:32:24.056 { 00:32:24.056 "job": "NVMe0n1", 00:32:24.056 "core_mask": "0x1", 
00:32:24.056 "workload": "verify", 00:32:24.056 "status": "finished", 00:32:24.056 "verify_range": { 00:32:24.057 "start": 0, 00:32:24.057 "length": 16384 00:32:24.057 }, 00:32:24.057 "queue_depth": 128, 00:32:24.057 "io_size": 4096, 00:32:24.057 "runtime": 1.009355, 00:32:24.057 "iops": 8526.237052375032, 00:32:24.057 "mibps": 33.30561348583997, 00:32:24.057 "io_failed": 0, 00:32:24.057 "io_timeout": 0, 00:32:24.057 "avg_latency_us": 14935.535293378434, 00:32:24.057 "min_latency_us": 2342.305185185185, 00:32:24.057 "max_latency_us": 16408.27259259259 00:32:24.057 } 00:32:24.057 ], 00:32:24.057 "core_count": 1 00:32:24.057 } 00:32:24.057 01:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:24.057 [2024-10-13 01:43:02.141541] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:32:24.057 [2024-10-13 01:43:02.141631] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1722603 ] 00:32:24.057 [2024-10-13 01:43:02.200481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:24.057 [2024-10-13 01:43:02.243843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:24.057 [2024-10-13 01:43:04.794412] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:24.057 [2024-10-13 01:43:04.794516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:24.057 [2024-10-13 01:43:04.794555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.057 [2024-10-13 01:43:04.794572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:24.057 [2024-10-13 01:43:04.794585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.057 [2024-10-13 01:43:04.794600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:24.057 [2024-10-13 01:43:04.794613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.057 [2024-10-13 01:43:04.794626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:24.057 [2024-10-13 01:43:04.794639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:24.057 [2024-10-13 01:43:04.794652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:32:24.057 [2024-10-13 01:43:04.794698] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:24.057 [2024-10-13 01:43:04.794730] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f2470 (9): Bad file descriptor 00:32:24.057 [2024-10-13 01:43:04.804074] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
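The 1-second run is reported twice: once as JSON from bdevperf.py perform_tests (the runtime, iops, mibps, and latency fields above) and once as the human-readable table inside try.txt that follows; the try.txt excerpt also shows this bdevperf instance riding out one more failover (10.0.0.2:4420 to 10.0.0.2:4421) before its reset succeeds. A hedged cross-check of the derived throughput field, pure arithmetic on numbers copied from the JSON above:

  awk 'BEGIN { printf "%.2f MiB/s\n", 8526.237052375032 * 4096 / 1048576 }'   # prints 33.31, matching the mibps field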
00:32:24.057 Running I/O for 1 seconds... 00:32:24.057 8478.00 IOPS, 33.12 MiB/s 00:32:24.057 Latency(us) 00:32:24.057 [2024-10-12T23:43:09.635Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:24.057 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:24.057 Verification LBA range: start 0x0 length 0x4000 00:32:24.057 NVMe0n1 : 1.01 8526.24 33.31 0.00 0.00 14935.54 2342.31 16408.27 00:32:24.057 [2024-10-12T23:43:09.635Z] =================================================================================================================== 00:32:24.057 [2024-10-12T23:43:09.635Z] Total : 8526.24 33.31 0.00 0.00 14935.54 2342.31 16408.27 00:32:24.057 01:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:24.057 01:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:32:24.057 01:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:24.315 01:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:24.315 01:43:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:32:24.573 01:43:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:24.830 01:43:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:32:28.110 01:43:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:28.110 01:43:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:32:28.110 01:43:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1722603 00:32:28.110 01:43:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1722603 ']' 00:32:28.110 01:43:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1722603 00:32:28.110 01:43:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:32:28.110 01:43:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:28.110 01:43:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1722603 00:32:28.110 01:43:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:28.110 01:43:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:28.110 01:43:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1722603' 00:32:28.110 killing process with pid 1722603 00:32:28.110 01:43:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1722603 00:32:28.110 01:43:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1722603 00:32:28.368 01:43:13 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:32:28.368 01:43:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:28.625 01:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:32:28.625 01:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:28.625 01:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:32:28.625 01:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:28.625 01:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:32:28.625 01:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:28.625 01:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:32:28.625 01:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:28.625 01:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:28.625 rmmod nvme_tcp 00:32:28.625 rmmod nvme_fabrics 00:32:28.625 rmmod nvme_keyring 00:32:28.625 01:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:28.625 01:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:32:28.625 01:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:32:28.625 01:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@515 -- # '[' -n 1720457 ']' 00:32:28.625 01:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # killprocess 1720457 00:32:28.625 01:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1720457 ']' 00:32:28.625 01:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1720457 00:32:28.625 01:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:32:28.625 01:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:28.625 01:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1720457 00:32:28.625 01:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:28.625 01:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:28.625 01:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1720457' 00:32:28.625 killing process with pid 1720457 00:32:28.625 01:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1720457 00:32:28.625 01:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1720457 00:32:28.884 01:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:28.884 01:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:28.884 01:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:28.884 01:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:32:28.884 01:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-save 00:32:28.884 01:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # grep -v 
SPDK_NVMF 00:32:28.884 01:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-restore 00:32:28.884 01:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:28.884 01:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:28.884 01:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:28.884 01:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:28.884 01:43:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:31.417 01:43:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:31.417 00:32:31.417 real 0m35.590s 00:32:31.417 user 2m6.356s 00:32:31.417 sys 0m5.657s 00:32:31.417 01:43:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:31.417 01:43:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:31.417 ************************************ 00:32:31.417 END TEST nvmf_failover 00:32:31.417 ************************************ 00:32:31.417 01:43:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:31.417 01:43:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:31.417 01:43:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:31.417 01:43:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.417 ************************************ 00:32:31.417 START TEST nvmf_host_discovery 00:32:31.417 ************************************ 00:32:31.417 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:31.417 * Looking for test storage... 
00:32:31.417 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:31.417 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:31.417 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:32:31.417 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:31.417 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:31.417 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:31.417 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:31.417 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:31.417 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:32:31.417 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:32:31.417 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:32:31.417 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:32:31.417 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:32:31.417 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:32:31.417 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:32:31.417 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:31.417 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:32:31.417 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:32:31.417 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:31.417 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:31.417 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:32:31.417 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:32:31.417 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:31.417 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:32:31.417 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:32:31.417 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:32:31.417 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:32:31.417 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:31.417 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:32:31.417 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:32:31.417 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:31.417 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:31.417 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:32:31.417 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:31.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:31.418 --rc genhtml_branch_coverage=1 00:32:31.418 --rc genhtml_function_coverage=1 00:32:31.418 --rc genhtml_legend=1 00:32:31.418 --rc geninfo_all_blocks=1 00:32:31.418 --rc geninfo_unexecuted_blocks=1 00:32:31.418 00:32:31.418 ' 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:31.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:31.418 --rc genhtml_branch_coverage=1 00:32:31.418 --rc genhtml_function_coverage=1 00:32:31.418 --rc genhtml_legend=1 00:32:31.418 --rc geninfo_all_blocks=1 00:32:31.418 --rc geninfo_unexecuted_blocks=1 00:32:31.418 00:32:31.418 ' 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:31.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:31.418 --rc genhtml_branch_coverage=1 00:32:31.418 --rc genhtml_function_coverage=1 00:32:31.418 --rc genhtml_legend=1 00:32:31.418 --rc geninfo_all_blocks=1 00:32:31.418 --rc geninfo_unexecuted_blocks=1 00:32:31.418 00:32:31.418 ' 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:31.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:31.418 --rc genhtml_branch_coverage=1 00:32:31.418 --rc genhtml_function_coverage=1 00:32:31.418 --rc genhtml_legend=1 00:32:31.418 --rc geninfo_all_blocks=1 00:32:31.418 --rc geninfo_unexecuted_blocks=1 00:32:31.418 00:32:31.418 ' 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:32:31.418 01:43:16 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:31.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:32:31.418 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:33.318 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:33.318 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:33.318 01:43:18 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:33.318 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:33.319 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:33.319 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:33.319 
01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:33.319 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:33.319 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:32:33.319 00:32:33.319 --- 10.0.0.2 ping statistics --- 00:32:33.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:33.319 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:33.319 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:33.319 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:32:33.319 00:32:33.319 --- 10.0.0.1 ping statistics --- 00:32:33.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:33.319 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # return 0 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # nvmfpid=1726004 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # waitforlisten 1726004 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1726004 ']' 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:33.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:33.319 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.319 [2024-10-13 01:43:18.819200] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
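Stripped of the xtrace noise, the nvmftestinit network plumbing traced above reduces to the following commands (a readable recap of what the log already shows; cvl_0_0 and cvl_0_1 are the two e810 ports found under 0000:0a:00.0 and 0000:0a:00.1):

  # Target port moves into its own network namespace; the initiator stays in the root ns.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                                    # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target ns -> initiator

Both pings come back with 0% loss, so the 10.0.0.0/24 link between the namespaces is up before the nvmf target application is started.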
00:32:33.319 [2024-10-13 01:43:18.819286] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:33.319 [2024-10-13 01:43:18.887040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:33.578 [2024-10-13 01:43:18.934189] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:33.578 [2024-10-13 01:43:18.934261] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:33.578 [2024-10-13 01:43:18.934277] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:33.578 [2024-10-13 01:43:18.934290] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:33.578 [2024-10-13 01:43:18.934302] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:33.578 [2024-10-13 01:43:18.934934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:33.578 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:33.578 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:32:33.578 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:33.578 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:33.578 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.578 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:33.578 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:33.578 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.578 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.578 [2024-10-13 01:43:19.079618] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:33.578 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.578 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:32:33.578 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.578 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.578 [2024-10-13 01:43:19.087842] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:33.578 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.578 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:32:33.578 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.578 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.578 null0 00:32:33.578 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.578 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:32:33.578 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.578 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.578 null1 00:32:33.578 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.578 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:32:33.578 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.578 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.578 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.578 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1726023 00:32:33.578 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:32:33.578 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1726023 /tmp/host.sock 00:32:33.578 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1726023 ']' 00:32:33.578 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:32:33.578 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:33.578 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:33.578 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:33.578 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:33.578 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.836 [2024-10-13 01:43:19.164618] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
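Condensed, the discovery test setup is: the target (nvmfpid 1726004, running inside cvl_0_0_ns_spdk) gets a TCP transport, a discovery listener on port 8009 and two null bdevs, and a second nvmf_tgt (hostpid 1726023) is started on /tmp/host.sock to act as the host that will run bdev_nvme_start_discovery, as the steps just below show. A sketch of the same sequence using rpc.py directly rather than the rpc_cmd wrapper, with paths abbreviated:

  # Target side (default RPC socket):
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
      -t tcp -a 10.0.0.2 -s 8009
  scripts/rpc.py bdev_null_create null0 1000 512
  scripts/rpc.py bdev_null_create null1 1000 512
  scripts/rpc.py bdev_wait_for_examine

  # Host side: a second nvmf_tgt reused as the discovery client, on its own RPC socket:
  build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
  scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test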
00:32:33.836 [2024-10-13 01:43:19.164697] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1726023 ] 00:32:33.836 [2024-10-13 01:43:19.228447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:33.836 [2024-10-13 01:43:19.277639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:33.836 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:33.836 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:32:33.836 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:33.836 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:32:33.836 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.836 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.836 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.836 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:32:33.836 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.836 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.836 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:33.836 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:32:33.837 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:32:33.837 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:33.837 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:33.837 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:33.837 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:33.837 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:33.837 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:34.096 [2024-10-13 01:43:19.673433] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:34.354 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.354 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:32:34.354 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:34.354 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:34.354 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.354 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:34.354 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:34.354 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:34.354 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.354 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:32:34.354 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:32:34.354 01:43:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:34.354 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:34.354 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.354 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:34.354 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:34.354 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:34.354 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.354 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:32:34.354 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:32:34.354 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:34.354 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:34.354 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:34.354 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:34.354 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:34.354 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:34.354 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:34.354 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:34.354 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:34.354 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.354 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:34.354 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.354 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:34.354 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:32:34.354 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:34.355 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:34.355 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:32:34.355 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.355 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:34.355 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.355 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:34.355 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:34.355 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:34.355 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:34.355 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:34.355 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:32:34.355 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:34.355 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:34.355 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:34.355 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:34.355 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:34.355 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:34.355 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:34.355 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:32:34.355 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:32:34.921 [2024-10-13 01:43:20.416969] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:34.921 [2024-10-13 01:43:20.417007] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:34.921 [2024-10-13 01:43:20.417032] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:35.179 
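Once nqn.2016-06.io.spdk:cnode0 is exposed on port 4420 and announced through the discovery log page above, the host-side discovery service attaches a controller named nvme0 and creates the nvme0n1 bdev, as the entries just below show. The waitforcondition loops in this trace poll for exactly that; stripped of xtrace, each probe is one RPC plus a jq filter (grouped here for readability):

  rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0
  rpc.py -s /tmp/host.sock bdev_get_bdevs            | jq -r '.[].name'    # expect: nvme0n1
  rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
      | jq -r '.[].ctrlrs[].trid.trsvcid'                                   # expect: 4420
  rpc.py -s /tmp/host.sock notify_get_notifications -i 0 | jq '. | length'  # notification count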
[2024-10-13 01:43:20.503299] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:35.179 [2024-10-13 01:43:20.601050] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:35.179 [2024-10-13 01:43:20.601075] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.438 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.438 01:43:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:35.438 01:43:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:32:35.438 01:43:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:35.438 01:43:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:35.438 01:43:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:32:35.438 01:43:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.697 01:43:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.697 01:43:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.697 01:43:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:35.697 01:43:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:35.697 01:43:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:35.697 01:43:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:35.697 01:43:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:35.697 01:43:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:35.697 01:43:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:35.697 01:43:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:35.697 01:43:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.697 01:43:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:35.697 01:43:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.697 01:43:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:35.697 01:43:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.697 01:43:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:35.697 01:43:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:35.697 01:43:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:32:35.697 01:43:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:35.697 01:43:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:35.697 01:43:21 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:35.697 01:43:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:35.697 01:43:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:35.697 01:43:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:35.697 01:43:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:35.697 01:43:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:32:35.697 01:43:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:35.697 01:43:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:35.697 01:43:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.697 01:43:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:35.697 01:43:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:35.697 01:43:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:32:35.697 01:43:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:35.697 01:43:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:32:36.632 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:36.632 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:36.632 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:36.632 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:32:36.632 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.632 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:36.632 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.632 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.632 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:36.632 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:36.632 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:36.632 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:36.632 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:32:36.632 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.632 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.632 [2024-10-13 01:43:22.160694] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:36.632 [2024-10-13 01:43:22.161500] bdev_nvme.c:7135:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:36.632 [2024-10-13 01:43:22.161561] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:36.632 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.632 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:36.632 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:36.632 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:36.632 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:36.632 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:36.632 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:32:36.632 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:36.632 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.632 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:36.632 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.632 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:36.632 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:36.632 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.632 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.632 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:36.632 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 
nvme0n2" ]]' 00:32:36.632 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:36.632 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:36.632 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:36.632 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:36.632 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:36.632 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:36.632 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:36.632 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.632 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.632 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:36.632 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:36.891 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:36.891 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:36.891 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:36.891 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:36.891 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:36.891 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:36.891 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:36.891 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:36.891 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:32:36.891 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:36.891 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:36.891 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.891 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:36.891 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:36.891 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:36.891 [2024-10-13 01:43:22.248747] bdev_nvme.c:7077:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:32:36.891 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:32:36.891 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:32:36.891 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:32:36.891 [2024-10-13 01:43:22.314757] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:36.891 [2024-10-13 01:43:22.314796] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:36.891 [2024-10-13 01:43:22.314814] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:37.894 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:37.894 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:37.894 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:32:37.894 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:37.894 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:37.894 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.894 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:37.894 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.894 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:37.894 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.894 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:32:37.894 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:37.894 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:32:37.894 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:37.894 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:37.894 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:37.894 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:37.894 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:37.894 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:37.894 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:37.894 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:37.894 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 
-- # jq '. | length' 00:32:37.894 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.894 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.894 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.894 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:37.894 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:37.894 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:37.894 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:37.894 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:37.894 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.894 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.894 [2024-10-13 01:43:23.376976] bdev_nvme.c:7135:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:37.894 [2024-10-13 01:43:23.377020] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:37.894 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.894 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:37.894 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:37.894 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:37.894 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:37.894 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:37.894 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:32:37.894 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:37.894 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:37.894 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.894 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.894 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:37.894 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:37.894 [2024-10-13 01:43:23.385539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:37.894 [2024-10-13 01:43:23.385587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.894 [2024-10-13 01:43:23.385606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:32:37.894 [2024-10-13 01:43:23.385620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.894 [2024-10-13 01:43:23.385635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:37.894 [2024-10-13 01:43:23.385648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.894 [2024-10-13 01:43:23.385662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:37.894 [2024-10-13 01:43:23.385676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.895 [2024-10-13 01:43:23.385689] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15452b0 is same with the state(6) to be set 00:32:37.895 [2024-10-13 01:43:23.395544] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15452b0 (9): Bad file descriptor 00:32:37.895 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.895 [2024-10-13 01:43:23.405588] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:37.895 [2024-10-13 01:43:23.405773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:37.895 [2024-10-13 01:43:23.405803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15452b0 with addr=10.0.0.2, port=4420 00:32:37.895 [2024-10-13 01:43:23.405820] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15452b0 is same with the state(6) to be set 00:32:37.895 [2024-10-13 01:43:23.405843] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15452b0 (9): Bad file descriptor 00:32:37.895 [2024-10-13 01:43:23.405865] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:37.895 [2024-10-13 01:43:23.405880] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:37.895 [2024-10-13 01:43:23.405904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:37.895 [2024-10-13 01:43:23.405926] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:37.895 [2024-10-13 01:43:23.415666] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:37.895 [2024-10-13 01:43:23.415869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:37.895 [2024-10-13 01:43:23.415898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15452b0 with addr=10.0.0.2, port=4420 00:32:37.895 [2024-10-13 01:43:23.415915] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15452b0 is same with the state(6) to be set 00:32:37.895 [2024-10-13 01:43:23.415938] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15452b0 (9): Bad file descriptor 00:32:37.895 [2024-10-13 01:43:23.415958] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:37.895 [2024-10-13 01:43:23.415972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:37.895 [2024-10-13 01:43:23.415985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:37.895 [2024-10-13 01:43:23.416005] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:37.895 [2024-10-13 01:43:23.425748] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:37.895 [2024-10-13 01:43:23.425959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:37.895 [2024-10-13 01:43:23.425988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15452b0 with addr=10.0.0.2, port=4420 00:32:37.895 [2024-10-13 01:43:23.426005] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15452b0 is same with the state(6) to be set 00:32:37.895 [2024-10-13 01:43:23.426029] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15452b0 (9): Bad file descriptor 00:32:37.895 [2024-10-13 01:43:23.426050] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:37.895 [2024-10-13 01:43:23.426064] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:37.895 [2024-10-13 01:43:23.426077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:37.895 [2024-10-13 01:43:23.426098] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:37.895 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.895 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:37.895 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:37.895 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:37.895 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:37.895 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:37.895 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:37.895 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:37.895 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:37.895 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:37.895 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:37.895 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.895 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:37.895 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:37.895 [2024-10-13 01:43:23.435842] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:37.895 [2024-10-13 01:43:23.436022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:37.895 [2024-10-13 01:43:23.436055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15452b0 with addr=10.0.0.2, port=4420 00:32:37.895 [2024-10-13 01:43:23.436074] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15452b0 is same with the state(6) to be set 00:32:37.895 [2024-10-13 01:43:23.436112] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15452b0 (9): Bad file descriptor 00:32:37.895 [2024-10-13 01:43:23.436139] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:37.895 [2024-10-13 01:43:23.436155] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:37.895 [2024-10-13 01:43:23.436170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:37.895 [2024-10-13 01:43:23.436192] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:37.895 [2024-10-13 01:43:23.445926] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:37.895 [2024-10-13 01:43:23.446073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:37.895 [2024-10-13 01:43:23.446104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15452b0 with addr=10.0.0.2, port=4420 00:32:37.895 [2024-10-13 01:43:23.446123] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15452b0 is same with the state(6) to be set 00:32:37.895 [2024-10-13 01:43:23.446148] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15452b0 (9): Bad file descriptor 00:32:37.895 [2024-10-13 01:43:23.446171] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:37.895 [2024-10-13 01:43:23.446187] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:37.895 [2024-10-13 01:43:23.446202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:37.895 [2024-10-13 01:43:23.446223] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:37.895 [2024-10-13 01:43:23.456004] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:37.895 [2024-10-13 01:43:23.456153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:37.895 [2024-10-13 01:43:23.456182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15452b0 with addr=10.0.0.2, port=4420 00:32:37.895 [2024-10-13 01:43:23.456199] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15452b0 is same with the state(6) to be set 00:32:37.895 [2024-10-13 01:43:23.456221] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15452b0 (9): Bad file descriptor 00:32:37.895 [2024-10-13 01:43:23.456242] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:37.895 [2024-10-13 01:43:23.456256] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:37.895 [2024-10-13 01:43:23.456270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:37.895 [2024-10-13 01:43:23.456290] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:37.895 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:37.895 [2024-10-13 01:43:23.464016] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:32:37.895 [2024-10-13 01:43:23.464053] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:37.895 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:37.895 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:37.895 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:37.895 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:37.895 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:37.895 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:37.895 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:32:38.154 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:32:38.154 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:38.154 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.154 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:38.154 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:38.154 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:38.154 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:38.154 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.154 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:32:38.154 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:38.154 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:32:38.154 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:38.154 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:38.154 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:38.154 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:38.154 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:38.154 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:32:38.154 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:38.154 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:38.154 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.154 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:38.154 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:38.154 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.154 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:38.154 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:38.154 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:38.154 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:38.154 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:32:38.154 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.154 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:38.154 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.154 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:32:38.154 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:32:38.154 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:38.154 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:38.154 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:32:38.154 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:32:38.154 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:38.154 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:38.154 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.154 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:38.154 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:38.154 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:38.154 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.154 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:32:38.154 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:38.154 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:32:38.154 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:32:38.154 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:38.154 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:38.154 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:32:38.154 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:32:38.155 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:38.155 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:38.155 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.155 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:38.155 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:38.155 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:38.155 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.155 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:32:38.155 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:38.155 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:32:38.155 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:32:38.155 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:38.155 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:38.155 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:32:38.155 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:32:38.155 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:38.155 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:32:38.155 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:38.155 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.155 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:38.155 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:38.155 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.155 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:32:38.155 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:32:38.155 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:32:38.155 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:32:38.155 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:38.155 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.155 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.531 [2024-10-13 01:43:24.718716] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:39.531 [2024-10-13 01:43:24.718752] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:39.531 [2024-10-13 01:43:24.718778] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:39.531 [2024-10-13 01:43:24.806063] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:32:39.531 [2024-10-13 01:43:24.954360] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:39.531 [2024-10-13 01:43:24.954413] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:39.531 01:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.531 01:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:39.531 01:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:32:39.531 01:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:39.531 01:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:39.531 01:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:39.531 01:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:39.531 01:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:39.531 01:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:39.531 01:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:32:39.531 01:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.531 request: 00:32:39.531 { 00:32:39.531 "name": "nvme", 00:32:39.531 "trtype": "tcp", 00:32:39.531 "traddr": "10.0.0.2", 00:32:39.531 "adrfam": "ipv4", 00:32:39.531 "trsvcid": "8009", 00:32:39.531 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:39.531 "wait_for_attach": true, 00:32:39.531 "method": "bdev_nvme_start_discovery", 00:32:39.531 "req_id": 1 00:32:39.531 } 00:32:39.531 Got JSON-RPC error response 00:32:39.531 response: 00:32:39.531 { 00:32:39.531 "code": -17, 00:32:39.531 "message": "File exists" 00:32:39.531 } 00:32:39.531 01:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:39.531 01:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:32:39.531 01:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:39.531 01:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:39.531 01:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:39.531 01:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:32:39.531 01:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:39.531 01:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:39.531 01:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.531 01:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:39.531 01:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.531 01:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:39.531 01:43:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.531 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:32:39.531 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:32:39.531 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:39.531 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:39.531 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.531 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.531 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:39.531 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:39.531 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.531 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:39.531 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:39.531 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local 
es=0 00:32:39.531 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:39.531 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:39.531 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:39.531 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:39.531 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:39.531 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:39.531 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.531 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.531 request: 00:32:39.531 { 00:32:39.531 "name": "nvme_second", 00:32:39.531 "trtype": "tcp", 00:32:39.531 "traddr": "10.0.0.2", 00:32:39.531 "adrfam": "ipv4", 00:32:39.531 "trsvcid": "8009", 00:32:39.531 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:39.531 "wait_for_attach": true, 00:32:39.531 "method": "bdev_nvme_start_discovery", 00:32:39.531 "req_id": 1 00:32:39.531 } 00:32:39.531 Got JSON-RPC error response 00:32:39.531 response: 00:32:39.531 { 00:32:39.531 "code": -17, 00:32:39.531 "message": "File exists" 00:32:39.531 } 00:32:39.531 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:39.531 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:32:39.531 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:39.531 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:39.531 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:39.531 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:32:39.531 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:39.531 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:39.531 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.531 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.531 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:39.531 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:39.531 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.531 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:32:39.531 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:32:39.531 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:39.531 01:43:25 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.531 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.531 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:39.531 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:39.531 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:39.789 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:39.789 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:39.789 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:39.789 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:32:39.789 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:39.789 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:39.789 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:39.789 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:39.789 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:39.789 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:39.789 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:39.789 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:40.722 [2024-10-13 01:43:26.153814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:40.722 [2024-10-13 01:43:26.153882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1575040 with addr=10.0.0.2, port=8010 00:32:40.722 [2024-10-13 01:43:26.153910] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:40.722 [2024-10-13 01:43:26.153926] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:40.722 [2024-10-13 01:43:26.153938] bdev_nvme.c:7221:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:41.656 [2024-10-13 01:43:27.156172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:41.656 [2024-10-13 01:43:27.156214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1575040 with addr=10.0.0.2, port=8010 00:32:41.656 [2024-10-13 01:43:27.156240] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:41.656 [2024-10-13 01:43:27.156255] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:41.656 [2024-10-13 01:43:27.156268] bdev_nvme.c:7221:discovery_poller: *ERROR*: 
Discovery[10.0.0.2:8010] could not start discovery connect 00:32:42.591 [2024-10-13 01:43:28.158477] bdev_nvme.c:7196:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:32:42.591 request: 00:32:42.591 { 00:32:42.591 "name": "nvme_second", 00:32:42.591 "trtype": "tcp", 00:32:42.591 "traddr": "10.0.0.2", 00:32:42.591 "adrfam": "ipv4", 00:32:42.591 "trsvcid": "8010", 00:32:42.591 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:42.591 "wait_for_attach": false, 00:32:42.591 "attach_timeout_ms": 3000, 00:32:42.591 "method": "bdev_nvme_start_discovery", 00:32:42.591 "req_id": 1 00:32:42.591 } 00:32:42.591 Got JSON-RPC error response 00:32:42.591 response: 00:32:42.591 { 00:32:42.591 "code": -110, 00:32:42.591 "message": "Connection timed out" 00:32:42.591 } 00:32:42.591 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:42.591 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:32:42.591 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:42.591 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:42.591 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:42.591 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:32:42.591 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:42.591 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:42.591 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:42.591 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:42.591 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.591 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:42.850 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:42.850 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:32:42.850 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:32:42.850 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1726023 00:32:42.850 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:32:42.850 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:42.850 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:32:42.850 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:42.850 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:32:42.850 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:42.850 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:42.850 rmmod nvme_tcp 00:32:42.850 rmmod nvme_fabrics 00:32:42.850 rmmod nvme_keyring 00:32:42.850 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:42.850 01:43:28 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:32:42.850 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:32:42.850 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@515 -- # '[' -n 1726004 ']' 00:32:42.850 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # killprocess 1726004 00:32:42.850 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 1726004 ']' 00:32:42.850 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 1726004 00:32:42.850 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:32:42.850 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:42.850 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1726004 00:32:42.850 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:42.850 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:42.850 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1726004' 00:32:42.850 killing process with pid 1726004 00:32:42.850 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 1726004 00:32:42.850 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 1726004 00:32:43.108 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:43.108 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:43.108 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:43.108 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:32:43.108 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-save 00:32:43.108 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:32:43.108 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:43.108 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:43.108 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:43.108 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:43.108 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:43.108 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:45.009 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:45.009 00:32:45.009 real 0m14.026s 00:32:45.009 user 0m20.866s 00:32:45.009 sys 0m2.727s 00:32:45.009 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:45.009 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:45.009 ************************************ 00:32:45.009 END TEST nvmf_host_discovery 00:32:45.009 ************************************ 
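The discovery test above exercises SPDK's discovery RPCs end to end: duplicate bdev_nvme_start_discovery calls against 10.0.0.2:8009 are rejected with -17 "File exists", and the 8010 attempt with a 3000 ms attach timeout fails with -110 "Connection timed out". As a minimal sketch (not part of the captured log), the same calls can be replayed by hand with scripts/rpc.py against the test's host socket; the rpc.py path, the /tmp/host.sock location, and the expected failures below are assumptions carried over from this particular run.

    # Illustrative replay only -- parameters copied from the logged JSON-RPC requests above.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed path from this run
    SOCK=/tmp/host.sock                                                    # host application socket used by the test

    # Start a discovery service on 10.0.0.2:8009; -w waits until the discovery controller is attached.
    $RPC -s $SOCK bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -w

    # A second start_discovery for the same 8009 endpoint is rejected with -17 "File exists",
    # which is exactly what the NOT wrapper in the test asserts.
    $RPC -s $SOCK bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -w || echo "expected: discovery already running (-17)"

    # Discovery on port 8010 with a 3000 ms attach timeout; with nothing answering there it
    # fails with -110 "Connection timed out", matching the log above.
    $RPC -s $SOCK bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -T 3000 || echo "expected: attach timed out (-110)"

    # State checks used throughout the test: discovery controller names and attached bdevs.
    $RPC -s $SOCK bdev_nvme_get_discovery_info | jq -r '.[].name'
    $RPC -s $SOCK bdev_get_bdevs | jq -r '.[].name'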
00:32:45.009 01:43:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:45.009 01:43:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:45.009 01:43:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:45.009 01:43:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.267 ************************************ 00:32:45.267 START TEST nvmf_host_multipath_status 00:32:45.267 ************************************ 00:32:45.267 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:45.267 * Looking for test storage... 00:32:45.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:45.267 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:45.267 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:32:45.267 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:45.267 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:45.267 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:45.267 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:45.267 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:45.267 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:32:45.267 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:32:45.267 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:32:45.267 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:32:45.267 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:32:45.267 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:32:45.267 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:32:45.267 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:45.267 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:32:45.267 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:32:45.267 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:45.267 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:45.267 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:32:45.267 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:32:45.267 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:45.267 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:32:45.267 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:45.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.268 --rc genhtml_branch_coverage=1 00:32:45.268 --rc genhtml_function_coverage=1 00:32:45.268 --rc genhtml_legend=1 00:32:45.268 --rc geninfo_all_blocks=1 00:32:45.268 --rc geninfo_unexecuted_blocks=1 00:32:45.268 00:32:45.268 ' 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:45.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.268 --rc genhtml_branch_coverage=1 00:32:45.268 --rc genhtml_function_coverage=1 00:32:45.268 --rc genhtml_legend=1 00:32:45.268 --rc geninfo_all_blocks=1 00:32:45.268 --rc geninfo_unexecuted_blocks=1 00:32:45.268 00:32:45.268 ' 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:45.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.268 --rc genhtml_branch_coverage=1 00:32:45.268 --rc genhtml_function_coverage=1 00:32:45.268 --rc genhtml_legend=1 00:32:45.268 --rc geninfo_all_blocks=1 00:32:45.268 --rc geninfo_unexecuted_blocks=1 00:32:45.268 00:32:45.268 ' 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:45.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.268 --rc genhtml_branch_coverage=1 00:32:45.268 --rc genhtml_function_coverage=1 00:32:45.268 --rc genhtml_legend=1 00:32:45.268 --rc geninfo_all_blocks=1 00:32:45.268 --rc geninfo_unexecuted_blocks=1 00:32:45.268 00:32:45.268 ' 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:45.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:32:45.268 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:47.170 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:47.170 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:32:47.170 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:47.170 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:47.170 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:47.170 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:47.170 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:47.170 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:32:47.170 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:47.170 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:32:47.170 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:32:47.170 01:43:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:32:47.170 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:32:47.170 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:32:47.170 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:32:47.170 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:47.170 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:47.170 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:47.170 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:47.170 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:47.170 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:47.170 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:47.170 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:47.170 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:47.170 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:47.170 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:47.170 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:47.170 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:47.170 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:47.170 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:47.170 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:47.170 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:47.170 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:47.170 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:47.170 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:47.170 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:47.170 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:47.170 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:47.170 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:47.170 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:32:47.170 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:47.170 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:47.170 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:47.170 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:47.170 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:47.171 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:47.171 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:47.171 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:47.171 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:47.171 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:47.171 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:47.171 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:47.171 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:47.171 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:47.171 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:47.171 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:47.171 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:47.171 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:47.171 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:47.171 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:47.171 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:47.171 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:47.171 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:47.171 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:47.171 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:47.171 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:47.171 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:47.171 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:47.171 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:47.171 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: 
cvl_0_1' 00:32:47.171 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:47.171 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:47.171 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:47.171 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # is_hw=yes 00:32:47.171 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:47.171 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:47.171 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:47.171 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:47.171 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:47.171 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:47.171 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:47.171 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:47.171 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:47.171 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:47.171 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:47.171 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:47.171 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:47.171 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:47.171 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:47.171 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:47.171 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:47.171 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:47.429 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:47.429 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:47.429 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:47.429 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:47.429 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:47.429 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:47.429 01:43:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:47.429 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:47.429 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:47.429 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.353 ms 00:32:47.429 00:32:47.429 --- 10.0.0.2 ping statistics --- 00:32:47.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:47.429 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:32:47.429 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:47.429 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:47.429 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:32:47.429 00:32:47.429 --- 10.0.0.1 ping statistics --- 00:32:47.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:47.429 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:32:47.429 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:47.429 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # return 0 00:32:47.429 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:47.429 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:47.429 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:47.429 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:47.429 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:47.430 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:47.430 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:47.430 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:32:47.430 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:47.430 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:47.430 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:47.430 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # nvmfpid=1729192 00:32:47.430 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:32:47.430 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # waitforlisten 1729192 00:32:47.430 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1729192 ']' 00:32:47.430 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:47.430 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:47.430 01:43:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:47.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:47.430 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:47.430 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:47.430 [2024-10-13 01:43:32.921406] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:32:47.430 [2024-10-13 01:43:32.921509] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:47.430 [2024-10-13 01:43:32.989016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:47.687 [2024-10-13 01:43:33.040177] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:47.687 [2024-10-13 01:43:33.040230] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:47.687 [2024-10-13 01:43:33.040244] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:47.687 [2024-10-13 01:43:33.040254] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:47.687 [2024-10-13 01:43:33.040268] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:47.687 [2024-10-13 01:43:33.041871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:47.687 [2024-10-13 01:43:33.041876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:47.687 01:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:47.687 01:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:32:47.688 01:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:47.688 01:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:47.688 01:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:47.688 01:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:47.688 01:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1729192 00:32:47.688 01:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:47.945 [2024-10-13 01:43:33.469230] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:47.945 01:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:48.204 Malloc0 00:32:48.462 01:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:32:48.720 01:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:48.978 01:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:49.236 [2024-10-13 01:43:34.681293] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:49.236 01:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:49.494 [2024-10-13 01:43:34.945985] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:49.494 01:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1729483 00:32:49.494 01:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:32:49.494 01:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:49.494 01:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1729483 /var/tmp/bdevperf.sock 00:32:49.494 01:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1729483 ']' 00:32:49.494 01:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:49.494 01:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:49.494 01:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:49.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:32:49.494 01:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:49.494 01:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:49.752 01:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:49.752 01:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:32:49.752 01:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:32:50.010 01:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:32:50.575 Nvme0n1 00:32:50.575 01:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:32:51.141 Nvme0n1 00:32:51.141 01:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:32:51.141 01:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:32:53.040 01:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:32:53.040 01:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:32:53.297 01:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:53.556 01:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:32:54.490 01:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:32:54.490 01:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:54.490 01:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:54.490 01:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:55.056 01:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:55.056 01:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:55.056 01:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:55.056 01:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:55.056 01:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:55.056 01:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:55.056 01:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:55.056 01:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:55.315 01:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:55.315 01:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:55.315 01:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:55.315 01:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:55.574 01:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:55.574 01:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:55.574 01:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:55.574 01:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:56.140 01:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:56.140 01:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:56.140 01:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:56.140 01:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:56.140 01:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:56.140 01:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:32:56.140 01:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
00:32:56.398 01:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:56.656 01:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:32:58.030 01:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:32:58.030 01:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:32:58.030 01:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:58.030 01:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:58.030 01:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:58.030 01:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:58.030 01:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:58.030 01:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:58.288 01:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:58.288 01:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:58.288 01:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:58.288 01:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:58.546 01:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:58.546 01:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:58.546 01:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:58.546 01:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:58.805 01:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:58.805 01:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:58.805 01:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:32:58.805 01:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:59.370 01:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:59.370 01:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:59.370 01:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:59.370 01:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:59.370 01:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:59.370 01:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:32:59.370 01:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:59.629 01:43:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:00.195 01:43:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:33:01.128 01:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:33:01.128 01:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:01.128 01:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:01.128 01:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:01.386 01:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:01.386 01:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:01.386 01:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:01.386 01:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:01.644 01:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:01.644 01:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:01.644 01:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:01.644 01:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:01.902 01:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:01.902 01:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:01.902 01:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:01.902 01:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:02.160 01:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:02.160 01:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:02.160 01:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:02.160 01:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:02.418 01:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:02.418 01:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:02.418 01:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:02.418 01:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:02.676 01:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:02.676 01:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:33:02.676 01:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:02.934 01:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:03.192 01:43:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:33:04.125 01:43:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:33:04.125 01:43:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:04.125 01:43:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:04.125 01:43:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:04.383 01:43:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:04.383 01:43:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:04.383 01:43:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:04.383 01:43:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:04.641 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:04.641 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:04.641 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:04.641 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:05.207 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:05.208 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:05.208 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.208 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:05.208 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:05.208 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:05.208 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.208 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:05.466 01:43:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:05.466 01:43:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:05.466 01:43:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.466 01:43:51 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:06.032 01:43:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:06.032 01:43:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:33:06.032 01:43:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:06.032 01:43:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:06.289 01:43:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:33:07.662 01:43:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:33:07.662 01:43:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:07.662 01:43:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:07.662 01:43:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:07.662 01:43:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:07.662 01:43:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:07.662 01:43:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:07.662 01:43:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:07.920 01:43:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:07.920 01:43:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:07.920 01:43:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:07.920 01:43:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:08.177 01:43:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:08.177 01:43:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:08.177 01:43:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:08.177 01:43:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:08.435 01:43:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:08.435 01:43:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:08.435 01:43:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:08.435 01:43:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:08.692 01:43:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:08.692 01:43:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:08.692 01:43:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:08.692 01:43:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:08.951 01:43:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:08.951 01:43:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:33:08.951 01:43:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:09.250 01:43:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:09.533 01:43:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:33:10.466 01:43:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:33:10.466 01:43:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:10.466 01:43:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.466 01:43:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:11.031 01:43:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:11.031 01:43:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:11.031 01:43:56 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:11.031 01:43:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:11.289 01:43:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:11.289 01:43:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:11.289 01:43:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:11.289 01:43:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:11.546 01:43:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:11.546 01:43:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:11.546 01:43:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:11.546 01:43:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:11.804 01:43:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:11.804 01:43:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:11.804 01:43:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:11.804 01:43:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:12.062 01:43:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:12.062 01:43:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:12.062 01:43:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:12.062 01:43:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:12.319 01:43:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:12.319 01:43:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:33:12.577 01:43:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:33:12.577 01:43:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:12.834 01:43:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:13.091 01:43:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:33:14.025 01:43:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:33:14.025 01:43:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:14.025 01:43:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:14.025 01:43:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:14.590 01:43:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:14.590 01:43:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:14.590 01:43:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:14.590 01:43:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:14.590 01:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:14.590 01:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:14.848 01:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:14.848 01:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:15.106 01:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:15.106 01:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:15.106 01:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:15.106 01:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:15.364 01:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:15.364 01:44:00 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:15.364 01:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:15.364 01:44:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:15.621 01:44:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:15.621 01:44:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:15.621 01:44:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:15.621 01:44:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:15.879 01:44:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:15.879 01:44:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:33:15.879 01:44:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:16.137 01:44:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:16.394 01:44:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:33:17.327 01:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:33:17.327 01:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:17.327 01:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:17.327 01:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:17.585 01:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:17.585 01:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:17.585 01:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:17.585 01:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:17.842 01:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:17.842 01:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:17.842 01:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:17.842 01:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:18.100 01:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:18.100 01:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:18.100 01:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:18.100 01:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:18.358 01:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:18.358 01:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:18.358 01:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:18.358 01:44:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:18.923 01:44:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:18.923 01:44:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:18.923 01:44:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:18.923 01:44:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:18.923 01:44:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:18.924 01:44:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:33:18.924 01:44:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:19.181 01:44:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:19.754 01:44:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
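Each check_status call above (multipath_status.sh@68-@73) is simply six port_status checks covering current/connected/accessible for both listeners; note that once the multipath policy is switched to active_active at @116, both paths are expected to report current==true, whereas before that only one path is current at a time. A minimal sketch of the helper as implied by the argument order in the trace (a reconstruction, not the verbatim script):

  # Hypothetical reconstruction of check_status from the @68-@73 trace lines above.
  check_status() {
      # args: 4420-current 4421-current 4420-connected 4421-connected 4420-accessible 4421-accessible
      port_status 4420 current    "$1" &&
      port_status 4421 current    "$2" &&
      port_status 4420 connected  "$3" &&
      port_status 4421 connected  "$4" &&
      port_status 4420 accessible "$5" &&
      port_status 4421 accessible "$6"
  }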
00:33:20.687 01:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:33:20.687 01:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:20.687 01:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:20.688 01:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:20.945 01:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:20.945 01:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:20.945 01:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:20.945 01:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:21.203 01:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:21.203 01:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:21.203 01:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.203 01:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:21.461 01:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:21.461 01:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:21.461 01:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.461 01:44:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:21.719 01:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:21.719 01:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:21.719 01:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.719 01:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:21.977 01:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:21.977 01:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:21.977 01:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.977 01:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:22.235 01:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:22.235 01:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:33:22.235 01:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:22.493 01:44:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:22.751 01:44:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:33:23.684 01:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:33:23.684 01:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:23.684 01:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:23.684 01:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:24.250 01:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:24.250 01:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:24.250 01:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:24.250 01:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:24.250 01:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:24.250 01:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:24.250 01:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:24.250 01:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:24.509 01:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:33:24.509 01:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:24.509 01:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:24.509 01:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:25.074 01:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:25.074 01:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:25.074 01:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:25.074 01:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:25.074 01:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:25.074 01:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:25.074 01:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:25.074 01:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:25.333 01:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:25.594 01:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1729483 00:33:25.594 01:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1729483 ']' 00:33:25.594 01:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1729483 00:33:25.594 01:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:33:25.594 01:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:25.594 01:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1729483 00:33:25.594 01:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:33:25.594 01:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:33:25.594 01:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1729483' 00:33:25.594 killing process with pid 1729483 00:33:25.594 01:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1729483 00:33:25.594 01:44:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1729483 00:33:25.594 { 00:33:25.594 "results": [ 00:33:25.594 { 00:33:25.594 "job": "Nvme0n1", 
00:33:25.594 "core_mask": "0x4", 00:33:25.594 "workload": "verify", 00:33:25.594 "status": "terminated", 00:33:25.594 "verify_range": { 00:33:25.594 "start": 0, 00:33:25.594 "length": 16384 00:33:25.594 }, 00:33:25.594 "queue_depth": 128, 00:33:25.594 "io_size": 4096, 00:33:25.594 "runtime": 34.281862, 00:33:25.594 "iops": 8030.952344420499, 00:33:25.594 "mibps": 31.370907595392573, 00:33:25.594 "io_failed": 0, 00:33:25.594 "io_timeout": 0, 00:33:25.595 "avg_latency_us": 15910.61186452954, 00:33:25.595 "min_latency_us": 213.90222222222224, 00:33:25.595 "max_latency_us": 4026531.84 00:33:25.595 } 00:33:25.595 ], 00:33:25.595 "core_count": 1 00:33:25.595 } 00:33:25.595 01:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1729483 00:33:25.595 01:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:25.595 [2024-10-13 01:43:35.006355] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:33:25.595 [2024-10-13 01:43:35.006436] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1729483 ] 00:33:25.595 [2024-10-13 01:43:35.064343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:25.595 [2024-10-13 01:43:35.111841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:25.595 Running I/O for 90 seconds... 00:33:25.595 8496.00 IOPS, 33.19 MiB/s [2024-10-12T23:44:11.173Z] 8576.50 IOPS, 33.50 MiB/s [2024-10-12T23:44:11.173Z] 8556.33 IOPS, 33.42 MiB/s [2024-10-12T23:44:11.173Z] 8577.75 IOPS, 33.51 MiB/s [2024-10-12T23:44:11.173Z] 8613.60 IOPS, 33.65 MiB/s [2024-10-12T23:44:11.173Z] 8608.17 IOPS, 33.63 MiB/s [2024-10-12T23:44:11.173Z] 8609.71 IOPS, 33.63 MiB/s [2024-10-12T23:44:11.173Z] 8600.62 IOPS, 33.60 MiB/s [2024-10-12T23:44:11.173Z] 8592.44 IOPS, 33.56 MiB/s [2024-10-12T23:44:11.173Z] 8587.20 IOPS, 33.54 MiB/s [2024-10-12T23:44:11.173Z] 8589.09 IOPS, 33.55 MiB/s [2024-10-12T23:44:11.173Z] 8585.17 IOPS, 33.54 MiB/s [2024-10-12T23:44:11.173Z] 8579.38 IOPS, 33.51 MiB/s [2024-10-12T23:44:11.173Z] 8578.36 IOPS, 33.51 MiB/s [2024-10-12T23:44:11.173Z] [2024-10-13 01:43:51.575533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:107928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.595 [2024-10-13 01:43:51.575592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.595 [2024-10-13 01:43:51.575673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:107992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.595 [2024-10-13 01:43:51.575703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:25.595 [2024-10-13 01:43:51.575742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:108000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.595 [2024-10-13 01:43:51.575792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:25.595 [2024-10-13 01:43:51.575828] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:108008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.595 [2024-10-13 01:43:51.575855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:25.595 [2024-10-13 01:43:51.575891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:108016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.595 [2024-10-13 01:43:51.575918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:25.595 [2024-10-13 01:43:51.575954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:108024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.595 [2024-10-13 01:43:51.575979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:25.595 [2024-10-13 01:43:51.576015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:108032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.595 [2024-10-13 01:43:51.576041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:25.595 [2024-10-13 01:43:51.576077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:108040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.595 [2024-10-13 01:43:51.576102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:25.595 [2024-10-13 01:43:51.576134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:108048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.595 [2024-10-13 01:43:51.576160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:25.595 [2024-10-13 01:43:51.576209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:108056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.595 [2024-10-13 01:43:51.576236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:25.595 [2024-10-13 01:43:51.576270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:108064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.595 [2024-10-13 01:43:51.576296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:25.595 [2024-10-13 01:43:51.576333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:108072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.595 [2024-10-13 01:43:51.576359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:25.595 [2024-10-13 01:43:51.576396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:108080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.595 [2024-10-13 01:43:51.576423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:25.595 [2024-10-13 
01:43:51.576457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:108088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.595 [2024-10-13 01:43:51.576510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:25.595 [2024-10-13 01:43:51.576548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:108096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.595 [2024-10-13 01:43:51.576576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:25.595 [2024-10-13 01:43:51.576613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:108104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.595 [2024-10-13 01:43:51.576640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:25.595 [2024-10-13 01:43:51.576679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:108112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.595 [2024-10-13 01:43:51.576706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:25.595 [2024-10-13 01:43:51.576743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:108120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.595 [2024-10-13 01:43:51.576779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:25.595 [2024-10-13 01:43:51.576829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:108128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.595 [2024-10-13 01:43:51.576855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:25.595 [2024-10-13 01:43:51.576890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:108136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.595 [2024-10-13 01:43:51.576917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:25.595 [2024-10-13 01:43:51.576954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:108144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.595 [2024-10-13 01:43:51.576980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:25.595 [2024-10-13 01:43:51.577015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:108152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.595 [2024-10-13 01:43:51.577063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:25.595 [2024-10-13 01:43:51.577099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:108160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.595 [2024-10-13 01:43:51.577126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:55 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:25.595 [2024-10-13 01:43:51.577161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:108168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.595 [2024-10-13 01:43:51.577187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:25.595 [2024-10-13 01:43:51.577223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:108176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.595 [2024-10-13 01:43:51.577251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:25.595 [2024-10-13 01:43:51.577288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:108184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.595 [2024-10-13 01:43:51.577314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:25.595 [2024-10-13 01:43:51.577350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:108192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.595 [2024-10-13 01:43:51.577377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:25.595 [2024-10-13 01:43:51.577413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:108200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.595 [2024-10-13 01:43:51.577440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:25.595 [2024-10-13 01:43:51.577489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:108208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.595 [2024-10-13 01:43:51.577517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:25.595 [2024-10-13 01:43:51.577557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:108216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.595 [2024-10-13 01:43:51.577584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:25.595 [2024-10-13 01:43:51.577622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:108224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.595 [2024-10-13 01:43:51.577648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:25.595 [2024-10-13 01:43:51.577686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:108232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.595 [2024-10-13 01:43:51.577712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:25.596 [2024-10-13 01:43:51.577751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:108240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.596 [2024-10-13 01:43:51.577792] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.596 [2024-10-13 01:43:51.577829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:108248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.596 [2024-10-13 01:43:51.577861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:25.596 [2024-10-13 01:43:51.578289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:108256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.596 [2024-10-13 01:43:51.578320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:25.596 [2024-10-13 01:43:51.578366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:108264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.596 [2024-10-13 01:43:51.578394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:25.596 [2024-10-13 01:43:51.578435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:108272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.596 [2024-10-13 01:43:51.578468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:25.596 [2024-10-13 01:43:51.578523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:108280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.596 [2024-10-13 01:43:51.578551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:25.596 [2024-10-13 01:43:51.578590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:108288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.596 [2024-10-13 01:43:51.578618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:25.596 [2024-10-13 01:43:51.578656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:108296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.596 [2024-10-13 01:43:51.578683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:25.596 [2024-10-13 01:43:51.578723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:108304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.596 [2024-10-13 01:43:51.578751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:25.596 [2024-10-13 01:43:51.578810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:108312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.596 [2024-10-13 01:43:51.578841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:25.596 [2024-10-13 01:43:51.578878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:108320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:25.596 [2024-10-13 01:43:51.578904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:25.596 [2024-10-13 01:43:51.578941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:108328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.596 [2024-10-13 01:43:51.578968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:25.596 [2024-10-13 01:43:51.579020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:108336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.596 [2024-10-13 01:43:51.579048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:25.596 [2024-10-13 01:43:51.579089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:108344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.596 [2024-10-13 01:43:51.579115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:25.596 [2024-10-13 01:43:51.579162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:108352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.596 [2024-10-13 01:43:51.579190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:25.596 [2024-10-13 01:43:51.579229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:108360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.596 [2024-10-13 01:43:51.579257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:25.596 [2024-10-13 01:43:51.579295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:108368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.596 [2024-10-13 01:43:51.579323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:25.596 [2024-10-13 01:43:51.579360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:108376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.596 [2024-10-13 01:43:51.579388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:25.596 [2024-10-13 01:43:51.579427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:108384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.596 [2024-10-13 01:43:51.579454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:25.596 [2024-10-13 01:43:51.579504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:108392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.596 [2024-10-13 01:43:51.579533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:25.596 [2024-10-13 01:43:51.579574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 
nsid:1 lba:108400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.596 [2024-10-13 01:43:51.579601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:25.596 [2024-10-13 01:43:51.579642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:108408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.596 [2024-10-13 01:43:51.579669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:25.596 [2024-10-13 01:43:51.579710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:108416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.596 [2024-10-13 01:43:51.579737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:25.596 [2024-10-13 01:43:51.579779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:108424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.596 [2024-10-13 01:43:51.579808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:25.596 [2024-10-13 01:43:51.579861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:108432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.596 [2024-10-13 01:43:51.579887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:25.596 [2024-10-13 01:43:51.579925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:108440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.596 [2024-10-13 01:43:51.579950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:25.596 [2024-10-13 01:43:51.579995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:108448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.596 [2024-10-13 01:43:51.580021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:25.596 [2024-10-13 01:43:51.580060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:108456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.596 [2024-10-13 01:43:51.580086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:25.596 [2024-10-13 01:43:51.580124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:108464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.596 [2024-10-13 01:43:51.580152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:25.596 [2024-10-13 01:43:51.580190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:108472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.596 [2024-10-13 01:43:51.580218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:25.596 [2024-10-13 01:43:51.580254] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:108480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.596 [2024-10-13 01:43:51.580281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:25.596 [2024-10-13 01:43:51.580318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:108488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.596 [2024-10-13 01:43:51.580345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:25.596 [2024-10-13 01:43:51.580381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:108496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.596 [2024-10-13 01:43:51.580407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.596 [2024-10-13 01:43:51.580444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:108504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.596 [2024-10-13 01:43:51.580496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:25.596 [2024-10-13 01:43:51.580538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:108512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.596 [2024-10-13 01:43:51.580565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:25.596 [2024-10-13 01:43:51.580606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.596 [2024-10-13 01:43:51.580633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:25.596 [2024-10-13 01:43:51.580673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:108528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.596 [2024-10-13 01:43:51.580701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:25.596 [2024-10-13 01:43:51.580741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:108536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.596 [2024-10-13 01:43:51.580769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:25.596 [2024-10-13 01:43:51.580822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:108544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.597 [2024-10-13 01:43:51.580854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:25.597 [2024-10-13 01:43:51.580891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:108552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.597 [2024-10-13 01:43:51.580918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 
00:33:25.597 [2024-10-13 01:43:51.580956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:108560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.597 [2024-10-13 01:43:51.580984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:25.597 [2024-10-13 01:43:51.581021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:108568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.597 [2024-10-13 01:43:51.581046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:25.597 [2024-10-13 01:43:51.581085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:108576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.597 [2024-10-13 01:43:51.581112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:25.597 [2024-10-13 01:43:51.581152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:108584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.597 [2024-10-13 01:43:51.581178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:25.597 [2024-10-13 01:43:51.581216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:108592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.597 [2024-10-13 01:43:51.581250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:25.597 [2024-10-13 01:43:51.581289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:108600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.597 [2024-10-13 01:43:51.581316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:25.597 [2024-10-13 01:43:51.581353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:108608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.597 [2024-10-13 01:43:51.581380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:25.597 [2024-10-13 01:43:51.581417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:108616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.597 [2024-10-13 01:43:51.581443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:25.597 [2024-10-13 01:43:51.581514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:108624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.597 [2024-10-13 01:43:51.581541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:25.597 [2024-10-13 01:43:51.581580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:108632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.597 [2024-10-13 01:43:51.581608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:25.597 [2024-10-13 01:43:51.581647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:108640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.597 [2024-10-13 01:43:51.581679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:25.597 [2024-10-13 01:43:51.581720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:108648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.597 [2024-10-13 01:43:51.581757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:25.597 [2024-10-13 01:43:51.581811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:108656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.597 [2024-10-13 01:43:51.581839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:25.597 [2024-10-13 01:43:51.581877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:108664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.597 [2024-10-13 01:43:51.581904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:25.597 [2024-10-13 01:43:51.581943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:108672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.597 [2024-10-13 01:43:51.581970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:25.597 [2024-10-13 01:43:51.582007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:108680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.597 [2024-10-13 01:43:51.582033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:25.597 [2024-10-13 01:43:51.582071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:108688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.597 [2024-10-13 01:43:51.582097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:25.597 [2024-10-13 01:43:51.582134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:108696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.597 [2024-10-13 01:43:51.582161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:25.597 [2024-10-13 01:43:51.582199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:108704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.597 [2024-10-13 01:43:51.582225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:25.597 [2024-10-13 01:43:51.582264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:108712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.597 [2024-10-13 01:43:51.582290] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:25.597 [2024-10-13 01:43:51.582329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:108720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.597 [2024-10-13 01:43:51.582356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:25.597 [2024-10-13 01:43:51.582402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:108728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.597 [2024-10-13 01:43:51.582428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:25.597 [2024-10-13 01:43:51.582491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:108736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.597 [2024-10-13 01:43:51.582520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:25.597 [2024-10-13 01:43:51.582571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:108744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.597 [2024-10-13 01:43:51.582598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:25.597 [2024-10-13 01:43:51.582636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:108752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.597 [2024-10-13 01:43:51.582662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.597 [2024-10-13 01:43:51.582702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:108760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.597 [2024-10-13 01:43:51.582733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:25.597 [2024-10-13 01:43:51.582965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:108768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.597 [2024-10-13 01:43:51.582995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:25.597 [2024-10-13 01:43:51.583040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:108776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.597 [2024-10-13 01:43:51.583068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:25.597 [2024-10-13 01:43:51.583109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:108784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.597 [2024-10-13 01:43:51.583135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:25.597 [2024-10-13 01:43:51.583179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:108792 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:33:25.597 [2024-10-13 01:43:51.583205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:25.597 [2024-10-13 01:43:51.583249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:108800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.597 [2024-10-13 01:43:51.583275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:25.597 [2024-10-13 01:43:51.583317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:108808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.597 [2024-10-13 01:43:51.583345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:25.597 [2024-10-13 01:43:51.583388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:108816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.597 [2024-10-13 01:43:51.583414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:25.597 [2024-10-13 01:43:51.583456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:108824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.597 [2024-10-13 01:43:51.583513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:25.597 [2024-10-13 01:43:51.583558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:108832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.597 [2024-10-13 01:43:51.583587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:25.597 [2024-10-13 01:43:51.583638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:108840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.597 [2024-10-13 01:43:51.583667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:25.597 [2024-10-13 01:43:51.583710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:108848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.597 [2024-10-13 01:43:51.583738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:25.597 [2024-10-13 01:43:51.583804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:108856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.597 [2024-10-13 01:43:51.583830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:25.597 [2024-10-13 01:43:51.583872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:108864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.597 [2024-10-13 01:43:51.583899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:25.598 [2024-10-13 01:43:51.583941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:20 nsid:1 lba:107936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.598 [2024-10-13 01:43:51.583967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:25.598 [2024-10-13 01:43:51.584010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:107944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.598 [2024-10-13 01:43:51.584037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:25.598 [2024-10-13 01:43:51.584081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:107952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.598 [2024-10-13 01:43:51.584107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:25.598 [2024-10-13 01:43:51.584150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:107960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.598 [2024-10-13 01:43:51.584177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:25.598 [2024-10-13 01:43:51.584218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:107968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.598 [2024-10-13 01:43:51.584245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:25.598 [2024-10-13 01:43:51.584286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:107976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.598 [2024-10-13 01:43:51.584312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:25.598 [2024-10-13 01:43:51.584355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:107984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.598 [2024-10-13 01:43:51.584380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:25.598 [2024-10-13 01:43:51.584424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:108872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.598 [2024-10-13 01:43:51.584450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:25.598 [2024-10-13 01:43:51.584534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:108880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.598 [2024-10-13 01:43:51.584562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:25.598 [2024-10-13 01:43:51.584613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:108888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.598 [2024-10-13 01:43:51.584641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:25.598 [2024-10-13 01:43:51.584687] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:108896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.598 [2024-10-13 01:43:51.584715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:25.598 [2024-10-13 01:43:51.584760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:108904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.598 [2024-10-13 01:43:51.584802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:25.598 [2024-10-13 01:43:51.584848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:108912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.598 [2024-10-13 01:43:51.584874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:25.598 [2024-10-13 01:43:51.584915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:108920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.598 [2024-10-13 01:43:51.584941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:25.598 [2024-10-13 01:43:51.584984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:108928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.598 [2024-10-13 01:43:51.585011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:25.598 [2024-10-13 01:43:51.585054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:108936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.598 [2024-10-13 01:43:51.585081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:25.598 [2024-10-13 01:43:51.585122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:108944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.598 [2024-10-13 01:43:51.585149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:25.598 8545.27 IOPS, 33.38 MiB/s [2024-10-12T23:44:11.176Z] 8011.19 IOPS, 31.29 MiB/s [2024-10-12T23:44:11.176Z] 7539.94 IOPS, 29.45 MiB/s [2024-10-12T23:44:11.176Z] 7121.06 IOPS, 27.82 MiB/s [2024-10-12T23:44:11.176Z] 6769.05 IOPS, 26.44 MiB/s [2024-10-12T23:44:11.176Z] 6861.10 IOPS, 26.80 MiB/s [2024-10-12T23:44:11.176Z] 6944.67 IOPS, 27.13 MiB/s [2024-10-12T23:44:11.176Z] 7051.95 IOPS, 27.55 MiB/s [2024-10-12T23:44:11.176Z] 7236.96 IOPS, 28.27 MiB/s [2024-10-12T23:44:11.176Z] 7387.00 IOPS, 28.86 MiB/s [2024-10-12T23:44:11.176Z] 7526.28 IOPS, 29.40 MiB/s [2024-10-12T23:44:11.176Z] 7563.15 IOPS, 29.54 MiB/s [2024-10-12T23:44:11.176Z] 7597.22 IOPS, 29.68 MiB/s [2024-10-12T23:44:11.176Z] 7630.54 IOPS, 29.81 MiB/s [2024-10-12T23:44:11.176Z] 7717.34 IOPS, 30.15 MiB/s [2024-10-12T23:44:11.176Z] 7828.07 IOPS, 30.58 MiB/s [2024-10-12T23:44:11.176Z] 7930.06 IOPS, 30.98 MiB/s [2024-10-12T23:44:11.176Z] [2024-10-13 01:44:08.240806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:51064 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:33:25.598 [2024-10-13 01:44:08.240879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:25.598 [2024-10-13 01:44:08.240979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:51080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.598 [2024-10-13 01:44:08.241034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:25.598 [2024-10-13 01:44:08.241073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:51096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.598 [2024-10-13 01:44:08.241114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:25.598 [2024-10-13 01:44:08.241163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:51112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.598 [2024-10-13 01:44:08.241190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:25.598 [2024-10-13 01:44:08.241226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:51128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.598 [2024-10-13 01:44:08.241253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:25.598 [2024-10-13 01:44:08.241289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:51144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.598 [2024-10-13 01:44:08.241315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:25.598 [2024-10-13 01:44:08.241364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:51160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.598 [2024-10-13 01:44:08.241389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:25.598 [2024-10-13 01:44:08.241425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:51176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.598 [2024-10-13 01:44:08.241482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:25.598 [2024-10-13 01:44:08.241535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:51192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.598 [2024-10-13 01:44:08.241561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:25.598 [2024-10-13 01:44:08.241595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:51208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.598 [2024-10-13 01:44:08.241624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:25.598 [2024-10-13 01:44:08.241659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:99 nsid:1 lba:51224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.598 [2024-10-13 01:44:08.241688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:25.598 [2024-10-13 01:44:08.241725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:51240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.598 [2024-10-13 01:44:08.241753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:25.598 [2024-10-13 01:44:08.241796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:51256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.598 [2024-10-13 01:44:08.241835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:25.598 [2024-10-13 01:44:08.241875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:51272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.598 [2024-10-13 01:44:08.241908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:25.598 [2024-10-13 01:44:08.241952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:51288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.598 [2024-10-13 01:44:08.241993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.598 [2024-10-13 01:44:08.242028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:51304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.598 [2024-10-13 01:44:08.242057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:25.598 [2024-10-13 01:44:08.242092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:51320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.598 [2024-10-13 01:44:08.242118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:25.598 [2024-10-13 01:44:08.242168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:51336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.598 [2024-10-13 01:44:08.242194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:25.598 [2024-10-13 01:44:08.242244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:51352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.598 [2024-10-13 01:44:08.242271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:25.598 [2024-10-13 01:44:08.242306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:51368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.598 [2024-10-13 01:44:08.242333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:25.598 [2024-10-13 01:44:08.242368] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:51384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.598 [2024-10-13 01:44:08.242397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:25.599 [2024-10-13 01:44:08.242435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:51400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.599 [2024-10-13 01:44:08.242467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:25.599 [2024-10-13 01:44:08.242514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:51416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.599 [2024-10-13 01:44:08.242555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:25.599 [2024-10-13 01:44:08.242590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:51432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.599 [2024-10-13 01:44:08.242617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:25.599 [2024-10-13 01:44:08.242653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:51448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.599 [2024-10-13 01:44:08.242694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:25.599 [2024-10-13 01:44:08.245099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:51040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.599 [2024-10-13 01:44:08.245145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:25.599 [2024-10-13 01:44:08.245196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:51464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.599 [2024-10-13 01:44:08.245224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:25.599 [2024-10-13 01:44:08.245263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:51480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.599 [2024-10-13 01:44:08.245289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:25.599 [2024-10-13 01:44:08.245327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:51496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.599 [2024-10-13 01:44:08.245354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:25.599 [2024-10-13 01:44:08.245400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:51512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.599 [2024-10-13 01:44:08.245441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 
00:33:25.599 [2024-10-13 01:44:08.245509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:51528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.599 [2024-10-13 01:44:08.245538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:25.599 [2024-10-13 01:44:08.245575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:51544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.599 [2024-10-13 01:44:08.245602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:25.599 [2024-10-13 01:44:08.245638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:51560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.599 [2024-10-13 01:44:08.245665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:25.599 [2024-10-13 01:44:08.245701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:51576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.599 [2024-10-13 01:44:08.245729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:25.599 [2024-10-13 01:44:08.245764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:51592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.599 [2024-10-13 01:44:08.245807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:25.599 [2024-10-13 01:44:08.245840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:51608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.599 [2024-10-13 01:44:08.245865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:25.599 [2024-10-13 01:44:08.245899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:51624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.599 [2024-10-13 01:44:08.245925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:25.599 [2024-10-13 01:44:08.245958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:51640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.599 [2024-10-13 01:44:08.245992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:25.599 [2024-10-13 01:44:08.246025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:51656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.599 [2024-10-13 01:44:08.246055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:25.599 [2024-10-13 01:44:08.246091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:51672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.599 [2024-10-13 01:44:08.246115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:57 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:25.599 [2024-10-13 01:44:08.246150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:51688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.599 [2024-10-13 01:44:08.246175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:25.599 [2024-10-13 01:44:08.246210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:51704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.599 [2024-10-13 01:44:08.246235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:25.599 [2024-10-13 01:44:08.246269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:51720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.599 [2024-10-13 01:44:08.246303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:25.599 [2024-10-13 01:44:08.246337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:51736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.599 [2024-10-13 01:44:08.246361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:25.599 [2024-10-13 01:44:08.246404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:51752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.599 [2024-10-13 01:44:08.246429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:25.599 [2024-10-13 01:44:08.246491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:51768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.599 [2024-10-13 01:44:08.246519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:25.599 [2024-10-13 01:44:08.246555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:51784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.599 [2024-10-13 01:44:08.246581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.599 [2024-10-13 01:44:08.246617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:51800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.599 [2024-10-13 01:44:08.246644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:25.599 [2024-10-13 01:44:08.246679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:51816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.599 [2024-10-13 01:44:08.246705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:25.599 [2024-10-13 01:44:08.246740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:51832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.599 [2024-10-13 01:44:08.246766] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:25.599 [2024-10-13 01:44:08.246820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:51848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.599 [2024-10-13 01:44:08.246854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:25.599 [2024-10-13 01:44:08.246888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:51864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.599 [2024-10-13 01:44:08.246914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:25.599 [2024-10-13 01:44:08.246947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:51880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.599 [2024-10-13 01:44:08.246974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:25.599 [2024-10-13 01:44:08.247007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:51896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.599 [2024-10-13 01:44:08.247033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:25.600 [2024-10-13 01:44:08.247065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:51912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.600 [2024-10-13 01:44:08.247090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:25.600 [2024-10-13 01:44:08.247124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:51928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.600 [2024-10-13 01:44:08.247150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:25.600 [2024-10-13 01:44:08.247184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:51944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.600 [2024-10-13 01:44:08.247210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:25.600 [2024-10-13 01:44:08.247245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:51960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.600 [2024-10-13 01:44:08.247271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:25.600 [2024-10-13 01:44:08.247305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:51976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.600 [2024-10-13 01:44:08.247330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:25.600 [2024-10-13 01:44:08.247364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:51992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:25.600 [2024-10-13 01:44:08.247390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:33:25.600 [2024-10-13 01:44:08.247427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:52008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:25.600 [2024-10-13 01:44:08.247453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:33:25.600 [2024-10-13 01:44:08.247524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:52024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:25.600 [2024-10-13 01:44:08.247551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:33:25.600 [2024-10-13 01:44:08.247588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:52040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:25.600 [2024-10-13 01:44:08.247616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:33:25.600 [2024-10-13 01:44:08.247659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:52056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:25.600 [2024-10-13 01:44:08.247688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:33:25.600 8000.22 IOPS, 31.25 MiB/s [2024-10-12T23:44:11.178Z] 8019.30 IOPS, 31.33 MiB/s [2024-10-12T23:44:11.178Z] 8029.79 IOPS, 31.37 MiB/s [2024-10-12T23:44:11.178Z] Received shutdown signal, test time was about 34.282642 seconds
00:33:25.600
00:33:25.600 Latency(us)
00:33:25.600 [2024-10-12T23:44:11.178Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:25.600 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:33:25.600 Verification LBA range: start 0x0 length 0x4000
00:33:25.600 Nvme0n1 : 34.28 8030.95 31.37 0.00 0.00 15910.61 213.90 4026531.84
00:33:25.600 [2024-10-12T23:44:11.178Z] ===================================================================================================================
00:33:25.600 [2024-10-12T23:44:11.178Z] Total : 8030.95 31.37 0.00 0.00 15910.61 213.90 4026531.84
00:33:25.600 01:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:33:25.857 01:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:33:25.857 01:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:33:25.857 01:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:33:25.857 01:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # nvmfcleanup
00:33:25.857 01:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:33:25.857 01:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:33:25.857 01:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status --
nvmf/common.sh@124 -- # set +e 00:33:25.857 01:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:25.857 01:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:25.857 rmmod nvme_tcp 00:33:26.114 rmmod nvme_fabrics 00:33:26.114 rmmod nvme_keyring 00:33:26.114 01:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:26.114 01:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:33:26.114 01:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:33:26.114 01:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@515 -- # '[' -n 1729192 ']' 00:33:26.114 01:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # killprocess 1729192 00:33:26.114 01:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1729192 ']' 00:33:26.114 01:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1729192 00:33:26.114 01:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:33:26.114 01:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:26.114 01:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1729192 00:33:26.114 01:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:26.114 01:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:26.114 01:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1729192' 00:33:26.114 killing process with pid 1729192 00:33:26.114 01:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1729192 00:33:26.114 01:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1729192 00:33:26.373 01:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:26.373 01:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:26.373 01:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:26.373 01:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:33:26.373 01:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-save 00:33:26.373 01:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:26.373 01:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-restore 00:33:26.373 01:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:26.373 01:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:26.373 01:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:26.373 01:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:26.373 
01:44:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:28.273 01:44:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:28.273 00:33:28.273 real 0m43.204s 00:33:28.273 user 2m12.132s 00:33:28.273 sys 0m10.541s 00:33:28.273 01:44:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:28.273 01:44:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:28.273 ************************************ 00:33:28.273 END TEST nvmf_host_multipath_status 00:33:28.273 ************************************ 00:33:28.273 01:44:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:28.273 01:44:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:28.273 01:44:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:28.273 01:44:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:28.532 ************************************ 00:33:28.532 START TEST nvmf_discovery_remove_ifc 00:33:28.532 ************************************ 00:33:28.532 01:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:28.532 * Looking for test storage... 00:33:28.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:28.532 01:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:28.532 01:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:33:28.532 01:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:28.532 01:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:28.532 01:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:28.532 01:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:28.532 01:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:28.532 01:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:33:28.532 01:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:33:28.532 01:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:33:28.532 01:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:33:28.532 01:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:33:28.532 01:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:33:28.532 01:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:33:28.532 01:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:28.532 01:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:33:28.532 01:44:13 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:33:28.532 01:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:28.532 01:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:28.532 01:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:33:28.532 01:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:33:28.532 01:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:28.532 01:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:33:28.532 01:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:33:28.532 01:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:33:28.532 01:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:33:28.532 01:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:28.532 01:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:33:28.532 01:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:33:28.532 01:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:28.532 01:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:28.532 01:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:33:28.532 01:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:28.532 01:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:28.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:28.532 --rc genhtml_branch_coverage=1 00:33:28.532 --rc genhtml_function_coverage=1 00:33:28.532 --rc genhtml_legend=1 00:33:28.532 --rc geninfo_all_blocks=1 00:33:28.532 --rc geninfo_unexecuted_blocks=1 00:33:28.532 00:33:28.532 ' 00:33:28.532 01:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:28.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:28.532 --rc genhtml_branch_coverage=1 00:33:28.532 --rc genhtml_function_coverage=1 00:33:28.532 --rc genhtml_legend=1 00:33:28.532 --rc geninfo_all_blocks=1 00:33:28.532 --rc geninfo_unexecuted_blocks=1 00:33:28.532 00:33:28.532 ' 00:33:28.532 01:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:28.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:28.532 --rc genhtml_branch_coverage=1 00:33:28.532 --rc genhtml_function_coverage=1 00:33:28.532 --rc genhtml_legend=1 00:33:28.532 --rc geninfo_all_blocks=1 00:33:28.532 --rc geninfo_unexecuted_blocks=1 00:33:28.532 00:33:28.532 ' 00:33:28.532 01:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:28.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:28.532 --rc genhtml_branch_coverage=1 00:33:28.532 --rc genhtml_function_coverage=1 00:33:28.532 --rc genhtml_legend=1 
00:33:28.532 --rc geninfo_all_blocks=1 00:33:28.532 --rc geninfo_unexecuted_blocks=1 00:33:28.532 00:33:28.532 ' 00:33:28.532 01:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:28.532 01:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:33:28.532 01:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:28.532 01:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:28.532 01:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:28.532 01:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:28.532 01:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:28.532 01:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:28.532 01:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:28.532 01:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:28.532 01:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:28.532 01:44:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:28.532 01:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:28.532 01:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:28.532 01:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:28.532 01:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:28.532 01:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:28.532 01:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:28.532 01:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:28.532 01:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:33:28.532 01:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:28.532 01:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:28.532 01:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:28.532 01:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.532 01:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.533 01:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.533 01:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:33:28.533 01:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:28.533 01:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:33:28.533 01:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:28.533 01:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:28.533 01:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:28.533 01:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:28.533 01:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:28.533 01:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:33:28.533 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:28.533 01:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:28.533 01:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:28.533 01:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:28.533 01:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:33:28.533 01:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:33:28.533 01:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:33:28.533 01:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:33:28.533 01:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:33:28.533 01:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:33:28.533 01:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:33:28.533 01:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:28.533 01:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:28.533 01:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:28.533 01:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:28.533 01:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:28.533 01:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:28.533 01:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:28.533 01:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:28.533 01:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:28.533 01:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:28.533 01:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:33:28.533 01:44:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:30.434 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:30.434 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:33:30.434 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:30.434 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:30.434 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:30.434 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:30.434 01:44:15 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:30.434 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:33:30.434 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:30.434 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:30.435 Found 
0000:0a:00.0 (0x8086 - 0x159b) 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:30.435 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:30.435 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:30.435 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # is_hw=yes 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:30.435 01:44:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:30.435 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:30.435 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:30.694 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:30.694 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:30.694 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:30.694 01:44:16 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:30.694 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:30.694 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:30.694 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:30.694 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:30.694 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:30.694 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.284 ms 00:33:30.694 00:33:30.694 --- 10.0.0.2 ping statistics --- 00:33:30.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:30.694 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:33:30.694 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:30.694 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:30.694 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:33:30.694 00:33:30.694 --- 10.0.0.1 ping statistics --- 00:33:30.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:30.694 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:33:30.694 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:30.694 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # return 0 00:33:30.694 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:30.694 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:30.694 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:30.694 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:30.694 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:30.694 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:30.694 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:30.694 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:33:30.694 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:30.694 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:30.694 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:30.694 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # nvmfpid=1735825 00:33:30.694 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:30.694 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # waitforlisten 1735825 
00:33:30.694 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1735825 ']' 00:33:30.694 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:30.694 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:30.694 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:30.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:30.694 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:30.694 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:30.694 [2024-10-13 01:44:16.199444] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:33:30.694 [2024-10-13 01:44:16.199576] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:30.694 [2024-10-13 01:44:16.269819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:30.952 [2024-10-13 01:44:16.318251] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:30.953 [2024-10-13 01:44:16.318324] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:30.953 [2024-10-13 01:44:16.318349] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:30.953 [2024-10-13 01:44:16.318363] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:30.953 [2024-10-13 01:44:16.318374] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:30.953 [2024-10-13 01:44:16.319040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:30.953 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:30.953 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:33:30.953 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:30.953 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:30.953 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:30.953 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:30.953 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:33:30.953 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.953 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:30.953 [2024-10-13 01:44:16.479869] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:30.953 [2024-10-13 01:44:16.488106] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:30.953 null0 00:33:30.953 [2024-10-13 01:44:16.519978] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:31.211 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.211 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1735970 00:33:31.211 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1735970 /tmp/host.sock 00:33:31.211 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1735970 ']' 00:33:31.211 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:33:31.211 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:33:31.211 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:31.211 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:31.211 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:31.211 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:31.211 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:31.211 [2024-10-13 01:44:16.588430] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
00:33:31.211 [2024-10-13 01:44:16.588540] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1735970 ] 00:33:31.211 [2024-10-13 01:44:16.645424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:31.211 [2024-10-13 01:44:16.692259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:31.469 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:31.469 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:33:31.469 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:31.469 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:33:31.469 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.469 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:31.469 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.469 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:33:31.469 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.469 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:31.469 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.469 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:33:31.469 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.469 01:44:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:32.841 [2024-10-13 01:44:17.998614] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:32.841 [2024-10-13 01:44:17.998658] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:32.841 [2024-10-13 01:44:17.998684] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:32.842 [2024-10-13 01:44:18.085966] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:32.842 [2024-10-13 01:44:18.270060] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:32.842 [2024-10-13 01:44:18.270141] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:32.842 [2024-10-13 01:44:18.270186] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:32.842 [2024-10-13 01:44:18.270219] bdev_nvme.c:6972:discovery_attach_controller_done: 
*INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:32.842 [2024-10-13 01:44:18.270257] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:32.842 01:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.842 01:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:33:32.842 01:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:32.842 01:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:32.842 01:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:32.842 01:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:32.842 01:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:32.842 01:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:32.842 01:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:32.842 [2024-10-13 01:44:18.276344] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x8b6c90 was disconnected and freed. delete nvme_qpair. 00:33:32.842 01:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:32.842 01:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:33:32.842 01:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:33:32.842 01:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:33:33.099 01:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:33:33.099 01:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:33.099 01:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:33.099 01:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:33.099 01:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.099 01:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:33.099 01:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:33.099 01:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:33.099 01:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.099 01:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:33.099 01:44:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:34.032 01:44:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:34.032 
01:44:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:34.032 01:44:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.032 01:44:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:34.032 01:44:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:34.032 01:44:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:34.032 01:44:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:34.032 01:44:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:34.032 01:44:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:34.032 01:44:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:34.965 01:44:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:34.965 01:44:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:34.965 01:44:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:34.965 01:44:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:34.965 01:44:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:34.965 01:44:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:34.965 01:44:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:34.965 01:44:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:35.222 01:44:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:35.222 01:44:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:36.152 01:44:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:36.152 01:44:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:36.152 01:44:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:36.152 01:44:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:36.152 01:44:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:36.152 01:44:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:36.152 01:44:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:36.152 01:44:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:36.152 01:44:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:36.152 01:44:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:37.085 01:44:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:37.085 01:44:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:37.085 01:44:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:37.085 01:44:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:37.085 01:44:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:37.085 01:44:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:37.085 01:44:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:37.085 01:44:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:37.085 01:44:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:37.085 01:44:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:38.457 01:44:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:38.457 01:44:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:38.457 01:44:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:38.457 01:44:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:38.457 01:44:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:38.457 01:44:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:38.457 01:44:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:38.457 01:44:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:38.457 01:44:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:38.457 01:44:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:38.457 [2024-10-13 01:44:23.711463] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:33:38.457 [2024-10-13 01:44:23.711564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:38.457 [2024-10-13 01:44:23.711587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:38.457 [2024-10-13 01:44:23.711606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:38.457 [2024-10-13 01:44:23.711620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:38.457 [2024-10-13 01:44:23.711633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:38.457 [2024-10-13 01:44:23.711646] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:38.457 [2024-10-13 01:44:23.711659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:38.457 [2024-10-13 01:44:23.711672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:38.457 [2024-10-13 01:44:23.711686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:38.457 [2024-10-13 01:44:23.711698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:38.457 [2024-10-13 01:44:23.711720] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8934c0 is same with the state(6) to be set 00:33:38.457 [2024-10-13 01:44:23.721486] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8934c0 (9): Bad file descriptor 00:33:38.457 [2024-10-13 01:44:23.731548] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:39.390 01:44:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:39.390 01:44:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:39.390 01:44:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:39.390 01:44:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:39.390 01:44:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:39.390 01:44:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:39.390 01:44:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:39.390 [2024-10-13 01:44:24.762506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:33:39.390 [2024-10-13 01:44:24.762555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8934c0 with addr=10.0.0.2, port=4420 00:33:39.390 [2024-10-13 01:44:24.762577] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8934c0 is same with the state(6) to be set 00:33:39.390 [2024-10-13 01:44:24.762615] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8934c0 (9): Bad file descriptor 00:33:39.390 [2024-10-13 01:44:24.762995] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:33:39.390 [2024-10-13 01:44:24.763045] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:39.390 [2024-10-13 01:44:24.763061] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:39.390 [2024-10-13 01:44:24.763078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:39.391 [2024-10-13 01:44:24.763104] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:39.391 [2024-10-13 01:44:24.763119] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:39.391 01:44:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:39.391 01:44:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:39.391 01:44:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:40.377 [2024-10-13 01:44:25.765624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:40.377 [2024-10-13 01:44:25.765681] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:40.377 [2024-10-13 01:44:25.765696] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:40.377 [2024-10-13 01:44:25.765711] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:33:40.377 [2024-10-13 01:44:25.765751] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:40.377 [2024-10-13 01:44:25.765800] bdev_nvme.c:6904:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:33:40.377 [2024-10-13 01:44:25.765866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:40.377 [2024-10-13 01:44:25.765889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:40.377 [2024-10-13 01:44:25.765920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:40.377 [2024-10-13 01:44:25.765933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:40.377 [2024-10-13 01:44:25.765947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:40.377 [2024-10-13 01:44:25.765959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:40.378 [2024-10-13 01:44:25.765972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:40.378 [2024-10-13 01:44:25.765984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:40.378 [2024-10-13 01:44:25.765997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:40.378 [2024-10-13 01:44:25.766009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:40.378 [2024-10-13 01:44:25.766023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:33:40.378 [2024-10-13 01:44:25.766227] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x882c00 (9): Bad file descriptor 00:33:40.378 [2024-10-13 01:44:25.767252] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:33:40.378 [2024-10-13 01:44:25.767280] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:33:40.378 01:44:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:40.378 01:44:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:40.378 01:44:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:40.378 01:44:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.378 01:44:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:40.378 01:44:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:40.378 01:44:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:40.378 01:44:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.378 01:44:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:33:40.378 01:44:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:40.378 01:44:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:40.378 01:44:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:33:40.378 01:44:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:40.378 01:44:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:40.378 01:44:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:40.378 01:44:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.378 01:44:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:40.378 01:44:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:40.378 01:44:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:40.378 01:44:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.378 01:44:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:40.378 01:44:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:41.750 01:44:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:41.750 01:44:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:41.750 01:44:26 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:41.750 01:44:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.750 01:44:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:41.750 01:44:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:41.750 01:44:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:41.750 01:44:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.750 01:44:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:41.750 01:44:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:42.316 [2024-10-13 01:44:27.823597] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:42.316 [2024-10-13 01:44:27.823636] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:42.316 [2024-10-13 01:44:27.823661] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:42.582 [2024-10-13 01:44:27.910955] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:33:42.582 01:44:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:42.582 01:44:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:42.582 01:44:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:42.582 01:44:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.582 01:44:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:42.582 01:44:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:42.582 01:44:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:42.582 01:44:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.582 [2024-10-13 01:44:28.016160] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:42.582 [2024-10-13 01:44:28.016217] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:42.582 [2024-10-13 01:44:28.016255] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:42.582 [2024-10-13 01:44:28.016283] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:33:42.582 [2024-10-13 01:44:28.016298] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:42.582 01:44:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:42.582 01:44:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:42.582 [2024-10-13 01:44:28.022164] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x8925a0 was disconnected and freed. 
delete nvme_qpair. 00:33:43.515 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:43.515 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:43.515 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:43.515 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:43.515 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:43.515 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:43.515 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:43.516 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:43.516 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:33:43.516 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:33:43.516 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1735970 00:33:43.516 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1735970 ']' 00:33:43.516 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1735970 00:33:43.516 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:33:43.516 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:43.516 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1735970 00:33:43.773 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:43.773 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:43.773 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1735970' 00:33:43.773 killing process with pid 1735970 00:33:43.774 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1735970 00:33:43.774 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1735970 00:33:43.774 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:33:43.774 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:43.774 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:33:43.774 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:43.774 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:33:43.774 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:43.774 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:43.774 rmmod nvme_tcp 00:33:43.774 rmmod nvme_fabrics 00:33:43.774 rmmod nvme_keyring 
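The teardown above first stops the host app with the common killprocess helper (probe the PID with kill -0, check the process name, kill it, then wait for it), then runs nvmftestfini, which syncs, unloads the nvme-tcp/nvme-fabrics modules (the rmmod lines above), kills the nvmf target process, and strips the test's SPDK_NVMF iptables rules. A condensed sketch of that cleanup, reconstructed from the xtrace (the helpers live in test/common/autotest_common.sh and test/nvmf/common.sh; details such as the exact iptables restore are assumptions):

  killprocess() {
      # Stop a previously started SPDK app and reap it.
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 0   # nothing to do if it already exited
      kill "$pid" && wait "$pid"
  }

  # nvmf-side cleanup, in the order echoed above:
  sync
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  killprocess "$nvmfpid"                                   # the target app (reactor_1 in this run)
  iptables-save | grep -v SPDK_NVMF | iptables-restore     # drop the test's ACCEPT rules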
00:33:43.774 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:43.774 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:33:43.774 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:33:43.774 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@515 -- # '[' -n 1735825 ']' 00:33:43.774 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # killprocess 1735825 00:33:43.774 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1735825 ']' 00:33:43.774 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1735825 00:33:43.774 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:33:43.774 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:43.774 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1735825 00:33:44.033 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:33:44.033 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:33:44.033 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1735825' 00:33:44.033 killing process with pid 1735825 00:33:44.033 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1735825 00:33:44.033 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1735825 00:33:44.033 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:44.033 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:44.033 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:44.033 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:33:44.033 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-save 00:33:44.033 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-restore 00:33:44.033 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:44.033 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:44.033 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:44.033 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:44.033 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:44.033 01:44:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:46.567 01:44:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:46.567 00:33:46.567 real 0m17.741s 00:33:46.567 user 0m25.917s 00:33:46.567 sys 0m2.925s 00:33:46.567 01:44:31 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:46.568 ************************************ 00:33:46.568 END TEST nvmf_discovery_remove_ifc 00:33:46.568 ************************************ 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:46.568 ************************************ 00:33:46.568 START TEST nvmf_identify_kernel_target 00:33:46.568 ************************************ 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:46.568 * Looking for test storage... 00:33:46.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:46.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:46.568 --rc genhtml_branch_coverage=1 00:33:46.568 --rc genhtml_function_coverage=1 00:33:46.568 --rc genhtml_legend=1 00:33:46.568 --rc geninfo_all_blocks=1 00:33:46.568 --rc geninfo_unexecuted_blocks=1 00:33:46.568 00:33:46.568 ' 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:46.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:46.568 --rc genhtml_branch_coverage=1 00:33:46.568 --rc genhtml_function_coverage=1 00:33:46.568 --rc genhtml_legend=1 00:33:46.568 --rc geninfo_all_blocks=1 00:33:46.568 --rc geninfo_unexecuted_blocks=1 00:33:46.568 00:33:46.568 ' 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:46.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:46.568 --rc genhtml_branch_coverage=1 00:33:46.568 --rc genhtml_function_coverage=1 00:33:46.568 --rc genhtml_legend=1 00:33:46.568 --rc geninfo_all_blocks=1 00:33:46.568 --rc geninfo_unexecuted_blocks=1 00:33:46.568 00:33:46.568 ' 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:46.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:46.568 --rc genhtml_branch_coverage=1 00:33:46.568 --rc genhtml_function_coverage=1 00:33:46.568 --rc genhtml_legend=1 00:33:46.568 --rc geninfo_all_blocks=1 00:33:46.568 --rc geninfo_unexecuted_blocks=1 00:33:46.568 00:33:46.568 ' 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:46.568 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:46.569 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:46.569 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:33:46.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:46.569 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:46.569 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:46.569 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:46.569 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:33:46.569 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:46.569 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:46.569 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:46.569 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:46.569 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:46.569 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:46.569 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:46.569 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:46.569 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:46.569 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:46.569 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:33:46.569 01:44:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:48.470 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:48.470 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:33:48.470 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:48.470 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:48.470 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:48.470 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:48.470 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:48.470 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:33:48.470 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:48.470 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:33:48.470 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:33:48.470 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:33:48.470 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:33:48.470 01:44:33 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:33:48.470 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:33:48.470 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:48.470 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:48.470 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:48.470 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:48.470 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:48.470 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:48.471 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:48.471 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:48.471 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:48.471 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # is_hw=yes 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:48.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:48.471 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:33:48.471 00:33:48.471 --- 10.0.0.2 ping statistics --- 00:33:48.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:48.471 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:48.471 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:48.471 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:33:48.471 00:33:48.471 --- 10.0.0.1 ping statistics --- 00:33:48.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:48.471 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # return 0 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@767 -- # local ip 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:48.471 01:44:33 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:48.471 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:48.472 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:33:48.472 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:48.472 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:48.472 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:48.472 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # local block nvme 00:33:48.472 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:33:48.472 01:44:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # modprobe nvmet 00:33:48.472 01:44:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:48.472 01:44:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:49.848 Waiting for block devices as requested 00:33:49.848 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:33:49.848 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:49.848 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:50.106 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:50.106 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:50.106 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:50.106 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:50.106 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:50.364 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:50.364 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:33:50.364 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:33:50.364 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:33:50.622 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:33:50.622 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:33:50.622 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:33:50.622 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:33:50.881 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:33:50.881 01:44:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:33:50.881 01:44:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:50.881 01:44:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:33:50.881 01:44:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:33:50.881 01:44:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:50.881 01:44:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 
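The configure_kernel_target flow that starts above and continues in the following trace exports the local NVMe namespace through the Linux kernel nvmet target over configfs: it creates a subsystem, a namespace backed by /dev/nvme0n1, and a TCP port on 10.0.0.1:4420, then links the subsystem into the port. The mkdir/echo/ln commands echoed below condense to roughly this (xtrace does not capture the redirect targets, so the standard nvmet configfs attribute names used here are assumptions):

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  ns=$subsys/namespaces/1
  port=$nvmet/ports/1

  modprobe nvmet
  mkdir "$subsys" "$ns" "$port"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # assumed destination of the first echo
  echo 1            > "$subsys/attr_allow_any_host"
  echo /dev/nvme0n1 > "$ns/device_path"
  echo 1            > "$ns/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"

With the port linked, the nvme discover call further down returns two discovery log entries (the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn), which is what the spdk_nvme_identify output that closes this test then confirms.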
00:33:50.881 01:44:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:33:50.881 01:44:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:33:50.881 01:44:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:50.881 No valid GPT data, bailing 00:33:50.881 01:44:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:50.881 01:44:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:33:50.881 01:44:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:33:50.881 01:44:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:33:50.881 01:44:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:33:50.881 01:44:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:50.881 01:44:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:50.881 01:44:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:50.881 01:44:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:50.881 01:44:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:33:50.881 01:44:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:33:50.881 01:44:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:33:50.881 01:44:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:33:50.881 01:44:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo tcp 00:33:50.881 01:44:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 4420 00:33:50.881 01:44:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo ipv4 00:33:50.881 01:44:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:50.881 01:44:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:33:51.141 00:33:51.141 Discovery Log Number of Records 2, Generation counter 2 00:33:51.141 =====Discovery Log Entry 0====== 00:33:51.141 trtype: tcp 00:33:51.141 adrfam: ipv4 00:33:51.141 subtype: current discovery subsystem 00:33:51.141 treq: not specified, sq flow control disable supported 00:33:51.141 portid: 1 00:33:51.141 trsvcid: 4420 00:33:51.141 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:51.141 traddr: 10.0.0.1 00:33:51.141 eflags: none 00:33:51.141 sectype: none 00:33:51.141 =====Discovery Log Entry 1====== 00:33:51.141 trtype: tcp 00:33:51.141 adrfam: ipv4 00:33:51.141 subtype: nvme subsystem 00:33:51.141 treq: not specified, sq flow control disable 
supported 00:33:51.141 portid: 1 00:33:51.141 trsvcid: 4420 00:33:51.141 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:51.141 traddr: 10.0.0.1 00:33:51.141 eflags: none 00:33:51.141 sectype: none 00:33:51.141 01:44:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:33:51.141 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:33:51.141 ===================================================== 00:33:51.141 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:33:51.141 ===================================================== 00:33:51.141 Controller Capabilities/Features 00:33:51.141 ================================ 00:33:51.141 Vendor ID: 0000 00:33:51.141 Subsystem Vendor ID: 0000 00:33:51.141 Serial Number: fcf7d2852b5256a097e8 00:33:51.141 Model Number: Linux 00:33:51.141 Firmware Version: 6.8.9-20 00:33:51.141 Recommended Arb Burst: 0 00:33:51.141 IEEE OUI Identifier: 00 00 00 00:33:51.141 Multi-path I/O 00:33:51.141 May have multiple subsystem ports: No 00:33:51.141 May have multiple controllers: No 00:33:51.141 Associated with SR-IOV VF: No 00:33:51.141 Max Data Transfer Size: Unlimited 00:33:51.141 Max Number of Namespaces: 0 00:33:51.141 Max Number of I/O Queues: 1024 00:33:51.141 NVMe Specification Version (VS): 1.3 00:33:51.141 NVMe Specification Version (Identify): 1.3 00:33:51.141 Maximum Queue Entries: 1024 00:33:51.141 Contiguous Queues Required: No 00:33:51.141 Arbitration Mechanisms Supported 00:33:51.141 Weighted Round Robin: Not Supported 00:33:51.141 Vendor Specific: Not Supported 00:33:51.141 Reset Timeout: 7500 ms 00:33:51.141 Doorbell Stride: 4 bytes 00:33:51.141 NVM Subsystem Reset: Not Supported 00:33:51.141 Command Sets Supported 00:33:51.141 NVM Command Set: Supported 00:33:51.141 Boot Partition: Not Supported 00:33:51.141 Memory Page Size Minimum: 4096 bytes 00:33:51.141 Memory Page Size Maximum: 4096 bytes 00:33:51.141 Persistent Memory Region: Not Supported 00:33:51.141 Optional Asynchronous Events Supported 00:33:51.141 Namespace Attribute Notices: Not Supported 00:33:51.141 Firmware Activation Notices: Not Supported 00:33:51.141 ANA Change Notices: Not Supported 00:33:51.141 PLE Aggregate Log Change Notices: Not Supported 00:33:51.141 LBA Status Info Alert Notices: Not Supported 00:33:51.141 EGE Aggregate Log Change Notices: Not Supported 00:33:51.141 Normal NVM Subsystem Shutdown event: Not Supported 00:33:51.141 Zone Descriptor Change Notices: Not Supported 00:33:51.141 Discovery Log Change Notices: Supported 00:33:51.141 Controller Attributes 00:33:51.141 128-bit Host Identifier: Not Supported 00:33:51.141 Non-Operational Permissive Mode: Not Supported 00:33:51.141 NVM Sets: Not Supported 00:33:51.141 Read Recovery Levels: Not Supported 00:33:51.141 Endurance Groups: Not Supported 00:33:51.141 Predictable Latency Mode: Not Supported 00:33:51.141 Traffic Based Keep ALive: Not Supported 00:33:51.141 Namespace Granularity: Not Supported 00:33:51.141 SQ Associations: Not Supported 00:33:51.141 UUID List: Not Supported 00:33:51.141 Multi-Domain Subsystem: Not Supported 00:33:51.142 Fixed Capacity Management: Not Supported 00:33:51.142 Variable Capacity Management: Not Supported 00:33:51.142 Delete Endurance Group: Not Supported 00:33:51.142 Delete NVM Set: Not Supported 00:33:51.142 Extended LBA Formats Supported: Not Supported 00:33:51.142 Flexible Data Placement 
Supported: Not Supported 00:33:51.142 00:33:51.142 Controller Memory Buffer Support 00:33:51.142 ================================ 00:33:51.142 Supported: No 00:33:51.142 00:33:51.142 Persistent Memory Region Support 00:33:51.142 ================================ 00:33:51.142 Supported: No 00:33:51.142 00:33:51.142 Admin Command Set Attributes 00:33:51.142 ============================ 00:33:51.142 Security Send/Receive: Not Supported 00:33:51.142 Format NVM: Not Supported 00:33:51.142 Firmware Activate/Download: Not Supported 00:33:51.142 Namespace Management: Not Supported 00:33:51.142 Device Self-Test: Not Supported 00:33:51.142 Directives: Not Supported 00:33:51.142 NVMe-MI: Not Supported 00:33:51.142 Virtualization Management: Not Supported 00:33:51.142 Doorbell Buffer Config: Not Supported 00:33:51.142 Get LBA Status Capability: Not Supported 00:33:51.142 Command & Feature Lockdown Capability: Not Supported 00:33:51.142 Abort Command Limit: 1 00:33:51.142 Async Event Request Limit: 1 00:33:51.142 Number of Firmware Slots: N/A 00:33:51.142 Firmware Slot 1 Read-Only: N/A 00:33:51.142 Firmware Activation Without Reset: N/A 00:33:51.142 Multiple Update Detection Support: N/A 00:33:51.142 Firmware Update Granularity: No Information Provided 00:33:51.142 Per-Namespace SMART Log: No 00:33:51.142 Asymmetric Namespace Access Log Page: Not Supported 00:33:51.142 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:33:51.142 Command Effects Log Page: Not Supported 00:33:51.142 Get Log Page Extended Data: Supported 00:33:51.142 Telemetry Log Pages: Not Supported 00:33:51.142 Persistent Event Log Pages: Not Supported 00:33:51.142 Supported Log Pages Log Page: May Support 00:33:51.142 Commands Supported & Effects Log Page: Not Supported 00:33:51.142 Feature Identifiers & Effects Log Page:May Support 00:33:51.142 NVMe-MI Commands & Effects Log Page: May Support 00:33:51.142 Data Area 4 for Telemetry Log: Not Supported 00:33:51.142 Error Log Page Entries Supported: 1 00:33:51.142 Keep Alive: Not Supported 00:33:51.142 00:33:51.142 NVM Command Set Attributes 00:33:51.142 ========================== 00:33:51.142 Submission Queue Entry Size 00:33:51.142 Max: 1 00:33:51.142 Min: 1 00:33:51.142 Completion Queue Entry Size 00:33:51.142 Max: 1 00:33:51.142 Min: 1 00:33:51.142 Number of Namespaces: 0 00:33:51.142 Compare Command: Not Supported 00:33:51.142 Write Uncorrectable Command: Not Supported 00:33:51.142 Dataset Management Command: Not Supported 00:33:51.142 Write Zeroes Command: Not Supported 00:33:51.142 Set Features Save Field: Not Supported 00:33:51.142 Reservations: Not Supported 00:33:51.142 Timestamp: Not Supported 00:33:51.142 Copy: Not Supported 00:33:51.142 Volatile Write Cache: Not Present 00:33:51.142 Atomic Write Unit (Normal): 1 00:33:51.142 Atomic Write Unit (PFail): 1 00:33:51.142 Atomic Compare & Write Unit: 1 00:33:51.142 Fused Compare & Write: Not Supported 00:33:51.142 Scatter-Gather List 00:33:51.142 SGL Command Set: Supported 00:33:51.142 SGL Keyed: Not Supported 00:33:51.142 SGL Bit Bucket Descriptor: Not Supported 00:33:51.142 SGL Metadata Pointer: Not Supported 00:33:51.142 Oversized SGL: Not Supported 00:33:51.142 SGL Metadata Address: Not Supported 00:33:51.142 SGL Offset: Supported 00:33:51.142 Transport SGL Data Block: Not Supported 00:33:51.142 Replay Protected Memory Block: Not Supported 00:33:51.142 00:33:51.142 Firmware Slot Information 00:33:51.142 ========================= 00:33:51.142 Active slot: 0 00:33:51.142 00:33:51.142 00:33:51.142 Error Log 00:33:51.142 
========= 00:33:51.142 00:33:51.142 Active Namespaces 00:33:51.142 ================= 00:33:51.142 Discovery Log Page 00:33:51.142 ================== 00:33:51.142 Generation Counter: 2 00:33:51.142 Number of Records: 2 00:33:51.142 Record Format: 0 00:33:51.142 00:33:51.142 Discovery Log Entry 0 00:33:51.142 ---------------------- 00:33:51.142 Transport Type: 3 (TCP) 00:33:51.142 Address Family: 1 (IPv4) 00:33:51.142 Subsystem Type: 3 (Current Discovery Subsystem) 00:33:51.142 Entry Flags: 00:33:51.142 Duplicate Returned Information: 0 00:33:51.142 Explicit Persistent Connection Support for Discovery: 0 00:33:51.142 Transport Requirements: 00:33:51.142 Secure Channel: Not Specified 00:33:51.142 Port ID: 1 (0x0001) 00:33:51.142 Controller ID: 65535 (0xffff) 00:33:51.142 Admin Max SQ Size: 32 00:33:51.142 Transport Service Identifier: 4420 00:33:51.142 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:33:51.142 Transport Address: 10.0.0.1 00:33:51.142 Discovery Log Entry 1 00:33:51.142 ---------------------- 00:33:51.142 Transport Type: 3 (TCP) 00:33:51.142 Address Family: 1 (IPv4) 00:33:51.142 Subsystem Type: 2 (NVM Subsystem) 00:33:51.142 Entry Flags: 00:33:51.142 Duplicate Returned Information: 0 00:33:51.142 Explicit Persistent Connection Support for Discovery: 0 00:33:51.142 Transport Requirements: 00:33:51.142 Secure Channel: Not Specified 00:33:51.142 Port ID: 1 (0x0001) 00:33:51.142 Controller ID: 65535 (0xffff) 00:33:51.142 Admin Max SQ Size: 32 00:33:51.142 Transport Service Identifier: 4420 00:33:51.142 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:33:51.142 Transport Address: 10.0.0.1 00:33:51.142 01:44:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:51.142 get_feature(0x01) failed 00:33:51.142 get_feature(0x02) failed 00:33:51.142 get_feature(0x04) failed 00:33:51.142 ===================================================== 00:33:51.142 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:51.142 ===================================================== 00:33:51.142 Controller Capabilities/Features 00:33:51.142 ================================ 00:33:51.142 Vendor ID: 0000 00:33:51.142 Subsystem Vendor ID: 0000 00:33:51.142 Serial Number: 5d0b65ab9d6ed6853358 00:33:51.142 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:33:51.142 Firmware Version: 6.8.9-20 00:33:51.142 Recommended Arb Burst: 6 00:33:51.142 IEEE OUI Identifier: 00 00 00 00:33:51.142 Multi-path I/O 00:33:51.142 May have multiple subsystem ports: Yes 00:33:51.142 May have multiple controllers: Yes 00:33:51.142 Associated with SR-IOV VF: No 00:33:51.142 Max Data Transfer Size: Unlimited 00:33:51.142 Max Number of Namespaces: 1024 00:33:51.142 Max Number of I/O Queues: 128 00:33:51.142 NVMe Specification Version (VS): 1.3 00:33:51.142 NVMe Specification Version (Identify): 1.3 00:33:51.142 Maximum Queue Entries: 1024 00:33:51.142 Contiguous Queues Required: No 00:33:51.142 Arbitration Mechanisms Supported 00:33:51.142 Weighted Round Robin: Not Supported 00:33:51.142 Vendor Specific: Not Supported 00:33:51.142 Reset Timeout: 7500 ms 00:33:51.142 Doorbell Stride: 4 bytes 00:33:51.142 NVM Subsystem Reset: Not Supported 00:33:51.142 Command Sets Supported 00:33:51.142 NVM Command Set: Supported 00:33:51.142 Boot Partition: Not Supported 00:33:51.142 
Memory Page Size Minimum: 4096 bytes 00:33:51.142 Memory Page Size Maximum: 4096 bytes 00:33:51.142 Persistent Memory Region: Not Supported 00:33:51.142 Optional Asynchronous Events Supported 00:33:51.142 Namespace Attribute Notices: Supported 00:33:51.142 Firmware Activation Notices: Not Supported 00:33:51.142 ANA Change Notices: Supported 00:33:51.142 PLE Aggregate Log Change Notices: Not Supported 00:33:51.142 LBA Status Info Alert Notices: Not Supported 00:33:51.142 EGE Aggregate Log Change Notices: Not Supported 00:33:51.142 Normal NVM Subsystem Shutdown event: Not Supported 00:33:51.142 Zone Descriptor Change Notices: Not Supported 00:33:51.142 Discovery Log Change Notices: Not Supported 00:33:51.142 Controller Attributes 00:33:51.142 128-bit Host Identifier: Supported 00:33:51.142 Non-Operational Permissive Mode: Not Supported 00:33:51.142 NVM Sets: Not Supported 00:33:51.142 Read Recovery Levels: Not Supported 00:33:51.142 Endurance Groups: Not Supported 00:33:51.142 Predictable Latency Mode: Not Supported 00:33:51.142 Traffic Based Keep ALive: Supported 00:33:51.142 Namespace Granularity: Not Supported 00:33:51.142 SQ Associations: Not Supported 00:33:51.142 UUID List: Not Supported 00:33:51.142 Multi-Domain Subsystem: Not Supported 00:33:51.142 Fixed Capacity Management: Not Supported 00:33:51.142 Variable Capacity Management: Not Supported 00:33:51.142 Delete Endurance Group: Not Supported 00:33:51.142 Delete NVM Set: Not Supported 00:33:51.142 Extended LBA Formats Supported: Not Supported 00:33:51.142 Flexible Data Placement Supported: Not Supported 00:33:51.142 00:33:51.142 Controller Memory Buffer Support 00:33:51.142 ================================ 00:33:51.142 Supported: No 00:33:51.142 00:33:51.142 Persistent Memory Region Support 00:33:51.142 ================================ 00:33:51.142 Supported: No 00:33:51.142 00:33:51.142 Admin Command Set Attributes 00:33:51.142 ============================ 00:33:51.143 Security Send/Receive: Not Supported 00:33:51.143 Format NVM: Not Supported 00:33:51.143 Firmware Activate/Download: Not Supported 00:33:51.143 Namespace Management: Not Supported 00:33:51.143 Device Self-Test: Not Supported 00:33:51.143 Directives: Not Supported 00:33:51.143 NVMe-MI: Not Supported 00:33:51.143 Virtualization Management: Not Supported 00:33:51.143 Doorbell Buffer Config: Not Supported 00:33:51.143 Get LBA Status Capability: Not Supported 00:33:51.143 Command & Feature Lockdown Capability: Not Supported 00:33:51.143 Abort Command Limit: 4 00:33:51.143 Async Event Request Limit: 4 00:33:51.143 Number of Firmware Slots: N/A 00:33:51.143 Firmware Slot 1 Read-Only: N/A 00:33:51.143 Firmware Activation Without Reset: N/A 00:33:51.143 Multiple Update Detection Support: N/A 00:33:51.143 Firmware Update Granularity: No Information Provided 00:33:51.143 Per-Namespace SMART Log: Yes 00:33:51.143 Asymmetric Namespace Access Log Page: Supported 00:33:51.143 ANA Transition Time : 10 sec 00:33:51.143 00:33:51.143 Asymmetric Namespace Access Capabilities 00:33:51.143 ANA Optimized State : Supported 00:33:51.143 ANA Non-Optimized State : Supported 00:33:51.143 ANA Inaccessible State : Supported 00:33:51.143 ANA Persistent Loss State : Supported 00:33:51.143 ANA Change State : Supported 00:33:51.143 ANAGRPID is not changed : No 00:33:51.143 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:33:51.143 00:33:51.143 ANA Group Identifier Maximum : 128 00:33:51.143 Number of ANA Group Identifiers : 128 00:33:51.143 Max Number of Allowed Namespaces : 1024 00:33:51.143 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:33:51.143 Command Effects Log Page: Supported 00:33:51.143 Get Log Page Extended Data: Supported 00:33:51.143 Telemetry Log Pages: Not Supported 00:33:51.143 Persistent Event Log Pages: Not Supported 00:33:51.143 Supported Log Pages Log Page: May Support 00:33:51.143 Commands Supported & Effects Log Page: Not Supported 00:33:51.143 Feature Identifiers & Effects Log Page:May Support 00:33:51.143 NVMe-MI Commands & Effects Log Page: May Support 00:33:51.143 Data Area 4 for Telemetry Log: Not Supported 00:33:51.143 Error Log Page Entries Supported: 128 00:33:51.143 Keep Alive: Supported 00:33:51.143 Keep Alive Granularity: 1000 ms 00:33:51.143 00:33:51.143 NVM Command Set Attributes 00:33:51.143 ========================== 00:33:51.143 Submission Queue Entry Size 00:33:51.143 Max: 64 00:33:51.143 Min: 64 00:33:51.143 Completion Queue Entry Size 00:33:51.143 Max: 16 00:33:51.143 Min: 16 00:33:51.143 Number of Namespaces: 1024 00:33:51.143 Compare Command: Not Supported 00:33:51.143 Write Uncorrectable Command: Not Supported 00:33:51.143 Dataset Management Command: Supported 00:33:51.143 Write Zeroes Command: Supported 00:33:51.143 Set Features Save Field: Not Supported 00:33:51.143 Reservations: Not Supported 00:33:51.143 Timestamp: Not Supported 00:33:51.143 Copy: Not Supported 00:33:51.143 Volatile Write Cache: Present 00:33:51.143 Atomic Write Unit (Normal): 1 00:33:51.143 Atomic Write Unit (PFail): 1 00:33:51.143 Atomic Compare & Write Unit: 1 00:33:51.143 Fused Compare & Write: Not Supported 00:33:51.143 Scatter-Gather List 00:33:51.143 SGL Command Set: Supported 00:33:51.143 SGL Keyed: Not Supported 00:33:51.143 SGL Bit Bucket Descriptor: Not Supported 00:33:51.143 SGL Metadata Pointer: Not Supported 00:33:51.143 Oversized SGL: Not Supported 00:33:51.143 SGL Metadata Address: Not Supported 00:33:51.143 SGL Offset: Supported 00:33:51.143 Transport SGL Data Block: Not Supported 00:33:51.143 Replay Protected Memory Block: Not Supported 00:33:51.143 00:33:51.143 Firmware Slot Information 00:33:51.143 ========================= 00:33:51.143 Active slot: 0 00:33:51.143 00:33:51.143 Asymmetric Namespace Access 00:33:51.143 =========================== 00:33:51.143 Change Count : 0 00:33:51.143 Number of ANA Group Descriptors : 1 00:33:51.143 ANA Group Descriptor : 0 00:33:51.143 ANA Group ID : 1 00:33:51.143 Number of NSID Values : 1 00:33:51.143 Change Count : 0 00:33:51.143 ANA State : 1 00:33:51.143 Namespace Identifier : 1 00:33:51.143 00:33:51.143 Commands Supported and Effects 00:33:51.143 ============================== 00:33:51.143 Admin Commands 00:33:51.143 -------------- 00:33:51.143 Get Log Page (02h): Supported 00:33:51.143 Identify (06h): Supported 00:33:51.143 Abort (08h): Supported 00:33:51.143 Set Features (09h): Supported 00:33:51.143 Get Features (0Ah): Supported 00:33:51.143 Asynchronous Event Request (0Ch): Supported 00:33:51.143 Keep Alive (18h): Supported 00:33:51.143 I/O Commands 00:33:51.143 ------------ 00:33:51.143 Flush (00h): Supported 00:33:51.143 Write (01h): Supported LBA-Change 00:33:51.143 Read (02h): Supported 00:33:51.143 Write Zeroes (08h): Supported LBA-Change 00:33:51.143 Dataset Management (09h): Supported 00:33:51.143 00:33:51.143 Error Log 00:33:51.143 ========= 00:33:51.143 Entry: 0 00:33:51.143 Error Count: 0x3 00:33:51.143 Submission Queue Id: 0x0 00:33:51.143 Command Id: 0x5 00:33:51.143 Phase Bit: 0 00:33:51.143 Status Code: 0x2 00:33:51.143 Status Code Type: 0x0 00:33:51.143 Do Not Retry: 1 00:33:51.143 
Error Location: 0x28 00:33:51.143 LBA: 0x0 00:33:51.143 Namespace: 0x0 00:33:51.143 Vendor Log Page: 0x0 00:33:51.143 ----------- 00:33:51.143 Entry: 1 00:33:51.143 Error Count: 0x2 00:33:51.143 Submission Queue Id: 0x0 00:33:51.143 Command Id: 0x5 00:33:51.143 Phase Bit: 0 00:33:51.143 Status Code: 0x2 00:33:51.143 Status Code Type: 0x0 00:33:51.143 Do Not Retry: 1 00:33:51.143 Error Location: 0x28 00:33:51.143 LBA: 0x0 00:33:51.143 Namespace: 0x0 00:33:51.143 Vendor Log Page: 0x0 00:33:51.143 ----------- 00:33:51.143 Entry: 2 00:33:51.143 Error Count: 0x1 00:33:51.143 Submission Queue Id: 0x0 00:33:51.143 Command Id: 0x4 00:33:51.143 Phase Bit: 0 00:33:51.143 Status Code: 0x2 00:33:51.143 Status Code Type: 0x0 00:33:51.143 Do Not Retry: 1 00:33:51.143 Error Location: 0x28 00:33:51.143 LBA: 0x0 00:33:51.143 Namespace: 0x0 00:33:51.143 Vendor Log Page: 0x0 00:33:51.143 00:33:51.143 Number of Queues 00:33:51.143 ================ 00:33:51.143 Number of I/O Submission Queues: 128 00:33:51.143 Number of I/O Completion Queues: 128 00:33:51.143 00:33:51.143 ZNS Specific Controller Data 00:33:51.143 ============================ 00:33:51.143 Zone Append Size Limit: 0 00:33:51.143 00:33:51.143 00:33:51.143 Active Namespaces 00:33:51.143 ================= 00:33:51.143 get_feature(0x05) failed 00:33:51.143 Namespace ID:1 00:33:51.143 Command Set Identifier: NVM (00h) 00:33:51.143 Deallocate: Supported 00:33:51.143 Deallocated/Unwritten Error: Not Supported 00:33:51.143 Deallocated Read Value: Unknown 00:33:51.143 Deallocate in Write Zeroes: Not Supported 00:33:51.143 Deallocated Guard Field: 0xFFFF 00:33:51.143 Flush: Supported 00:33:51.143 Reservation: Not Supported 00:33:51.143 Namespace Sharing Capabilities: Multiple Controllers 00:33:51.143 Size (in LBAs): 1953525168 (931GiB) 00:33:51.143 Capacity (in LBAs): 1953525168 (931GiB) 00:33:51.143 Utilization (in LBAs): 1953525168 (931GiB) 00:33:51.143 UUID: c2fdba95-bd28-46e2-81dc-8a3bd5d4ec91 00:33:51.143 Thin Provisioning: Not Supported 00:33:51.143 Per-NS Atomic Units: Yes 00:33:51.143 Atomic Boundary Size (Normal): 0 00:33:51.143 Atomic Boundary Size (PFail): 0 00:33:51.143 Atomic Boundary Offset: 0 00:33:51.143 NGUID/EUI64 Never Reused: No 00:33:51.143 ANA group ID: 1 00:33:51.143 Namespace Write Protected: No 00:33:51.143 Number of LBA Formats: 1 00:33:51.143 Current LBA Format: LBA Format #00 00:33:51.143 LBA Format #00: Data Size: 512 Metadata Size: 0 00:33:51.143 00:33:51.143 01:44:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:33:51.143 01:44:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:51.143 01:44:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:33:51.143 01:44:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:51.143 01:44:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:33:51.143 01:44:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:51.143 01:44:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:51.143 rmmod nvme_tcp 00:33:51.143 rmmod nvme_fabrics 00:33:51.143 01:44:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:51.143 01:44:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:33:51.143 01:44:36 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:33:51.143 01:44:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:33:51.143 01:44:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:51.143 01:44:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:51.143 01:44:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:51.143 01:44:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:33:51.143 01:44:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-save 00:33:51.143 01:44:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:51.144 01:44:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-restore 00:33:51.402 01:44:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:51.402 01:44:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:51.402 01:44:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:51.402 01:44:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:51.402 01:44:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:53.300 01:44:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:53.301 01:44:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:33:53.301 01:44:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:53.301 01:44:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # echo 0 00:33:53.301 01:44:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:53.301 01:44:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:53.301 01:44:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:53.301 01:44:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:53.301 01:44:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:33:53.301 01:44:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:33:53.301 01:44:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:54.675 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:54.675 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:54.675 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:54.675 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:54.675 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:54.675 0000:00:04.2 
(8086 0e22): ioatdma -> vfio-pci 00:33:54.675 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:54.675 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:54.675 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:54.675 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:54.675 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:54.675 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:54.675 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:54.675 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:54.675 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:54.675 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:55.610 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:33:55.610 00:33:55.610 real 0m9.471s 00:33:55.610 user 0m2.049s 00:33:55.610 sys 0m3.516s 00:33:55.610 01:44:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:55.611 01:44:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:55.611 ************************************ 00:33:55.611 END TEST nvmf_identify_kernel_target 00:33:55.611 ************************************ 00:33:55.611 01:44:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:33:55.611 01:44:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:55.611 01:44:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:55.611 01:44:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:55.611 ************************************ 00:33:55.611 START TEST nvmf_auth_host 00:33:55.611 ************************************ 00:33:55.611 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:33:55.870 * Looking for test storage... 
00:33:55.870 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:55.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:55.870 --rc genhtml_branch_coverage=1 00:33:55.870 --rc genhtml_function_coverage=1 00:33:55.870 --rc genhtml_legend=1 00:33:55.870 --rc geninfo_all_blocks=1 00:33:55.870 --rc geninfo_unexecuted_blocks=1 00:33:55.870 00:33:55.870 ' 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:55.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:55.870 --rc genhtml_branch_coverage=1 00:33:55.870 --rc genhtml_function_coverage=1 00:33:55.870 --rc genhtml_legend=1 00:33:55.870 --rc geninfo_all_blocks=1 00:33:55.870 --rc geninfo_unexecuted_blocks=1 00:33:55.870 00:33:55.870 ' 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:55.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:55.870 --rc genhtml_branch_coverage=1 00:33:55.870 --rc genhtml_function_coverage=1 00:33:55.870 --rc genhtml_legend=1 00:33:55.870 --rc geninfo_all_blocks=1 00:33:55.870 --rc geninfo_unexecuted_blocks=1 00:33:55.870 00:33:55.870 ' 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:55.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:55.870 --rc genhtml_branch_coverage=1 00:33:55.870 --rc genhtml_function_coverage=1 00:33:55.870 --rc genhtml_legend=1 00:33:55.870 --rc geninfo_all_blocks=1 00:33:55.870 --rc geninfo_unexecuted_blocks=1 00:33:55.870 00:33:55.870 ' 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:55.870 01:44:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:55.870 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:55.871 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:55.871 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:55.871 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:55.871 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:33:55.871 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:55.871 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:33:55.871 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:55.871 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:55.871 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:55.871 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:55.871 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:55.871 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:55.871 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:55.871 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:55.871 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:55.871 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:55.871 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:33:55.871 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:33:55.871 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:33:55.871 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:33:55.871 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:55.871 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:55.871 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:33:55.871 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:33:55.871 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:33:55.871 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:55.871 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:55.871 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:55.871 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:55.871 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:55.871 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:55.871 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:55.871 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:55.871 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:55.871 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:55.871 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:33:55.871 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.772 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:57.772 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:33:57.772 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:57.772 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:57.772 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:57.772 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:57.772 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:57.772 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:33:57.772 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:57.772 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:33:57.772 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:33:57.772 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:33:57.772 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:33:57.772 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:33:57.772 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:33:57.772 01:44:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:57.772 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:57.772 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:57.772 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:57.773 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:57.773 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:57.773 
01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:57.773 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:57.773 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # is_hw=yes 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:57.773 01:44:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:57.773 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:58.031 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:58.031 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:58.031 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:58.031 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:58.031 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:58.031 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:33:58.031 00:33:58.031 --- 10.0.0.2 ping statistics --- 00:33:58.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:58.031 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:33:58.031 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:58.031 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:58.031 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:33:58.031 00:33:58.031 --- 10.0.0.1 ping statistics --- 00:33:58.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:58.031 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:33:58.031 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:58.031 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # return 0 00:33:58.031 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:58.031 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:58.031 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:58.031 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:58.031 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:58.031 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:58.031 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:58.031 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:33:58.031 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:58.031 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:58.031 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.031 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # nvmfpid=1743062 00:33:58.031 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:33:58.031 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # waitforlisten 1743062 00:33:58.031 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1743062 ']' 00:33:58.032 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:58.032 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:58.032 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:58.032 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:58.032 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.290 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:58.290 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:33:58.290 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:58.290 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:58.290 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.290 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:58.290 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:33:58.290 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:33:58.290 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:33:58.290 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:58.290 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:33:58.290 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:33:58.290 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:33:58.290 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:58.290 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=6e4102296ecc70e1328743aafb90b41b 00:33:58.290 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:33:58.290 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.433 00:33:58.290 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 6e4102296ecc70e1328743aafb90b41b 0 00:33:58.290 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 6e4102296ecc70e1328743aafb90b41b 0 00:33:58.290 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:33:58.290 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:33:58.290 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=6e4102296ecc70e1328743aafb90b41b 00:33:58.290 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:33:58.290 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:33:58.290 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.433 00:33:58.290 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.433 00:33:58.290 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.433 00:33:58.290 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:33:58.290 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:33:58.290 01:44:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:58.290 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:33:58.290 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:33:58.290 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:33:58.290 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:33:58.290 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=abc678afa3ef1cbcb6e31c0f2dc9ac586503096ba1839bee9e8da57a7c601ee2 00:33:58.290 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:33:58.290 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.Ysa 00:33:58.290 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key abc678afa3ef1cbcb6e31c0f2dc9ac586503096ba1839bee9e8da57a7c601ee2 3 00:33:58.290 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 abc678afa3ef1cbcb6e31c0f2dc9ac586503096ba1839bee9e8da57a7c601ee2 3 00:33:58.290 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:33:58.290 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:33:58.290 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=abc678afa3ef1cbcb6e31c0f2dc9ac586503096ba1839bee9e8da57a7c601ee2 00:33:58.290 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:33:58.290 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:33:58.549 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.Ysa 00:33:58.549 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.Ysa 00:33:58.549 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Ysa 00:33:58.549 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:33:58.549 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:33:58.549 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:58.549 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:33:58.549 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:33:58.549 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:33:58.549 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:58.549 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=b5fb84ce5a0accc57742607c4b180eda10789e7031614e14 00:33:58.549 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:33:58.549 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.zZD 00:33:58.549 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key b5fb84ce5a0accc57742607c4b180eda10789e7031614e14 0 00:33:58.549 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 b5fb84ce5a0accc57742607c4b180eda10789e7031614e14 0 
00:33:58.549 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:33:58.549 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:33:58.549 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=b5fb84ce5a0accc57742607c4b180eda10789e7031614e14 00:33:58.549 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:33:58.549 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:33:58.549 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.zZD 00:33:58.549 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.zZD 00:33:58.549 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.zZD 00:33:58.549 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:33:58.549 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:33:58.549 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:58.549 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:33:58.549 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:33:58.549 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:33:58.549 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:58.549 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=e4095c0e4ae1034285c16dc6b49ad18bccd9ee11d08ca8bc 00:33:58.549 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:33:58.549 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.iSY 00:33:58.549 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key e4095c0e4ae1034285c16dc6b49ad18bccd9ee11d08ca8bc 2 00:33:58.549 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 e4095c0e4ae1034285c16dc6b49ad18bccd9ee11d08ca8bc 2 00:33:58.549 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:33:58.549 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:33:58.549 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=e4095c0e4ae1034285c16dc6b49ad18bccd9ee11d08ca8bc 00:33:58.549 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:33:58.549 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:33:58.549 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.iSY 00:33:58.549 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.iSY 00:33:58.549 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.iSY 00:33:58.549 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:33:58.549 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:33:58.549 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:58.549 01:44:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:33:58.549 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:33:58.549 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:33:58.549 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:58.550 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=5f48d64e991ea5e9e6bf89b5ed6dcde9 00:33:58.550 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:33:58.550 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.FIx 00:33:58.550 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 5f48d64e991ea5e9e6bf89b5ed6dcde9 1 00:33:58.550 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 5f48d64e991ea5e9e6bf89b5ed6dcde9 1 00:33:58.550 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:33:58.550 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:33:58.550 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=5f48d64e991ea5e9e6bf89b5ed6dcde9 00:33:58.550 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:33:58.550 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:33:58.550 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.FIx 00:33:58.550 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.FIx 00:33:58.550 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.FIx 00:33:58.550 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:33:58.550 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:33:58.550 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:58.550 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:33:58.550 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:33:58.550 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:33:58.550 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:58.550 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=c1f17fe53f3977df4fd0c10b0f9bbc4c 00:33:58.550 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:33:58.550 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.Vtv 00:33:58.550 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key c1f17fe53f3977df4fd0c10b0f9bbc4c 1 00:33:58.550 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 c1f17fe53f3977df4fd0c10b0f9bbc4c 1 00:33:58.550 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:33:58.550 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:33:58.550 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # 
key=c1f17fe53f3977df4fd0c10b0f9bbc4c 00:33:58.550 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:33:58.550 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:33:58.550 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.Vtv 00:33:58.550 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.Vtv 00:33:58.550 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Vtv 00:33:58.550 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:33:58.550 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:33:58.550 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:58.550 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:33:58.550 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:33:58.550 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:33:58.550 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:33:58.808 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=f1c70f185a8954a9db819a9039272af7bdee7157b267d82c 00:33:58.808 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:33:58.808 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.p8c 00:33:58.808 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key f1c70f185a8954a9db819a9039272af7bdee7157b267d82c 2 00:33:58.808 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 f1c70f185a8954a9db819a9039272af7bdee7157b267d82c 2 00:33:58.808 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:33:58.808 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:33:58.808 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=f1c70f185a8954a9db819a9039272af7bdee7157b267d82c 00:33:58.808 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:33:58.808 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:33:58.808 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.p8c 00:33:58.808 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.p8c 00:33:58.808 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.p8c 00:33:58.808 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:33:58.808 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:33:58.808 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:58.808 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:33:58.808 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:33:58.808 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:33:58.808 01:44:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:33:58.808 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=2ff1c93eeb8cbcc84d29ae3438f24d45 00:33:58.808 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:33:58.808 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.Y5b 00:33:58.808 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 2ff1c93eeb8cbcc84d29ae3438f24d45 0 00:33:58.808 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 2ff1c93eeb8cbcc84d29ae3438f24d45 0 00:33:58.808 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:33:58.808 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:33:58.808 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=2ff1c93eeb8cbcc84d29ae3438f24d45 00:33:58.808 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:33:58.808 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:33:58.808 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.Y5b 00:33:58.808 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.Y5b 00:33:58.808 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Y5b 00:33:58.808 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:33:58.808 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:33:58.808 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:33:58.808 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:33:58.808 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:33:58.808 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:33:58.808 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:33:58.808 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=1708276894f84b1c4af535c3b906d10ccbd5af1c92ca591ed777744fda52944f 00:33:58.808 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:33:58.808 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.vZx 00:33:58.808 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 1708276894f84b1c4af535c3b906d10ccbd5af1c92ca591ed777744fda52944f 3 00:33:58.808 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 1708276894f84b1c4af535c3b906d10ccbd5af1c92ca591ed777744fda52944f 3 00:33:58.808 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:33:58.809 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:33:58.809 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=1708276894f84b1c4af535c3b906d10ccbd5af1c92ca591ed777744fda52944f 00:33:58.809 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:33:58.809 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@731 -- # python - 00:33:58.809 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.vZx 00:33:58.809 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.vZx 00:33:58.809 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.vZx 00:33:58.809 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:33:58.809 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1743062 00:33:58.809 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1743062 ']' 00:33:58.809 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:58.809 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:58.809 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:58.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:58.809 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:58.809 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.433 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Ysa ]] 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Ysa 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.zZD 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.iSY ]] 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.iSY 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.FIx 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Vtv ]] 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Vtv 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.p8c 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Y5b ]] 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Y5b 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.vZx 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:33:59.067 01:44:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # local block nvme 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:33:59.067 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # modprobe nvmet 00:33:59.325 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:59.325 01:44:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:00.258 Waiting for block devices as requested 00:34:00.258 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:34:00.515 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:00.516 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:00.773 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:00.773 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:00.773 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:00.773 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:01.031 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:01.031 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:01.031 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:01.031 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:01.289 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:01.289 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:01.289 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:01.289 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:01.547 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:01.547 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:01.804 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:34:01.804 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:01.804 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:34:01.804 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:34:01.804 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:01.804 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:34:01.804 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:34:01.804 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:01.804 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:02.062 No valid GPT data, bailing 00:34:02.062 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:02.062 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:34:02.062 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:34:02.062 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:34:02.062 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:34:02.062 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:02.062 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:02.062 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:02.062 01:44:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:34:02.062 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:34:02.062 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:34:02.062 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:34:02.062 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:34:02.062 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo tcp 00:34:02.062 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 4420 00:34:02.062 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo ipv4 00:34:02.062 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:02.062 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:34:02.062 00:34:02.062 Discovery Log Number of Records 2, Generation counter 2 00:34:02.062 =====Discovery Log Entry 0====== 00:34:02.062 trtype: tcp 00:34:02.062 adrfam: ipv4 00:34:02.062 subtype: current discovery subsystem 00:34:02.062 treq: not specified, sq flow control disable supported 00:34:02.062 portid: 1 00:34:02.062 trsvcid: 4420 00:34:02.062 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:02.062 traddr: 10.0.0.1 00:34:02.062 eflags: none 00:34:02.062 sectype: none 00:34:02.062 =====Discovery Log Entry 1====== 00:34:02.062 trtype: tcp 00:34:02.062 adrfam: ipv4 00:34:02.062 subtype: nvme subsystem 00:34:02.062 treq: not specified, sq flow control disable supported 00:34:02.062 portid: 1 00:34:02.062 trsvcid: 4420 00:34:02.062 subnqn: nqn.2024-02.io.spdk:cnode0 00:34:02.063 traddr: 10.0.0.1 00:34:02.063 eflags: none 00:34:02.063 sectype: none 00:34:02.063 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:02.063 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:34:02.063 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:02.063 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:02.063 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:02.063 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:02.063 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:02.063 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:02.063 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjVmYjg0Y2U1YTBhY2NjNTc3NDI2MDdjNGIxODBlZGExMDc4OWU3MDMxNjE0ZTE0eUdqVw==: 00:34:02.063 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: 00:34:02.063 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:02.063 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:34:02.063 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjVmYjg0Y2U1YTBhY2NjNTc3NDI2MDdjNGIxODBlZGExMDc4OWU3MDMxNjE0ZTE0eUdqVw==: 00:34:02.063 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: ]] 00:34:02.063 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: 00:34:02.063 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:02.063 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:34:02.063 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:02.063 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:02.063 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:34:02.063 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:02.063 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:34:02.063 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:02.063 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:02.063 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:02.063 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:02.063 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.063 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.063 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.063 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:02.063 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:02.063 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:02.063 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:02.063 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:02.063 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:02.063 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:02.063 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:02.063 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:02.063 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:02.063 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:02.063 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:02.063 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.063 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.320 nvme0n1 00:34:02.320 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.320 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:02.320 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.320 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.320 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:02.320 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.320 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:02.320 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:02.320 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.321 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.321 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.321 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:02.321 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:02.321 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:02.321 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:34:02.321 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:02.321 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:02.321 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:02.321 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:02.321 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU0MTAyMjk2ZWNjNzBlMTMyODc0M2FhZmI5MGI0MWKvKrPV: 00:34:02.321 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWJjNjc4YWZhM2VmMWNiY2I2ZTMxYzBmMmRjOWFjNTg2NTAzMDk2YmExODM5YmVlOWU4ZGE1N2E3YzYwMWVlMieQyA0=: 00:34:02.321 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:02.321 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:02.321 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU0MTAyMjk2ZWNjNzBlMTMyODc0M2FhZmI5MGI0MWKvKrPV: 00:34:02.321 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWJjNjc4YWZhM2VmMWNiY2I2ZTMxYzBmMmRjOWFjNTg2NTAzMDk2YmExODM5YmVlOWU4ZGE1N2E3YzYwMWVlMieQyA0=: ]] 00:34:02.321 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWJjNjc4YWZhM2VmMWNiY2I2ZTMxYzBmMmRjOWFjNTg2NTAzMDk2YmExODM5YmVlOWU4ZGE1N2E3YzYwMWVlMieQyA0=: 00:34:02.321 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
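Every authentication round in these traces has the same shape: host/auth.sh@48-51 echo the negotiated hash, the FFDHE group and the DHHC-1 secrets into the kernel target's per-host configfs entry, and the SPDK host side then authenticates with the keyring entries registered via keyring_file_add_key earlier. The sketch below condenses one such round; the dhchap_* attribute names are assumed from the usual nvmet configfs layout (the xtrace output does not show the redirection targets), the secret values are placeholders, and rpc_cmd is the test suite's wrapper around scripts/rpc.py.

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)'           > "$host/dhchap_hash"       # digest the target will use
echo ffdhe2048                > "$host/dhchap_dhgroup"    # FFDHE group for the exchange
echo 'DHHC-1:00:<base64...>:' > "$host/dhchap_key"        # host secret (placeholder)
echo 'DHHC-1:02:<base64...>:' > "$host/dhchap_ctrl_key"   # optional controller secret

rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
rpc_cmd bdev_nvme_get_controllers      # expect "nvme0" back if authentication succeeded
rpc_cmd bdev_nvme_detach_controller nvme0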
00:34:02.321 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:02.321 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:02.321 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:02.321 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:02.321 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:02.321 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:02.321 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.321 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.321 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.321 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:02.321 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:02.321 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:02.321 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:02.321 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:02.321 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:02.321 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:02.321 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:02.321 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:02.321 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:02.321 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:02.321 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:02.321 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.321 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.579 nvme0n1 00:34:02.579 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.579 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:02.579 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.579 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.579 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:02.579 01:44:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.579 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:02.579 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:02.579 01:44:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.579 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.579 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.579 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:02.579 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:02.579 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:02.579 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:02.579 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:02.579 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:02.579 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjVmYjg0Y2U1YTBhY2NjNTc3NDI2MDdjNGIxODBlZGExMDc4OWU3MDMxNjE0ZTE0eUdqVw==: 00:34:02.579 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: 00:34:02.579 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:02.579 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:02.579 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjVmYjg0Y2U1YTBhY2NjNTc3NDI2MDdjNGIxODBlZGExMDc4OWU3MDMxNjE0ZTE0eUdqVw==: 00:34:02.579 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: ]] 00:34:02.579 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: 00:34:02.579 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:34:02.579 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:02.579 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:02.579 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:02.579 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:02.579 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:02.579 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:02.579 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.579 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.579 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.579 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:02.579 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:02.579 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:02.579 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:02.579 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:02.579 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:02.579 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:02.579 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:02.579 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:02.579 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:02.579 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:02.579 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:02.579 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.579 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.837 nvme0n1 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY0OGQ2NGU5OTFlYTVlOWU2YmY4OWI1ZWQ2ZGNkZTm/BOw5: 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzFmMTdmZTUzZjM5NzdkZjRmZDBjMTBiMGY5YmJjNGNX/R7z: 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:NWY0OGQ2NGU5OTFlYTVlOWU2YmY4OWI1ZWQ2ZGNkZTm/BOw5: 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzFmMTdmZTUzZjM5NzdkZjRmZDBjMTBiMGY5YmJjNGNX/R7z: ]] 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzFmMTdmZTUzZjM5NzdkZjRmZDBjMTBiMGY5YmJjNGNX/R7z: 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:02.837 nvme0n1 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:34:02.837 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.095 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.095 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:03.095 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:03.095 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.095 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.095 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.095 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:03.095 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:34:03.095 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:03.095 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:03.095 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:03.095 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:03.095 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjFjNzBmMTg1YTg5NTRhOWRiODE5YTkwMzkyNzJhZjdiZGVlNzE1N2IyNjdkODJjE4hdIA==: 00:34:03.095 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmZmMWM5M2VlYjhjYmNjODRkMjlhZTM0MzhmMjRkNDVE0hFW: 00:34:03.095 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:03.096 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:03.096 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjFjNzBmMTg1YTg5NTRhOWRiODE5YTkwMzkyNzJhZjdiZGVlNzE1N2IyNjdkODJjE4hdIA==: 00:34:03.096 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmZmMWM5M2VlYjhjYmNjODRkMjlhZTM0MzhmMjRkNDVE0hFW: ]] 00:34:03.096 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmZmMWM5M2VlYjhjYmNjODRkMjlhZTM0MzhmMjRkNDVE0hFW: 00:34:03.096 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:34:03.096 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:03.096 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:03.096 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:03.096 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:03.096 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:03.096 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:03.096 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.096 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.096 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.096 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:34:03.096 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:03.096 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:03.096 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:03.096 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:03.096 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:03.096 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:03.096 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:03.096 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:03.096 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:03.096 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:03.096 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:03.096 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.096 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.096 nvme0n1 00:34:03.096 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.096 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:03.096 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.096 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:03.096 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.096 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.096 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:03.096 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:03.096 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.096 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.354 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.354 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:03.354 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:34:03.354 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:03.354 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:03.354 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:03.354 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:03.354 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MTcwODI3Njg5NGY4NGIxYzRhZjUzNWMzYjkwNmQxMGNjYmQ1YWYxYzkyY2E1OTFlZDc3Nzc0NGZkYTUyOTQ0Zu5bmQM=: 00:34:03.354 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:03.354 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:03.354 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTcwODI3Njg5NGY4NGIxYzRhZjUzNWMzYjkwNmQxMGNjYmQ1YWYxYzkyY2E1OTFlZDc3Nzc0NGZkYTUyOTQ0Zu5bmQM=: 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.355 nvme0n1 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.355 01:44:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU0MTAyMjk2ZWNjNzBlMTMyODc0M2FhZmI5MGI0MWKvKrPV: 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWJjNjc4YWZhM2VmMWNiY2I2ZTMxYzBmMmRjOWFjNTg2NTAzMDk2YmExODM5YmVlOWU4ZGE1N2E3YzYwMWVlMieQyA0=: 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU0MTAyMjk2ZWNjNzBlMTMyODc0M2FhZmI5MGI0MWKvKrPV: 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWJjNjc4YWZhM2VmMWNiY2I2ZTMxYzBmMmRjOWFjNTg2NTAzMDk2YmExODM5YmVlOWU4ZGE1N2E3YzYwMWVlMieQyA0=: ]] 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWJjNjc4YWZhM2VmMWNiY2I2ZTMxYzBmMmRjOWFjNTg2NTAzMDk2YmExODM5YmVlOWU4ZGE1N2E3YzYwMWVlMieQyA0=: 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.355 01:44:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.613 nvme0n1 00:34:03.613 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.613 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:03.613 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.613 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.613 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:03.613 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.613 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:03.613 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:03.613 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.613 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.613 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.613 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:03.613 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:34:03.613 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:34:03.613 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:03.613 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:03.613 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:03.613 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjVmYjg0Y2U1YTBhY2NjNTc3NDI2MDdjNGIxODBlZGExMDc4OWU3MDMxNjE0ZTE0eUdqVw==: 00:34:03.613 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: 00:34:03.613 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:03.613 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:03.613 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjVmYjg0Y2U1YTBhY2NjNTc3NDI2MDdjNGIxODBlZGExMDc4OWU3MDMxNjE0ZTE0eUdqVw==: 00:34:03.613 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: ]] 00:34:03.613 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: 00:34:03.613 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:34:03.613 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:03.613 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:03.613 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:03.613 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:03.613 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:03.613 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:03.613 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.613 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.613 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.613 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:03.613 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:03.613 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:03.613 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:03.613 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:03.613 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:03.613 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:03.613 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:03.613 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:03.613 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:03.613 
01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:03.613 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:03.613 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.613 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.871 nvme0n1 00:34:03.871 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.871 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:03.871 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:03.871 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.871 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.871 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.871 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:03.871 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:03.871 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.871 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.871 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.871 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:03.871 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:34:03.871 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:03.871 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:03.871 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:03.871 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:03.871 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY0OGQ2NGU5OTFlYTVlOWU2YmY4OWI1ZWQ2ZGNkZTm/BOw5: 00:34:03.871 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzFmMTdmZTUzZjM5NzdkZjRmZDBjMTBiMGY5YmJjNGNX/R7z: 00:34:03.871 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:03.871 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:03.871 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY0OGQ2NGU5OTFlYTVlOWU2YmY4OWI1ZWQ2ZGNkZTm/BOw5: 00:34:03.871 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzFmMTdmZTUzZjM5NzdkZjRmZDBjMTBiMGY5YmJjNGNX/R7z: ]] 00:34:03.871 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzFmMTdmZTUzZjM5NzdkZjRmZDBjMTBiMGY5YmJjNGNX/R7z: 00:34:03.871 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:34:03.871 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:03.871 01:44:49 
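Each connect_authenticate pass in this trace is the same four-step RPC exchange on the SPDK initiator: restrict the host to the digest/dhgroup under test, attach with the per-keyid secrets, confirm the controller came up, and detach before the next combination. Condensed from the ffdhe3072/keyid 1 iteration above (rpc_cmd is the test framework's rpc.py wrapper; the address, NQNs, and key names are exactly those in the trace, and the key/ckey objects themselves are registered earlier in auth.sh, outside this excerpt):

  rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'    # expected to print nvme0
  rpc_cmd bdev_nvme_detach_controller nvme0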
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:03.871 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:03.871 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:03.871 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:03.871 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:03.871 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.871 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.871 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:03.871 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:03.871 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:03.871 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:03.871 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:03.871 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:03.871 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:03.871 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:03.871 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:03.871 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:03.871 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:03.871 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:03.871 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:03.871 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:03.871 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.142 nvme0n1 00:34:04.142 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.142 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:04.142 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.142 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.142 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:04.142 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.142 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:04.142 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:04.142 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.142 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:04.142 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.142 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:04.142 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:34:04.142 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:04.142 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:04.142 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:04.142 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:04.142 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjFjNzBmMTg1YTg5NTRhOWRiODE5YTkwMzkyNzJhZjdiZGVlNzE1N2IyNjdkODJjE4hdIA==: 00:34:04.142 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmZmMWM5M2VlYjhjYmNjODRkMjlhZTM0MzhmMjRkNDVE0hFW: 00:34:04.142 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:04.142 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:04.142 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjFjNzBmMTg1YTg5NTRhOWRiODE5YTkwMzkyNzJhZjdiZGVlNzE1N2IyNjdkODJjE4hdIA==: 00:34:04.142 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmZmMWM5M2VlYjhjYmNjODRkMjlhZTM0MzhmMjRkNDVE0hFW: ]] 00:34:04.142 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmZmMWM5M2VlYjhjYmNjODRkMjlhZTM0MzhmMjRkNDVE0hFW: 00:34:04.142 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:34:04.142 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:04.142 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:04.142 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:04.142 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:04.142 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:04.142 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:04.142 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.142 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.142 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.142 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:04.142 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:04.142 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:04.142 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:04.142 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:04.142 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:04.142 01:44:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:04.142 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:04.142 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:04.142 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:04.142 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:04.142 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:04.142 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.142 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.401 nvme0n1 00:34:04.401 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.401 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:04.401 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.401 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:04.401 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.401 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.401 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:04.401 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:04.401 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.401 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.401 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.401 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:04.401 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:34:04.401 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:04.401 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:04.401 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:04.401 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:04.401 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTcwODI3Njg5NGY4NGIxYzRhZjUzNWMzYjkwNmQxMGNjYmQ1YWYxYzkyY2E1OTFlZDc3Nzc0NGZkYTUyOTQ0Zu5bmQM=: 00:34:04.401 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:04.401 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:04.401 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:04.401 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTcwODI3Njg5NGY4NGIxYzRhZjUzNWMzYjkwNmQxMGNjYmQ1YWYxYzkyY2E1OTFlZDc3Nzc0NGZkYTUyOTQ0Zu5bmQM=: 00:34:04.401 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:04.401 01:44:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:34:04.401 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:04.401 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:04.401 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:04.401 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:04.401 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:04.401 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:04.401 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.401 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.401 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.401 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:04.401 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:04.401 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:04.401 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:04.401 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:04.401 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:04.401 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:04.401 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:04.401 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:04.401 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:04.401 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:04.401 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:04.401 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.401 01:44:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.659 nvme0n1 00:34:04.659 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.659 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:04.659 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:04.659 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.659 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.659 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.659 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:04.659 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
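keyid 4 is the one slot with no controller secret: the [[ -z '' ]] checks above show ckey expanding to an empty string, so this pass only exercises unidirectional (host-authenticated) DH-HMAC-CHAP and the attach at host/auth.sh@61 carries --dhchap-key key4 with no --dhchap-ctrlr-key. The script handles that with the ${var:+word} expansion visible at host/auth.sh@58; roughly how it feeds the attach (the final command line here is inferred from the expanded rpc_cmd in the trace, not copied from the script source):

  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})   # empty array when no controller secret exists
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" "${ckey[@]}"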
bdev_nvme_detach_controller nvme0 00:34:04.659 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.659 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.659 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.659 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:04.659 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:04.659 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:34:04.659 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:04.659 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:04.659 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:04.661 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:04.661 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU0MTAyMjk2ZWNjNzBlMTMyODc0M2FhZmI5MGI0MWKvKrPV: 00:34:04.661 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWJjNjc4YWZhM2VmMWNiY2I2ZTMxYzBmMmRjOWFjNTg2NTAzMDk2YmExODM5YmVlOWU4ZGE1N2E3YzYwMWVlMieQyA0=: 00:34:04.661 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:04.661 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:04.661 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU0MTAyMjk2ZWNjNzBlMTMyODc0M2FhZmI5MGI0MWKvKrPV: 00:34:04.661 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWJjNjc4YWZhM2VmMWNiY2I2ZTMxYzBmMmRjOWFjNTg2NTAzMDk2YmExODM5YmVlOWU4ZGE1N2E3YzYwMWVlMieQyA0=: ]] 00:34:04.661 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWJjNjc4YWZhM2VmMWNiY2I2ZTMxYzBmMmRjOWFjNTg2NTAzMDk2YmExODM5YmVlOWU4ZGE1N2E3YzYwMWVlMieQyA0=: 00:34:04.661 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:34:04.661 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:04.661 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:04.661 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:04.661 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:04.661 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:04.661 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:04.661 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.662 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.662 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:04.662 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:04.662 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:04.662 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates=() 00:34:04.662 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:04.662 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:04.662 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:04.662 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:04.662 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:04.662 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:04.662 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:04.662 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:04.662 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:04.662 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:04.662 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.228 nvme0n1 00:34:05.228 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.228 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:05.228 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:05.228 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.228 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.228 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.228 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:05.228 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:05.228 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.228 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.228 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.228 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:05.228 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:34:05.228 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:05.228 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:05.228 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:05.228 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:05.228 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjVmYjg0Y2U1YTBhY2NjNTc3NDI2MDdjNGIxODBlZGExMDc4OWU3MDMxNjE0ZTE0eUdqVw==: 00:34:05.228 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: 00:34:05.228 01:44:50 
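This whole excerpt is one digest (sha256) walked through successive DH groups: ffdhe2048 finishes at the top, then ffdhe3072, the for-dhgroup marker at host/auth.sh@101 just above advances to ffdhe4096, and ffdhe6144 begins near the end of the chunk. Reconstructed from the @101-@104 markers, the driving loops look roughly like this; the full contents of the dhgroups and keys arrays are defined earlier in auth.sh and are not shown here:

  for dhgroup in "${dhgroups[@]}"; do        # ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ...
      for keyid in "${!keys[@]}"; do         # 0 1 2 3 4
          nvmet_auth_set_key "sha256" "$dhgroup" "$keyid"      # target-side key setup (@103)
          connect_authenticate "sha256" "$dhgroup" "$keyid"    # initiator attach/verify/detach (@104)
      done
  done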
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:05.228 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:05.228 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjVmYjg0Y2U1YTBhY2NjNTc3NDI2MDdjNGIxODBlZGExMDc4OWU3MDMxNjE0ZTE0eUdqVw==: 00:34:05.228 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: ]] 00:34:05.228 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: 00:34:05.228 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:34:05.228 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:05.228 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:05.228 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:05.228 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:05.228 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:05.228 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:05.228 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.228 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.228 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.228 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:05.228 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:05.228 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:05.228 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:05.228 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:05.228 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:05.228 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:05.228 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:05.228 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:05.228 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:05.228 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:05.228 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:05.228 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.228 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.486 nvme0n1 00:34:05.486 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:34:05.486 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:05.486 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.487 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:05.487 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.487 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.487 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:05.487 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:05.487 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.487 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.487 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.487 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:05.487 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:34:05.487 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:05.487 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:05.487 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:05.487 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:05.487 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY0OGQ2NGU5OTFlYTVlOWU2YmY4OWI1ZWQ2ZGNkZTm/BOw5: 00:34:05.487 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzFmMTdmZTUzZjM5NzdkZjRmZDBjMTBiMGY5YmJjNGNX/R7z: 00:34:05.487 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:05.487 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:05.487 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY0OGQ2NGU5OTFlYTVlOWU2YmY4OWI1ZWQ2ZGNkZTm/BOw5: 00:34:05.487 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzFmMTdmZTUzZjM5NzdkZjRmZDBjMTBiMGY5YmJjNGNX/R7z: ]] 00:34:05.487 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzFmMTdmZTUzZjM5NzdkZjRmZDBjMTBiMGY5YmJjNGNX/R7z: 00:34:05.487 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:34:05.487 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:05.487 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:05.487 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:05.487 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:05.487 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:05.487 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:05.487 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:34:05.487 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.487 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.487 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:05.487 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:05.487 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:05.487 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:05.487 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:05.487 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:05.487 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:05.487 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:05.487 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:05.487 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:05.487 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:05.487 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:05.487 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.487 01:44:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.745 nvme0n1 00:34:05.745 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.745 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:05.745 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.745 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.745 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:05.745 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.745 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:05.745 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:05.745 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.745 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.745 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.745 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:05.745 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:34:05.745 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:05.745 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:05.745 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:34:05.745 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:05.745 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjFjNzBmMTg1YTg5NTRhOWRiODE5YTkwMzkyNzJhZjdiZGVlNzE1N2IyNjdkODJjE4hdIA==: 00:34:05.745 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmZmMWM5M2VlYjhjYmNjODRkMjlhZTM0MzhmMjRkNDVE0hFW: 00:34:05.745 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:05.745 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:05.745 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjFjNzBmMTg1YTg5NTRhOWRiODE5YTkwMzkyNzJhZjdiZGVlNzE1N2IyNjdkODJjE4hdIA==: 00:34:05.745 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmZmMWM5M2VlYjhjYmNjODRkMjlhZTM0MzhmMjRkNDVE0hFW: ]] 00:34:05.745 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmZmMWM5M2VlYjhjYmNjODRkMjlhZTM0MzhmMjRkNDVE0hFW: 00:34:05.745 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:34:05.746 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:05.746 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:05.746 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:05.746 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:05.746 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:05.746 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:05.746 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.746 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.746 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:05.746 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:05.746 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:05.746 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:05.746 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:05.746 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:05.746 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:05.746 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:05.746 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:05.746 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:05.746 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:05.746 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:05.746 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:05.746 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:05.746 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.004 nvme0n1 00:34:06.004 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.004 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:06.004 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:06.004 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.004 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.262 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.262 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:06.262 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:06.262 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.262 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.262 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.262 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:06.262 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:34:06.262 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:06.262 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:06.262 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:06.262 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:06.262 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTcwODI3Njg5NGY4NGIxYzRhZjUzNWMzYjkwNmQxMGNjYmQ1YWYxYzkyY2E1OTFlZDc3Nzc0NGZkYTUyOTQ0Zu5bmQM=: 00:34:06.262 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:06.262 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:06.262 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:06.262 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTcwODI3Njg5NGY4NGIxYzRhZjUzNWMzYjkwNmQxMGNjYmQ1YWYxYzkyY2E1OTFlZDc3Nzc0NGZkYTUyOTQ0Zu5bmQM=: 00:34:06.262 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:06.262 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:34:06.262 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:06.262 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:06.262 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:06.262 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:06.262 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:06.262 01:44:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:06.262 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.262 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.262 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.262 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:06.262 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:06.262 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:06.262 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:06.262 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:06.262 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:06.262 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:06.262 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:06.262 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:06.262 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:06.262 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:06.262 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:06.262 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.263 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.521 nvme0n1 00:34:06.521 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.521 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:06.521 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:06.521 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.521 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.521 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.521 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:06.521 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:06.521 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.521 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.521 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.521 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:06.521 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:06.521 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:34:06.521 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:06.521 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:06.521 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:06.521 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:06.521 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU0MTAyMjk2ZWNjNzBlMTMyODc0M2FhZmI5MGI0MWKvKrPV: 00:34:06.521 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWJjNjc4YWZhM2VmMWNiY2I2ZTMxYzBmMmRjOWFjNTg2NTAzMDk2YmExODM5YmVlOWU4ZGE1N2E3YzYwMWVlMieQyA0=: 00:34:06.521 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:06.521 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:06.521 01:44:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU0MTAyMjk2ZWNjNzBlMTMyODc0M2FhZmI5MGI0MWKvKrPV: 00:34:06.521 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWJjNjc4YWZhM2VmMWNiY2I2ZTMxYzBmMmRjOWFjNTg2NTAzMDk2YmExODM5YmVlOWU4ZGE1N2E3YzYwMWVlMieQyA0=: ]] 00:34:06.521 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWJjNjc4YWZhM2VmMWNiY2I2ZTMxYzBmMmRjOWFjNTg2NTAzMDk2YmExODM5YmVlOWU4ZGE1N2E3YzYwMWVlMieQyA0=: 00:34:06.521 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:34:06.521 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:06.521 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:06.521 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:06.521 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:06.521 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:06.521 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:06.521 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.521 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:06.521 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:06.521 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:06.521 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:06.521 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:06.521 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:06.521 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:06.521 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:06.521 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:06.521 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:06.521 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # 
ip=NVMF_INITIATOR_IP 00:34:06.521 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:06.521 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:06.521 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:06.521 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:06.521 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.087 nvme0n1 00:34:07.087 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.087 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.087 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:07.087 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.087 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.087 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.087 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.087 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.087 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.087 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.087 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.087 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.087 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:34:07.087 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.087 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:07.087 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:07.087 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:07.087 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjVmYjg0Y2U1YTBhY2NjNTc3NDI2MDdjNGIxODBlZGExMDc4OWU3MDMxNjE0ZTE0eUdqVw==: 00:34:07.087 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: 00:34:07.087 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:07.087 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:07.087 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjVmYjg0Y2U1YTBhY2NjNTc3NDI2MDdjNGIxODBlZGExMDc4OWU3MDMxNjE0ZTE0eUdqVw==: 00:34:07.087 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: ]] 00:34:07.087 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: 
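For reference, each iteration traced here exercises SPDK's DH-HMAC-CHAP path end to end: the nvmet_auth_set_key helper provisions the kernel nvmet host entry with the digest, DH group and DHHC-1 secrets that are echoed in the trace, and connect_authenticate then restricts the initiator to that single digest/DH-group pair and reconnects with the matching key. The sketch below reproduces the host-side sequence with scripts/rpc.py (rpc_cmd in the trace appears to be the harness wrapper around it); the key names key1/ckey1, the NQNs and the 10.0.0.1:4420 listener are taken from the trace, and the keyring entries are assumed to have been registered earlier in the test.

  # Allow only hmac(sha256) with ffdhe6144 on the host side.
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
  # Connect with bidirectional DH-HMAC-CHAP using the pre-registered keys.
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # Verify the controller came up, then detach before the next dhgroup/keyid pass.
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'
  scripts/rpc.py bdev_nvme_detach_controller nvme0
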
00:34:07.087 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:34:07.087 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.087 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:07.087 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:07.087 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:07.087 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.087 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:07.087 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.087 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.087 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.087 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:07.087 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:07.087 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:07.087 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:07.087 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.087 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.088 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:07.088 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.088 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:07.088 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:07.088 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:07.088 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:07.088 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.088 01:44:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.654 nvme0n1 00:34:07.654 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.654 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:07.654 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:07.654 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.654 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.654 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.654 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:07.654 01:44:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:07.654 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.654 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.654 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.654 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:07.654 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:34:07.654 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:07.654 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:07.654 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:07.654 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:07.654 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY0OGQ2NGU5OTFlYTVlOWU2YmY4OWI1ZWQ2ZGNkZTm/BOw5: 00:34:07.654 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzFmMTdmZTUzZjM5NzdkZjRmZDBjMTBiMGY5YmJjNGNX/R7z: 00:34:07.654 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:07.654 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:07.654 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY0OGQ2NGU5OTFlYTVlOWU2YmY4OWI1ZWQ2ZGNkZTm/BOw5: 00:34:07.654 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzFmMTdmZTUzZjM5NzdkZjRmZDBjMTBiMGY5YmJjNGNX/R7z: ]] 00:34:07.654 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzFmMTdmZTUzZjM5NzdkZjRmZDBjMTBiMGY5YmJjNGNX/R7z: 00:34:07.654 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:34:07.654 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:07.655 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:07.655 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:07.655 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:07.655 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:07.655 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:07.655 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.655 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.655 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:07.655 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:07.655 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:07.655 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:07.655 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:07.655 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:07.655 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:07.655 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:07.655 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:07.655 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:07.655 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:07.655 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:07.655 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:07.655 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:07.655 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.220 nvme0n1 00:34:08.220 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.220 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:08.221 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.221 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.221 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:08.221 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.221 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:08.221 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:08.221 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.221 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.479 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.479 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:08.479 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:34:08.479 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:08.479 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:08.479 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:08.479 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:08.479 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjFjNzBmMTg1YTg5NTRhOWRiODE5YTkwMzkyNzJhZjdiZGVlNzE1N2IyNjdkODJjE4hdIA==: 00:34:08.479 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmZmMWM5M2VlYjhjYmNjODRkMjlhZTM0MzhmMjRkNDVE0hFW: 00:34:08.479 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:08.479 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:08.479 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:ZjFjNzBmMTg1YTg5NTRhOWRiODE5YTkwMzkyNzJhZjdiZGVlNzE1N2IyNjdkODJjE4hdIA==: 00:34:08.479 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmZmMWM5M2VlYjhjYmNjODRkMjlhZTM0MzhmMjRkNDVE0hFW: ]] 00:34:08.479 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmZmMWM5M2VlYjhjYmNjODRkMjlhZTM0MzhmMjRkNDVE0hFW: 00:34:08.479 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:34:08.479 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:08.479 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:08.479 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:08.479 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:08.479 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:08.479 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:08.479 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.479 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.479 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:08.479 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:08.479 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:08.479 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:08.479 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:08.479 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:08.479 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:08.479 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:08.479 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:08.479 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:08.479 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:08.479 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:08.479 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:08.479 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:08.479 01:44:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.045 nvme0n1 00:34:09.045 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.045 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:09.045 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:09.045 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.045 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.045 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.045 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:09.045 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:09.045 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.045 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.045 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.045 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:09.045 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:34:09.045 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:09.045 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:09.045 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:09.045 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:09.045 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTcwODI3Njg5NGY4NGIxYzRhZjUzNWMzYjkwNmQxMGNjYmQ1YWYxYzkyY2E1OTFlZDc3Nzc0NGZkYTUyOTQ0Zu5bmQM=: 00:34:09.045 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:09.045 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:09.045 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:09.045 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTcwODI3Njg5NGY4NGIxYzRhZjUzNWMzYjkwNmQxMGNjYmQ1YWYxYzkyY2E1OTFlZDc3Nzc0NGZkYTUyOTQ0Zu5bmQM=: 00:34:09.045 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:09.045 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:34:09.045 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:09.045 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:09.045 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:09.045 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:09.045 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:09.045 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:09.045 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.045 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.045 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.045 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:09.045 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:09.045 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@768 -- # ip_candidates=() 00:34:09.045 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:09.046 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:09.046 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:09.046 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:09.046 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:09.046 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:09.046 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:09.046 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:09.046 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:09.046 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.046 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.611 nvme0n1 00:34:09.611 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.611 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:09.611 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.611 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:09.611 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.611 01:44:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.611 01:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:09.611 01:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:09.611 01:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.611 01:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.611 01:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.611 01:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:09.611 01:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:09.611 01:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:34:09.611 01:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:09.611 01:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:09.611 01:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:09.611 01:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:09.611 01:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU0MTAyMjk2ZWNjNzBlMTMyODc0M2FhZmI5MGI0MWKvKrPV: 00:34:09.611 01:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YWJjNjc4YWZhM2VmMWNiY2I2ZTMxYzBmMmRjOWFjNTg2NTAzMDk2YmExODM5YmVlOWU4ZGE1N2E3YzYwMWVlMieQyA0=: 00:34:09.611 01:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:09.611 01:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:09.611 01:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU0MTAyMjk2ZWNjNzBlMTMyODc0M2FhZmI5MGI0MWKvKrPV: 00:34:09.611 01:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWJjNjc4YWZhM2VmMWNiY2I2ZTMxYzBmMmRjOWFjNTg2NTAzMDk2YmExODM5YmVlOWU4ZGE1N2E3YzYwMWVlMieQyA0=: ]] 00:34:09.612 01:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWJjNjc4YWZhM2VmMWNiY2I2ZTMxYzBmMmRjOWFjNTg2NTAzMDk2YmExODM5YmVlOWU4ZGE1N2E3YzYwMWVlMieQyA0=: 00:34:09.612 01:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:34:09.612 01:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:09.612 01:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:09.612 01:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:09.612 01:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:09.612 01:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:09.612 01:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:09.612 01:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.612 01:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.612 01:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:09.612 01:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:09.612 01:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:09.612 01:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:09.612 01:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:09.612 01:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:09.612 01:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:09.612 01:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:09.612 01:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:09.612 01:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:09.612 01:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:09.612 01:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:09.612 01:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:09.612 01:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:09.612 01:44:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:10.546 nvme0n1 00:34:10.546 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.546 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:10.546 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:10.546 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.546 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.546 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.546 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.546 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:10.546 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.546 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.546 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.546 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:10.546 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:34:10.546 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:10.546 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:10.546 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:10.546 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:10.546 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjVmYjg0Y2U1YTBhY2NjNTc3NDI2MDdjNGIxODBlZGExMDc4OWU3MDMxNjE0ZTE0eUdqVw==: 00:34:10.546 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: 00:34:10.546 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:10.546 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:10.546 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjVmYjg0Y2U1YTBhY2NjNTc3NDI2MDdjNGIxODBlZGExMDc4OWU3MDMxNjE0ZTE0eUdqVw==: 00:34:10.546 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: ]] 00:34:10.546 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: 00:34:10.546 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:34:10.546 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:10.546 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:10.546 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:10.546 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:10.546 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:34:10.546 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:10.546 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.546 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.546 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.546 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:10.546 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:10.546 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:10.546 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:10.546 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.546 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.546 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:10.546 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:10.546 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:10.546 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:10.546 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:10.546 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:10.546 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.546 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.480 nvme0n1 00:34:11.480 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.480 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.480 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.480 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.480 01:44:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.480 01:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.480 01:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.480 01:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:11.480 01:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.480 01:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.480 01:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.480 01:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:11.480 01:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:34:11.480 
01:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:11.480 01:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:11.480 01:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:11.480 01:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:11.480 01:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY0OGQ2NGU5OTFlYTVlOWU2YmY4OWI1ZWQ2ZGNkZTm/BOw5: 00:34:11.480 01:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzFmMTdmZTUzZjM5NzdkZjRmZDBjMTBiMGY5YmJjNGNX/R7z: 00:34:11.480 01:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:11.480 01:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:11.480 01:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY0OGQ2NGU5OTFlYTVlOWU2YmY4OWI1ZWQ2ZGNkZTm/BOw5: 00:34:11.480 01:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzFmMTdmZTUzZjM5NzdkZjRmZDBjMTBiMGY5YmJjNGNX/R7z: ]] 00:34:11.480 01:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzFmMTdmZTUzZjM5NzdkZjRmZDBjMTBiMGY5YmJjNGNX/R7z: 00:34:11.480 01:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:34:11.480 01:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:11.480 01:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:11.480 01:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:11.480 01:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:11.480 01:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:11.480 01:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:11.480 01:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.480 01:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.480 01:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.744 01:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:11.744 01:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:11.744 01:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:11.744 01:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:11.744 01:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.744 01:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.744 01:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:11.744 01:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.744 01:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:11.744 01:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:11.744 01:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:11.744 01:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:11.744 01:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.744 01:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.750 nvme0n1 00:34:12.750 01:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.750 01:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.750 01:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.750 01:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.750 01:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.750 01:44:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.750 01:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.750 01:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.750 01:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.750 01:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.750 01:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.750 01:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.750 01:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:34:12.750 01:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.750 01:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:12.750 01:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:12.750 01:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:12.750 01:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjFjNzBmMTg1YTg5NTRhOWRiODE5YTkwMzkyNzJhZjdiZGVlNzE1N2IyNjdkODJjE4hdIA==: 00:34:12.750 01:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmZmMWM5M2VlYjhjYmNjODRkMjlhZTM0MzhmMjRkNDVE0hFW: 00:34:12.750 01:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:12.750 01:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:12.750 01:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjFjNzBmMTg1YTg5NTRhOWRiODE5YTkwMzkyNzJhZjdiZGVlNzE1N2IyNjdkODJjE4hdIA==: 00:34:12.750 01:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmZmMWM5M2VlYjhjYmNjODRkMjlhZTM0MzhmMjRkNDVE0hFW: ]] 00:34:12.750 01:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmZmMWM5M2VlYjhjYmNjODRkMjlhZTM0MzhmMjRkNDVE0hFW: 00:34:12.750 01:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:34:12.750 01:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.750 
01:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:12.750 01:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:12.750 01:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:12.750 01:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.750 01:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:12.750 01:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.750 01:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.750 01:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.750 01:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.750 01:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:12.750 01:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:12.750 01:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:12.750 01:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.750 01:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.750 01:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:12.750 01:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.750 01:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:12.750 01:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:12.750 01:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:12.750 01:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:12.750 01:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.750 01:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.683 nvme0n1 00:34:13.683 01:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.683 01:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.683 01:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.683 01:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.683 01:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.683 01:44:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.683 01:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.683 01:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:13.683 01:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.683 01:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:13.683 01:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.683 01:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:13.683 01:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:34:13.683 01:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.683 01:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:13.683 01:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:13.683 01:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:13.683 01:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTcwODI3Njg5NGY4NGIxYzRhZjUzNWMzYjkwNmQxMGNjYmQ1YWYxYzkyY2E1OTFlZDc3Nzc0NGZkYTUyOTQ0Zu5bmQM=: 00:34:13.683 01:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:13.683 01:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:13.683 01:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:13.683 01:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTcwODI3Njg5NGY4NGIxYzRhZjUzNWMzYjkwNmQxMGNjYmQ1YWYxYzkyY2E1OTFlZDc3Nzc0NGZkYTUyOTQ0Zu5bmQM=: 00:34:13.683 01:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:13.683 01:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:34:13.683 01:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:13.683 01:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:13.683 01:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:13.683 01:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:13.683 01:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.683 01:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:13.683 01:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.683 01:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.683 01:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.683 01:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.683 01:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:13.683 01:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:13.683 01:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:13.683 01:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.683 01:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.683 01:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:13.683 01:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.683 01:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:13.683 01:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:13.683 01:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:13.683 01:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:13.683 01:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.683 01:44:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.617 nvme0n1 00:34:14.617 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.617 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.617 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.617 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.617 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.617 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.617 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.617 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.617 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.617 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.617 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.617 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:14.617 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:14.617 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.617 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:34:14.617 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.617 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:14.617 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:14.617 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:14.617 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU0MTAyMjk2ZWNjNzBlMTMyODc0M2FhZmI5MGI0MWKvKrPV: 00:34:14.617 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWJjNjc4YWZhM2VmMWNiY2I2ZTMxYzBmMmRjOWFjNTg2NTAzMDk2YmExODM5YmVlOWU4ZGE1N2E3YzYwMWVlMieQyA0=: 00:34:14.617 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:14.617 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:14.617 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU0MTAyMjk2ZWNjNzBlMTMyODc0M2FhZmI5MGI0MWKvKrPV: 00:34:14.617 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:YWJjNjc4YWZhM2VmMWNiY2I2ZTMxYzBmMmRjOWFjNTg2NTAzMDk2YmExODM5YmVlOWU4ZGE1N2E3YzYwMWVlMieQyA0=: ]] 00:34:14.617 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWJjNjc4YWZhM2VmMWNiY2I2ZTMxYzBmMmRjOWFjNTg2NTAzMDk2YmExODM5YmVlOWU4ZGE1N2E3YzYwMWVlMieQyA0=: 00:34:14.617 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:34:14.617 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.617 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:14.617 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:14.617 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:14.617 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.617 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:14.617 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.617 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.617 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.617 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.617 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:14.617 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:14.617 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:14.617 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.617 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.617 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:14.617 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:14.617 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:14.617 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:14.617 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:14.617 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:14.617 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.617 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.876 nvme0n1 00:34:14.876 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.876 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.876 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.876 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.876 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:14.876 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.876 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.876 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.876 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.876 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.876 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.876 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.876 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:34:14.876 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.876 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:14.876 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:14.876 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:14.876 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjVmYjg0Y2U1YTBhY2NjNTc3NDI2MDdjNGIxODBlZGExMDc4OWU3MDMxNjE0ZTE0eUdqVw==: 00:34:14.876 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: 00:34:14.876 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:14.876 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:14.876 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjVmYjg0Y2U1YTBhY2NjNTc3NDI2MDdjNGIxODBlZGExMDc4OWU3MDMxNjE0ZTE0eUdqVw==: 00:34:14.876 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: ]] 00:34:14.876 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: 00:34:14.876 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:34:14.876 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.876 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:14.876 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:14.876 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:14.876 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.876 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:14.876 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.876 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.876 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.876 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:34:14.876 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:14.876 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:14.876 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:14.876 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.876 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.876 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:14.876 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:14.876 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:14.876 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:14.876 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:14.876 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:14.876 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.876 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.135 nvme0n1 00:34:15.135 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.135 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.135 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.135 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.135 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.135 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.135 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.135 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.135 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.135 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.135 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.135 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.135 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:34:15.135 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.135 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:15.135 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:15.135 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:15.135 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY0OGQ2NGU5OTFlYTVlOWU2YmY4OWI1ZWQ2ZGNkZTm/BOw5: 00:34:15.135 01:45:00 
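
The host side of each iteration in this trace reduces to two RPCs: bdev_nvme_set_options to pin the DH-HMAC-CHAP digest and DH group, then bdev_nvme_attach_controller with --dhchap-key/--dhchap-ctrlr-key. A minimal standalone sketch of that step is below; the flag values are copied from the trace, while the rpc.py path and the assumption that key0/ckey0 were already registered as named secrets earlier in auth.sh are mine.

  #!/usr/bin/env bash
  # Sketch of the host-side connect step traced above (flag names taken from the trace).
  # Assumes an SPDK target listening on 10.0.0.1:4420 and key names key0/ckey0
  # registered earlier by the test script; adjust paths and names as needed.
  set -e
  rpc=./scripts/rpc.py   # assumed location of the SPDK RPC client

  # Restrict this authentication attempt to sha384 + ffdhe2048
  $rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

  # Connect over TCP with DH-HMAC-CHAP, supplying both host and controller keys
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
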
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzFmMTdmZTUzZjM5NzdkZjRmZDBjMTBiMGY5YmJjNGNX/R7z: 00:34:15.135 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:15.135 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:15.135 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY0OGQ2NGU5OTFlYTVlOWU2YmY4OWI1ZWQ2ZGNkZTm/BOw5: 00:34:15.135 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzFmMTdmZTUzZjM5NzdkZjRmZDBjMTBiMGY5YmJjNGNX/R7z: ]] 00:34:15.135 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzFmMTdmZTUzZjM5NzdkZjRmZDBjMTBiMGY5YmJjNGNX/R7z: 00:34:15.135 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:34:15.135 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.135 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:15.135 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:15.135 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:15.135 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.135 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:15.135 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.135 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.135 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.135 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.135 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:15.135 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:15.135 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:15.135 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.135 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.135 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:15.135 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.135 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:15.135 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:15.135 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:15.135 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:15.135 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.135 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.394 nvme0n1 00:34:15.394 01:45:00 
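
After each attach, the script checks that exactly one controller named nvme0 came up and then tears it down before moving to the next key id. A trimmed sketch of that verify-and-detach step, under the same rpc.py assumption as the previous sketch and using jq the same way the trace does:

  #!/usr/bin/env bash
  # Verify that the authenticated controller exists, then detach it (sketch).
  set -e
  rpc=./scripts/rpc.py   # assumed location of the SPDK RPC client

  name=$($rpc bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]] || { echo "authenticated connect failed" >&2; exit 1; }
  $rpc bdev_nvme_detach_controller nvme0
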
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjFjNzBmMTg1YTg5NTRhOWRiODE5YTkwMzkyNzJhZjdiZGVlNzE1N2IyNjdkODJjE4hdIA==: 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmZmMWM5M2VlYjhjYmNjODRkMjlhZTM0MzhmMjRkNDVE0hFW: 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjFjNzBmMTg1YTg5NTRhOWRiODE5YTkwMzkyNzJhZjdiZGVlNzE1N2IyNjdkODJjE4hdIA==: 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmZmMWM5M2VlYjhjYmNjODRkMjlhZTM0MzhmMjRkNDVE0hFW: ]] 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmZmMWM5M2VlYjhjYmNjODRkMjlhZTM0MzhmMjRkNDVE0hFW: 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.394 nvme0n1 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.394 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.653 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.653 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.653 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.653 01:45:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.653 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.653 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.653 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:34:15.653 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.653 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:34:15.653 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:15.653 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:15.653 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTcwODI3Njg5NGY4NGIxYzRhZjUzNWMzYjkwNmQxMGNjYmQ1YWYxYzkyY2E1OTFlZDc3Nzc0NGZkYTUyOTQ0Zu5bmQM=: 00:34:15.653 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:15.653 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:15.653 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:15.653 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTcwODI3Njg5NGY4NGIxYzRhZjUzNWMzYjkwNmQxMGNjYmQ1YWYxYzkyY2E1OTFlZDc3Nzc0NGZkYTUyOTQ0Zu5bmQM=: 00:34:15.653 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:15.653 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:34:15.653 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.653 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:15.653 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:15.653 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:15.653 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.653 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:15.653 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.653 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.653 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.653 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.653 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:15.653 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:15.653 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:15.653 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.653 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.653 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:15.653 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.653 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:15.653 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:15.653 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:15.653 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:15.653 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.653 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.653 nvme0n1 00:34:15.653 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.653 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.653 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.653 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.653 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.653 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.653 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.653 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.653 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.653 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.912 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.912 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:15.912 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.912 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:34:15.912 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.912 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:15.912 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:15.912 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:15.912 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU0MTAyMjk2ZWNjNzBlMTMyODc0M2FhZmI5MGI0MWKvKrPV: 00:34:15.912 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWJjNjc4YWZhM2VmMWNiY2I2ZTMxYzBmMmRjOWFjNTg2NTAzMDk2YmExODM5YmVlOWU4ZGE1N2E3YzYwMWVlMieQyA0=: 00:34:15.912 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:15.912 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:15.912 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU0MTAyMjk2ZWNjNzBlMTMyODc0M2FhZmI5MGI0MWKvKrPV: 00:34:15.912 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWJjNjc4YWZhM2VmMWNiY2I2ZTMxYzBmMmRjOWFjNTg2NTAzMDk2YmExODM5YmVlOWU4ZGE1N2E3YzYwMWVlMieQyA0=: ]] 00:34:15.912 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWJjNjc4YWZhM2VmMWNiY2I2ZTMxYzBmMmRjOWFjNTg2NTAzMDk2YmExODM5YmVlOWU4ZGE1N2E3YzYwMWVlMieQyA0=: 00:34:15.912 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:34:15.912 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.912 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:15.912 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:34:15.912 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:15.912 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.912 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:15.912 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.912 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.912 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.912 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.912 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:15.912 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:15.912 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:15.912 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.912 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.912 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:15.912 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.912 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:15.912 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:15.912 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:15.912 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:15.912 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.912 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.912 nvme0n1 00:34:15.912 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.912 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.912 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.912 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.912 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.912 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.912 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.912 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.912 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.912 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.171 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.171 
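
The get_main_ns_ip expansion that repeats before every attach simply maps the transport to the environment variable holding the address to dial (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp) and prints its value, 10.0.0.1 in this run. A simplified reconstruction under those assumptions (variable names are the ones visible in the trace; the real helper lives in nvmf/common.sh):

  #!/usr/bin/env bash
  # Simplified reconstruction of the get_main_ns_ip helper seen in the trace.
  # Assumes TEST_TRANSPORT, NVMF_INITIATOR_IP and NVMF_FIRST_TARGET_IP are
  # exported by the surrounding test environment.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP

      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}   # e.g. NVMF_INITIATOR_IP
      ip=${!ip}                              # dereference, 10.0.0.1 in this run
      [[ -z $ip ]] && return 1
      echo "$ip"
  }
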
01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.171 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:34:16.171 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.171 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:16.171 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:16.171 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:16.171 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjVmYjg0Y2U1YTBhY2NjNTc3NDI2MDdjNGIxODBlZGExMDc4OWU3MDMxNjE0ZTE0eUdqVw==: 00:34:16.171 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: 00:34:16.171 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:16.171 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:16.171 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjVmYjg0Y2U1YTBhY2NjNTc3NDI2MDdjNGIxODBlZGExMDc4OWU3MDMxNjE0ZTE0eUdqVw==: 00:34:16.171 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: ]] 00:34:16.171 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: 00:34:16.171 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:34:16.171 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.171 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:16.171 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:16.171 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:16.171 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.171 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:16.171 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.171 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.171 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.171 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.171 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:16.171 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:16.171 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:16.171 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.171 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.171 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:16.171 01:45:01 
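
On the target side, nvmet_auth_set_key consumes the digest, DH group and DHHC-1 secrets echoed above. As a rough sketch of what writes like these typically configure on a Linux kernel nvmet target: the configfs paths and attribute names below are an assumption based on the standard nvmet auth layout, not something this trace prints.

  #!/usr/bin/env bash
  # Rough sketch of the target-side half of the loop: push one DH-HMAC-CHAP
  # key pair into the kernel nvmet per-host configfs entry. Attribute names
  # are assumed from the upstream nvmet layout, not copied from this trace.
  set -e
  hostnqn=nqn.2024-02.io.spdk:host0
  cfg=/sys/kernel/config/nvmet/hosts/$hostnqn
  key=$1     # e.g. a DHHC-1:00:... host secret
  ckey=$2    # e.g. a DHHC-1:02:... controller secret, may be empty

  echo 'hmac(sha384)' > "$cfg/dhchap_hash"
  echo ffdhe3072      > "$cfg/dhchap_dhgroup"
  echo "$key"         > "$cfg/dhchap_key"
  if [[ -n $ckey ]]; then
      echo "$ckey" > "$cfg/dhchap_ctrl_key"
  fi
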
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.171 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:16.171 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:16.171 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:16.171 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:16.171 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.171 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.171 nvme0n1 00:34:16.171 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.171 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.171 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.171 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.171 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.171 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.171 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.171 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.171 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.171 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.429 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.429 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.429 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:34:16.429 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.429 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:16.429 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:16.429 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:16.429 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY0OGQ2NGU5OTFlYTVlOWU2YmY4OWI1ZWQ2ZGNkZTm/BOw5: 00:34:16.429 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzFmMTdmZTUzZjM5NzdkZjRmZDBjMTBiMGY5YmJjNGNX/R7z: 00:34:16.429 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:16.429 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:16.429 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY0OGQ2NGU5OTFlYTVlOWU2YmY4OWI1ZWQ2ZGNkZTm/BOw5: 00:34:16.429 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzFmMTdmZTUzZjM5NzdkZjRmZDBjMTBiMGY5YmJjNGNX/R7z: ]] 00:34:16.429 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:YzFmMTdmZTUzZjM5NzdkZjRmZDBjMTBiMGY5YmJjNGNX/R7z: 00:34:16.429 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:34:16.429 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.429 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:16.429 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:16.429 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:16.429 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.429 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:16.429 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.429 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.430 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.430 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.430 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:16.430 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:16.430 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:16.430 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.430 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.430 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:16.430 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.430 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:16.430 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:16.430 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:16.430 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:16.430 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.430 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.430 nvme0n1 00:34:16.430 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.430 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.430 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.430 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.430 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.430 01:45:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.430 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:34:16.430 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.430 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.430 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.688 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.688 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.688 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:34:16.688 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.688 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:16.688 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:16.688 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:16.688 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjFjNzBmMTg1YTg5NTRhOWRiODE5YTkwMzkyNzJhZjdiZGVlNzE1N2IyNjdkODJjE4hdIA==: 00:34:16.688 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmZmMWM5M2VlYjhjYmNjODRkMjlhZTM0MzhmMjRkNDVE0hFW: 00:34:16.688 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:16.688 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:16.688 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjFjNzBmMTg1YTg5NTRhOWRiODE5YTkwMzkyNzJhZjdiZGVlNzE1N2IyNjdkODJjE4hdIA==: 00:34:16.688 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmZmMWM5M2VlYjhjYmNjODRkMjlhZTM0MzhmMjRkNDVE0hFW: ]] 00:34:16.688 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmZmMWM5M2VlYjhjYmNjODRkMjlhZTM0MzhmMjRkNDVE0hFW: 00:34:16.688 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:34:16.688 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.688 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:16.688 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:16.688 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:16.688 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.688 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:16.688 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.688 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.688 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.688 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.688 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:16.688 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:16.688 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local 
-A ip_candidates 00:34:16.688 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.688 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.688 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:16.688 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.688 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:16.688 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:16.688 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:16.688 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:16.688 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.688 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.688 nvme0n1 00:34:16.688 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.688 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.688 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.688 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.688 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.688 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.946 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.946 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.946 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.946 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.946 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.946 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.946 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:34:16.946 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.946 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:16.946 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:16.946 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:16.946 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTcwODI3Njg5NGY4NGIxYzRhZjUzNWMzYjkwNmQxMGNjYmQ1YWYxYzkyY2E1OTFlZDc3Nzc0NGZkYTUyOTQ0Zu5bmQM=: 00:34:16.946 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:16.946 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:16.946 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:16.946 
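
Key id 4 is the one case in this trace with an empty controller key (ckey=''), so the [[ -z '' ]] guard skips the controller-key step and the host attaches with --dhchap-key only; the session is then authenticated unidirectionally (the host proves its identity, but does not challenge the controller back). In the same hedged rpc.py form as the earlier sketch, that attach looks like:

  # Unidirectional DH-HMAC-CHAP: host key only, no --dhchap-ctrlr-key (sketch)
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
      -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
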
01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTcwODI3Njg5NGY4NGIxYzRhZjUzNWMzYjkwNmQxMGNjYmQ1YWYxYzkyY2E1OTFlZDc3Nzc0NGZkYTUyOTQ0Zu5bmQM=: 00:34:16.946 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:16.946 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:34:16.946 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.946 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:16.946 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:16.946 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:16.946 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.946 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:16.946 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.946 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.946 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.946 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.946 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:16.946 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:16.946 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:16.946 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.946 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.946 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:16.946 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.946 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:16.946 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:16.946 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:16.946 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:16.946 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.946 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.946 nvme0n1 00:34:16.946 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.946 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.946 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.946 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.946 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.946 
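
Taken together, this stretch of the trace is just two nested loops: every DH group is exercised with every key id, and each iteration is set-key / connect / verify / detach. A condensed sketch of that driver loop follows; the helper names mirror the functions seen in the trace (their bodies correspond to the sketches above), and the group list is limited to what this part of the log actually shows.

  #!/usr/bin/env bash
  # Condensed driver for the pattern traced above (sketch, not the real auth.sh).
  # Assumes nvmet_auth_set_key and connect_authenticate are sourced from the
  # test scripts (or replaced by the sketches above).
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096)   # groups seen here; the full run may use more
  keys_count=5                               # key ids 0..4, as seen in the trace

  for dhgroup in "${dhgroups[@]}"; do
      for keyid in $(seq 0 $((keys_count - 1))); do
          nvmet_auth_set_key   sha384 "$dhgroup" "$keyid"   # target side
          connect_authenticate sha384 "$dhgroup" "$keyid"   # host side: attach/verify/detach
      done
  done
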
01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.204 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.204 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.204 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.204 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.204 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.204 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:17.204 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.204 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:34:17.204 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.204 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:17.204 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:17.204 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:17.204 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU0MTAyMjk2ZWNjNzBlMTMyODc0M2FhZmI5MGI0MWKvKrPV: 00:34:17.204 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWJjNjc4YWZhM2VmMWNiY2I2ZTMxYzBmMmRjOWFjNTg2NTAzMDk2YmExODM5YmVlOWU4ZGE1N2E3YzYwMWVlMieQyA0=: 00:34:17.204 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:17.204 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:17.204 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU0MTAyMjk2ZWNjNzBlMTMyODc0M2FhZmI5MGI0MWKvKrPV: 00:34:17.204 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWJjNjc4YWZhM2VmMWNiY2I2ZTMxYzBmMmRjOWFjNTg2NTAzMDk2YmExODM5YmVlOWU4ZGE1N2E3YzYwMWVlMieQyA0=: ]] 00:34:17.204 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWJjNjc4YWZhM2VmMWNiY2I2ZTMxYzBmMmRjOWFjNTg2NTAzMDk2YmExODM5YmVlOWU4ZGE1N2E3YzYwMWVlMieQyA0=: 00:34:17.204 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:34:17.204 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.204 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:17.204 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:17.204 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:17.204 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.204 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:17.204 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.204 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.204 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:34:17.204 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:17.204 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:17.204 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:17.204 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:17.204 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.204 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.204 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:17.204 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.204 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:17.204 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:17.204 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:17.204 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:17.204 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.204 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.462 nvme0n1 00:34:17.462 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.462 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.462 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.462 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.462 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.462 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.462 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.462 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.462 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.462 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.462 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.462 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.462 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:34:17.462 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.462 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:17.462 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:17.462 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:17.462 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YjVmYjg0Y2U1YTBhY2NjNTc3NDI2MDdjNGIxODBlZGExMDc4OWU3MDMxNjE0ZTE0eUdqVw==: 00:34:17.462 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: 00:34:17.462 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:17.462 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:17.462 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjVmYjg0Y2U1YTBhY2NjNTc3NDI2MDdjNGIxODBlZGExMDc4OWU3MDMxNjE0ZTE0eUdqVw==: 00:34:17.462 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: ]] 00:34:17.462 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: 00:34:17.462 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:34:17.462 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.462 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:17.462 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:17.462 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:17.462 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.462 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:17.462 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.462 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.462 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.462 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:17.462 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:17.462 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:17.462 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:17.462 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.462 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.462 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:17.462 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.462 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:17.462 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:17.462 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:17.462 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:17.462 01:45:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.463 01:45:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.721 nvme0n1 00:34:17.721 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.721 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.721 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.721 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.721 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.721 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.721 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.721 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.721 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.721 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.721 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.721 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.721 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:34:17.721 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.721 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:17.721 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:17.721 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:17.721 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY0OGQ2NGU5OTFlYTVlOWU2YmY4OWI1ZWQ2ZGNkZTm/BOw5: 00:34:17.721 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzFmMTdmZTUzZjM5NzdkZjRmZDBjMTBiMGY5YmJjNGNX/R7z: 00:34:17.721 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:17.721 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:17.721 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY0OGQ2NGU5OTFlYTVlOWU2YmY4OWI1ZWQ2ZGNkZTm/BOw5: 00:34:17.721 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzFmMTdmZTUzZjM5NzdkZjRmZDBjMTBiMGY5YmJjNGNX/R7z: ]] 00:34:17.721 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzFmMTdmZTUzZjM5NzdkZjRmZDBjMTBiMGY5YmJjNGNX/R7z: 00:34:17.721 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:34:17.721 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.721 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:17.721 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:17.721 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:17.721 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.721 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:17.721 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.721 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.721 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.721 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:17.721 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:17.721 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:17.721 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:17.721 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.721 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.721 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:17.721 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.721 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:17.721 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:17.721 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:17.721 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:17.721 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.721 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.288 nvme0n1 00:34:18.288 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.288 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.288 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.288 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.288 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.288 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.288 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.288 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.288 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.288 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.288 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.288 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:18.288 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:34:18.288 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.288 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:18.288 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:18.288 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:18.288 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjFjNzBmMTg1YTg5NTRhOWRiODE5YTkwMzkyNzJhZjdiZGVlNzE1N2IyNjdkODJjE4hdIA==: 00:34:18.288 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmZmMWM5M2VlYjhjYmNjODRkMjlhZTM0MzhmMjRkNDVE0hFW: 00:34:18.288 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:18.288 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:18.288 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjFjNzBmMTg1YTg5NTRhOWRiODE5YTkwMzkyNzJhZjdiZGVlNzE1N2IyNjdkODJjE4hdIA==: 00:34:18.288 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmZmMWM5M2VlYjhjYmNjODRkMjlhZTM0MzhmMjRkNDVE0hFW: ]] 00:34:18.288 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmZmMWM5M2VlYjhjYmNjODRkMjlhZTM0MzhmMjRkNDVE0hFW: 00:34:18.288 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:34:18.288 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:18.288 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:18.288 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:18.288 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:18.288 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:18.288 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:18.288 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.288 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.288 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.288 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:18.288 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:18.288 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:18.288 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:18.288 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.288 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.288 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:18.288 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.288 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:18.288 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:18.288 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:18.288 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:18.288 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.288 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.546 nvme0n1 00:34:18.546 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.546 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.546 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.546 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.546 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.546 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.546 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.546 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.546 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.546 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.546 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.546 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:18.546 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:34:18.546 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.546 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:18.546 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:18.546 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:18.546 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTcwODI3Njg5NGY4NGIxYzRhZjUzNWMzYjkwNmQxMGNjYmQ1YWYxYzkyY2E1OTFlZDc3Nzc0NGZkYTUyOTQ0Zu5bmQM=: 00:34:18.546 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:18.546 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:18.546 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:18.546 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTcwODI3Njg5NGY4NGIxYzRhZjUzNWMzYjkwNmQxMGNjYmQ1YWYxYzkyY2E1OTFlZDc3Nzc0NGZkYTUyOTQ0Zu5bmQM=: 00:34:18.546 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:18.546 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:34:18.546 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:18.546 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:18.546 01:45:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:18.546 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:18.546 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:18.546 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:18.546 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.546 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.546 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.546 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:18.546 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:18.546 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:18.546 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:18.546 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.546 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.546 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:18.546 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.546 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:18.546 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:18.546 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:18.546 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:18.546 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.546 01:45:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.804 nvme0n1 00:34:18.805 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.805 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.805 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.805 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.805 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.805 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.805 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.805 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.805 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.805 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.805 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.805 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:18.805 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:18.805 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:34:18.805 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.805 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:18.805 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:18.805 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:18.805 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU0MTAyMjk2ZWNjNzBlMTMyODc0M2FhZmI5MGI0MWKvKrPV: 00:34:18.805 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWJjNjc4YWZhM2VmMWNiY2I2ZTMxYzBmMmRjOWFjNTg2NTAzMDk2YmExODM5YmVlOWU4ZGE1N2E3YzYwMWVlMieQyA0=: 00:34:18.805 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:18.805 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:18.805 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU0MTAyMjk2ZWNjNzBlMTMyODc0M2FhZmI5MGI0MWKvKrPV: 00:34:18.805 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWJjNjc4YWZhM2VmMWNiY2I2ZTMxYzBmMmRjOWFjNTg2NTAzMDk2YmExODM5YmVlOWU4ZGE1N2E3YzYwMWVlMieQyA0=: ]] 00:34:18.805 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWJjNjc4YWZhM2VmMWNiY2I2ZTMxYzBmMmRjOWFjNTg2NTAzMDk2YmExODM5YmVlOWU4ZGE1N2E3YzYwMWVlMieQyA0=: 00:34:18.805 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:34:18.805 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:18.805 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:18.805 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:18.805 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:18.805 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:18.805 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:18.805 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.805 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.805 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:18.805 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:18.805 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:18.805 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:18.805 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:18.805 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.805 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.805 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:18.805 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.805 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:18.805 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:18.805 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:18.805 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:18.805 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:18.805 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.370 nvme0n1 00:34:19.370 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.370 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.370 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.370 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.370 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.370 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.370 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.370 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.370 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.370 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.370 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.370 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.370 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:34:19.370 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.371 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:19.371 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:19.371 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:19.371 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjVmYjg0Y2U1YTBhY2NjNTc3NDI2MDdjNGIxODBlZGExMDc4OWU3MDMxNjE0ZTE0eUdqVw==: 00:34:19.371 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: 00:34:19.371 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:19.371 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:19.371 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YjVmYjg0Y2U1YTBhY2NjNTc3NDI2MDdjNGIxODBlZGExMDc4OWU3MDMxNjE0ZTE0eUdqVw==: 00:34:19.371 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: ]] 00:34:19.371 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: 00:34:19.371 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:34:19.371 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.371 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:19.371 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:19.371 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:19.371 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.371 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:19.371 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.371 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.371 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.371 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.371 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:19.371 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:19.371 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:19.371 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.371 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.371 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:19.371 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.371 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:19.371 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:19.371 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:19.371 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:19.371 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.371 01:45:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.937 nvme0n1 00:34:19.937 01:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.937 01:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.937 01:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.937 01:45:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.937 01:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.937 01:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.937 01:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.937 01:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.937 01:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.937 01:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.937 01:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.937 01:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.937 01:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:34:19.937 01:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.937 01:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:19.937 01:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:19.937 01:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:19.937 01:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY0OGQ2NGU5OTFlYTVlOWU2YmY4OWI1ZWQ2ZGNkZTm/BOw5: 00:34:19.937 01:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzFmMTdmZTUzZjM5NzdkZjRmZDBjMTBiMGY5YmJjNGNX/R7z: 00:34:19.937 01:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:19.937 01:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:19.937 01:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY0OGQ2NGU5OTFlYTVlOWU2YmY4OWI1ZWQ2ZGNkZTm/BOw5: 00:34:19.937 01:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzFmMTdmZTUzZjM5NzdkZjRmZDBjMTBiMGY5YmJjNGNX/R7z: ]] 00:34:19.937 01:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzFmMTdmZTUzZjM5NzdkZjRmZDBjMTBiMGY5YmJjNGNX/R7z: 00:34:19.937 01:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:34:19.937 01:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.937 01:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:19.937 01:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:19.937 01:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:19.937 01:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.937 01:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:19.937 01:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.937 01:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.937 01:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:19.937 01:45:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.937 01:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:19.937 01:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:19.937 01:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:19.937 01:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.937 01:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.937 01:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:19.937 01:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.937 01:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:19.937 01:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:19.937 01:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:19.937 01:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:19.937 01:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:19.937 01:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.504 nvme0n1 00:34:20.504 01:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.504 01:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.504 01:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.504 01:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.504 01:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.504 01:45:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.504 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.504 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.504 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.504 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.504 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.504 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.504 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:34:20.504 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.504 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:20.504 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:20.504 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:20.504 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZjFjNzBmMTg1YTg5NTRhOWRiODE5YTkwMzkyNzJhZjdiZGVlNzE1N2IyNjdkODJjE4hdIA==: 00:34:20.504 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmZmMWM5M2VlYjhjYmNjODRkMjlhZTM0MzhmMjRkNDVE0hFW: 00:34:20.504 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:20.504 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:20.504 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjFjNzBmMTg1YTg5NTRhOWRiODE5YTkwMzkyNzJhZjdiZGVlNzE1N2IyNjdkODJjE4hdIA==: 00:34:20.504 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmZmMWM5M2VlYjhjYmNjODRkMjlhZTM0MzhmMjRkNDVE0hFW: ]] 00:34:20.504 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmZmMWM5M2VlYjhjYmNjODRkMjlhZTM0MzhmMjRkNDVE0hFW: 00:34:20.504 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:34:20.504 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.504 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:20.504 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:20.504 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:20.504 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.504 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:20.504 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.504 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.504 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.504 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.504 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:20.504 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:20.504 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:20.504 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.504 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.504 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:20.504 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.504 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:20.504 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:20.504 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:20.504 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:20.504 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.504 
01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.070 nvme0n1 00:34:21.070 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.070 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.070 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.070 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.070 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.070 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.070 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.070 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.070 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.070 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.070 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.070 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.070 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:34:21.070 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.070 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:21.070 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:21.070 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:21.070 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTcwODI3Njg5NGY4NGIxYzRhZjUzNWMzYjkwNmQxMGNjYmQ1YWYxYzkyY2E1OTFlZDc3Nzc0NGZkYTUyOTQ0Zu5bmQM=: 00:34:21.070 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:21.070 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:21.070 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:21.070 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTcwODI3Njg5NGY4NGIxYzRhZjUzNWMzYjkwNmQxMGNjYmQ1YWYxYzkyY2E1OTFlZDc3Nzc0NGZkYTUyOTQ0Zu5bmQM=: 00:34:21.070 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:21.070 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:34:21.070 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.070 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:21.071 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:21.071 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:21.071 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.071 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:21.071 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.071 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.329 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.329 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.329 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:21.329 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:21.329 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:21.329 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.329 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.329 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:21.329 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.329 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:21.329 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:21.329 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:21.329 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:21.329 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.329 01:45:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.895 nvme0n1 00:34:21.895 01:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.895 01:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.895 01:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.895 01:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.895 01:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.895 01:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.895 01:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.895 01:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.895 01:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.895 01:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.895 01:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.895 01:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:21.895 01:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.895 01:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:34:21.895 01:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.895 01:45:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:21.895 01:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:21.895 01:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:21.895 01:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU0MTAyMjk2ZWNjNzBlMTMyODc0M2FhZmI5MGI0MWKvKrPV: 00:34:21.895 01:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWJjNjc4YWZhM2VmMWNiY2I2ZTMxYzBmMmRjOWFjNTg2NTAzMDk2YmExODM5YmVlOWU4ZGE1N2E3YzYwMWVlMieQyA0=: 00:34:21.895 01:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:21.895 01:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:21.895 01:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU0MTAyMjk2ZWNjNzBlMTMyODc0M2FhZmI5MGI0MWKvKrPV: 00:34:21.895 01:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWJjNjc4YWZhM2VmMWNiY2I2ZTMxYzBmMmRjOWFjNTg2NTAzMDk2YmExODM5YmVlOWU4ZGE1N2E3YzYwMWVlMieQyA0=: ]] 00:34:21.895 01:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWJjNjc4YWZhM2VmMWNiY2I2ZTMxYzBmMmRjOWFjNTg2NTAzMDk2YmExODM5YmVlOWU4ZGE1N2E3YzYwMWVlMieQyA0=: 00:34:21.895 01:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:34:21.895 01:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.895 01:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:21.895 01:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:21.895 01:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:21.895 01:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.895 01:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:21.895 01:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.895 01:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.895 01:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:21.895 01:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.895 01:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:21.895 01:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:21.895 01:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:21.895 01:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.895 01:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.895 01:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:21.895 01:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.895 01:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:21.895 01:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:21.895 01:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:21.895 01:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:21.895 01:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:21.895 01:45:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.830 nvme0n1 00:34:22.830 01:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.830 01:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.830 01:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.830 01:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.830 01:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.830 01:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.830 01:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.830 01:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.830 01:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.830 01:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.830 01:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.830 01:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.830 01:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:34:22.830 01:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.830 01:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:22.830 01:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:22.830 01:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:22.830 01:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjVmYjg0Y2U1YTBhY2NjNTc3NDI2MDdjNGIxODBlZGExMDc4OWU3MDMxNjE0ZTE0eUdqVw==: 00:34:22.830 01:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: 00:34:22.830 01:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:22.830 01:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:22.830 01:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjVmYjg0Y2U1YTBhY2NjNTc3NDI2MDdjNGIxODBlZGExMDc4OWU3MDMxNjE0ZTE0eUdqVw==: 00:34:22.830 01:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: ]] 00:34:22.830 01:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: 00:34:22.830 01:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:34:22.830 01:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.830 01:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:22.830 01:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:22.830 01:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:22.830 01:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.831 01:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:22.831 01:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.831 01:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.831 01:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.831 01:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.831 01:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:22.831 01:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:22.831 01:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:22.831 01:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.831 01:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.831 01:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:22.831 01:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.831 01:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:22.831 01:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:22.831 01:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:22.831 01:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:22.831 01:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.831 01:45:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.765 nvme0n1 00:34:23.765 01:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.765 01:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.765 01:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.765 01:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.765 01:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.765 01:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.765 01:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.765 01:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.765 01:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:34:23.765 01:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.765 01:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.765 01:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.765 01:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:34:23.765 01:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.765 01:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:23.765 01:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:23.765 01:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:23.765 01:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY0OGQ2NGU5OTFlYTVlOWU2YmY4OWI1ZWQ2ZGNkZTm/BOw5: 00:34:23.765 01:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzFmMTdmZTUzZjM5NzdkZjRmZDBjMTBiMGY5YmJjNGNX/R7z: 00:34:23.765 01:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:23.765 01:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:23.765 01:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY0OGQ2NGU5OTFlYTVlOWU2YmY4OWI1ZWQ2ZGNkZTm/BOw5: 00:34:23.765 01:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzFmMTdmZTUzZjM5NzdkZjRmZDBjMTBiMGY5YmJjNGNX/R7z: ]] 00:34:23.765 01:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzFmMTdmZTUzZjM5NzdkZjRmZDBjMTBiMGY5YmJjNGNX/R7z: 00:34:23.765 01:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:34:23.765 01:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.765 01:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:23.765 01:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:23.765 01:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:23.765 01:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.765 01:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:23.765 01:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.765 01:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.765 01:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:23.765 01:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.765 01:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:23.765 01:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:23.765 01:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:23.765 01:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.765 01:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.765 
01:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:23.765 01:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.765 01:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:23.765 01:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:23.765 01:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:23.765 01:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:23.765 01:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:23.765 01:45:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.699 nvme0n1 00:34:24.699 01:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.699 01:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.699 01:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.699 01:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.699 01:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.699 01:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.699 01:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:24.699 01:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:24.699 01:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.699 01:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.699 01:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.699 01:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.699 01:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:34:24.699 01:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.699 01:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:24.699 01:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:24.957 01:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:24.957 01:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjFjNzBmMTg1YTg5NTRhOWRiODE5YTkwMzkyNzJhZjdiZGVlNzE1N2IyNjdkODJjE4hdIA==: 00:34:24.957 01:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmZmMWM5M2VlYjhjYmNjODRkMjlhZTM0MzhmMjRkNDVE0hFW: 00:34:24.957 01:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:24.957 01:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:24.957 01:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjFjNzBmMTg1YTg5NTRhOWRiODE5YTkwMzkyNzJhZjdiZGVlNzE1N2IyNjdkODJjE4hdIA==: 00:34:24.957 01:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MmZmMWM5M2VlYjhjYmNjODRkMjlhZTM0MzhmMjRkNDVE0hFW: ]] 00:34:24.957 01:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmZmMWM5M2VlYjhjYmNjODRkMjlhZTM0MzhmMjRkNDVE0hFW: 00:34:24.957 01:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:34:24.957 01:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.957 01:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:24.957 01:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:24.957 01:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:24.957 01:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.957 01:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:24.957 01:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.957 01:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.957 01:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.957 01:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.957 01:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:24.957 01:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:24.957 01:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:24.957 01:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.957 01:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.957 01:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:24.957 01:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:24.957 01:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:24.957 01:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:24.957 01:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:24.957 01:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:24.957 01:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.957 01:45:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.891 nvme0n1 00:34:25.891 01:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.891 01:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.891 01:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.891 01:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.891 01:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.891 01:45:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.891 01:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.891 01:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.891 01:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.891 01:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.891 01:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.891 01:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.891 01:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:34:25.891 01:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.891 01:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:25.891 01:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:25.891 01:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:25.891 01:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTcwODI3Njg5NGY4NGIxYzRhZjUzNWMzYjkwNmQxMGNjYmQ1YWYxYzkyY2E1OTFlZDc3Nzc0NGZkYTUyOTQ0Zu5bmQM=: 00:34:25.891 01:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:25.891 01:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:25.891 01:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:25.891 01:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTcwODI3Njg5NGY4NGIxYzRhZjUzNWMzYjkwNmQxMGNjYmQ1YWYxYzkyY2E1OTFlZDc3Nzc0NGZkYTUyOTQ0Zu5bmQM=: 00:34:25.891 01:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:25.891 01:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:34:25.891 01:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.891 01:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:25.891 01:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:25.891 01:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:25.891 01:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.891 01:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:25.891 01:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.891 01:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.891 01:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:25.891 01:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.891 01:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:25.891 01:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:25.891 01:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:25.891 01:45:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.891 01:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.891 01:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:25.891 01:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.891 01:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:25.891 01:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:25.891 01:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:25.891 01:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:25.891 01:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:25.891 01:45:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.826 nvme0n1 00:34:26.826 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.826 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.826 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.826 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.826 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.826 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.826 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.826 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.826 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.826 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.826 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.826 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:26.826 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:26.826 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.826 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:34:26.826 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.826 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:26.826 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:26.826 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:26.826 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU0MTAyMjk2ZWNjNzBlMTMyODc0M2FhZmI5MGI0MWKvKrPV: 00:34:26.826 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YWJjNjc4YWZhM2VmMWNiY2I2ZTMxYzBmMmRjOWFjNTg2NTAzMDk2YmExODM5YmVlOWU4ZGE1N2E3YzYwMWVlMieQyA0=: 00:34:26.826 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:26.826 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:26.826 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU0MTAyMjk2ZWNjNzBlMTMyODc0M2FhZmI5MGI0MWKvKrPV: 00:34:26.826 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWJjNjc4YWZhM2VmMWNiY2I2ZTMxYzBmMmRjOWFjNTg2NTAzMDk2YmExODM5YmVlOWU4ZGE1N2E3YzYwMWVlMieQyA0=: ]] 00:34:26.826 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWJjNjc4YWZhM2VmMWNiY2I2ZTMxYzBmMmRjOWFjNTg2NTAzMDk2YmExODM5YmVlOWU4ZGE1N2E3YzYwMWVlMieQyA0=: 00:34:26.826 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:34:26.826 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.826 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:26.826 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:26.826 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:26.826 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.826 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:26.826 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.826 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.826 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:26.826 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.826 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:26.826 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:26.826 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:26.826 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.826 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.826 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:26.826 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.826 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:26.826 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:26.826 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:26.826 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:26.826 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.826 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:27.085 nvme0n1 00:34:27.085 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.085 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.085 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.085 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.085 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.085 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.085 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.085 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.085 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.085 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.085 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.085 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.085 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:34:27.085 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.085 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:27.085 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:27.085 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:27.085 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjVmYjg0Y2U1YTBhY2NjNTc3NDI2MDdjNGIxODBlZGExMDc4OWU3MDMxNjE0ZTE0eUdqVw==: 00:34:27.085 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: 00:34:27.085 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:27.085 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:27.085 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjVmYjg0Y2U1YTBhY2NjNTc3NDI2MDdjNGIxODBlZGExMDc4OWU3MDMxNjE0ZTE0eUdqVw==: 00:34:27.085 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: ]] 00:34:27.085 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: 00:34:27.085 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:34:27.085 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.085 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:27.085 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:27.085 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:27.085 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:34:27.085 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:27.085 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.085 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.085 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.085 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.085 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:27.085 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:27.085 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:27.085 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.085 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.085 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:27.085 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.085 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:27.085 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:27.085 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:27.085 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:27.085 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.085 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.343 nvme0n1 00:34:27.343 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.343 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.343 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.343 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.343 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.343 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.343 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.343 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.343 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.343 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.343 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.343 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.343 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:34:27.343 
01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.343 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:27.343 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:27.343 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:27.343 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY0OGQ2NGU5OTFlYTVlOWU2YmY4OWI1ZWQ2ZGNkZTm/BOw5: 00:34:27.343 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzFmMTdmZTUzZjM5NzdkZjRmZDBjMTBiMGY5YmJjNGNX/R7z: 00:34:27.343 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:27.343 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:27.343 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY0OGQ2NGU5OTFlYTVlOWU2YmY4OWI1ZWQ2ZGNkZTm/BOw5: 00:34:27.343 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzFmMTdmZTUzZjM5NzdkZjRmZDBjMTBiMGY5YmJjNGNX/R7z: ]] 00:34:27.343 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzFmMTdmZTUzZjM5NzdkZjRmZDBjMTBiMGY5YmJjNGNX/R7z: 00:34:27.343 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:34:27.343 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.343 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:27.343 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:27.343 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:27.343 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.343 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:27.343 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.343 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.343 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.343 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.343 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:27.343 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:27.344 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:27.344 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.344 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.344 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:27.344 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.344 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:27.344 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:27.344 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:27.344 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:27.344 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.344 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.602 nvme0n1 00:34:27.602 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.602 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.602 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.602 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.602 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.602 01:45:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.602 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.602 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.602 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.602 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.602 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.602 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.602 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:34:27.602 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.602 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:27.602 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:27.602 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:27.602 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjFjNzBmMTg1YTg5NTRhOWRiODE5YTkwMzkyNzJhZjdiZGVlNzE1N2IyNjdkODJjE4hdIA==: 00:34:27.602 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmZmMWM5M2VlYjhjYmNjODRkMjlhZTM0MzhmMjRkNDVE0hFW: 00:34:27.602 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:27.602 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:27.602 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjFjNzBmMTg1YTg5NTRhOWRiODE5YTkwMzkyNzJhZjdiZGVlNzE1N2IyNjdkODJjE4hdIA==: 00:34:27.602 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmZmMWM5M2VlYjhjYmNjODRkMjlhZTM0MzhmMjRkNDVE0hFW: ]] 00:34:27.602 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmZmMWM5M2VlYjhjYmNjODRkMjlhZTM0MzhmMjRkNDVE0hFW: 00:34:27.602 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:34:27.602 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.602 
01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:27.602 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:27.602 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:27.602 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.602 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:27.602 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.602 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.602 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.602 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.602 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:27.602 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:27.602 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:27.602 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.602 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.602 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:27.602 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.602 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:27.602 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:27.602 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:27.602 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:27.602 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.602 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.861 nvme0n1 00:34:27.861 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.861 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.861 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.861 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.861 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.861 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.861 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.861 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.861 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.861 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:27.861 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.861 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.861 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:34:27.861 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.861 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:27.861 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:27.861 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:27.861 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTcwODI3Njg5NGY4NGIxYzRhZjUzNWMzYjkwNmQxMGNjYmQ1YWYxYzkyY2E1OTFlZDc3Nzc0NGZkYTUyOTQ0Zu5bmQM=: 00:34:27.861 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:27.861 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:27.861 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:27.861 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTcwODI3Njg5NGY4NGIxYzRhZjUzNWMzYjkwNmQxMGNjYmQ1YWYxYzkyY2E1OTFlZDc3Nzc0NGZkYTUyOTQ0Zu5bmQM=: 00:34:27.861 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:27.861 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:34:27.861 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.861 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:27.861 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:27.861 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:27.861 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.861 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:27.861 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.861 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.861 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.861 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.861 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:27.861 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:27.861 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:27.861 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.861 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.861 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:27.861 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.861 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:27.861 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:27.861 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:27.861 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:27.861 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.861 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.861 nvme0n1 00:34:27.861 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.861 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.861 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.861 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.861 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.119 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.119 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.119 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.119 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.119 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.119 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.119 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:28.119 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.119 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:34:28.120 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.120 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:28.120 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:28.120 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:28.120 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU0MTAyMjk2ZWNjNzBlMTMyODc0M2FhZmI5MGI0MWKvKrPV: 00:34:28.120 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWJjNjc4YWZhM2VmMWNiY2I2ZTMxYzBmMmRjOWFjNTg2NTAzMDk2YmExODM5YmVlOWU4ZGE1N2E3YzYwMWVlMieQyA0=: 00:34:28.120 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:28.120 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:28.120 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU0MTAyMjk2ZWNjNzBlMTMyODc0M2FhZmI5MGI0MWKvKrPV: 00:34:28.120 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWJjNjc4YWZhM2VmMWNiY2I2ZTMxYzBmMmRjOWFjNTg2NTAzMDk2YmExODM5YmVlOWU4ZGE1N2E3YzYwMWVlMieQyA0=: ]] 00:34:28.120 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YWJjNjc4YWZhM2VmMWNiY2I2ZTMxYzBmMmRjOWFjNTg2NTAzMDk2YmExODM5YmVlOWU4ZGE1N2E3YzYwMWVlMieQyA0=: 00:34:28.120 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:34:28.120 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.120 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:28.120 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:28.120 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:28.120 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.120 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:28.120 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.120 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.120 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.120 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.120 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:28.120 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:28.120 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:28.120 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.120 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.120 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:28.120 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.120 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:28.120 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:28.120 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:28.120 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:28.120 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.120 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.378 nvme0n1 00:34:28.378 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.378 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.378 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.378 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.378 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.378 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.378 
01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.378 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.378 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.378 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.378 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.378 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.378 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:34:28.378 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.378 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:28.378 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:28.378 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:28.378 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjVmYjg0Y2U1YTBhY2NjNTc3NDI2MDdjNGIxODBlZGExMDc4OWU3MDMxNjE0ZTE0eUdqVw==: 00:34:28.378 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: 00:34:28.379 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:28.379 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:28.379 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjVmYjg0Y2U1YTBhY2NjNTc3NDI2MDdjNGIxODBlZGExMDc4OWU3MDMxNjE0ZTE0eUdqVw==: 00:34:28.379 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: ]] 00:34:28.379 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: 00:34:28.379 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:34:28.379 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.379 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:28.379 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:28.379 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:28.379 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.379 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:28.379 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.379 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.379 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.379 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.379 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:28.379 01:45:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:28.379 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:28.379 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.379 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.379 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:28.379 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.379 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:28.379 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:28.379 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:28.379 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:28.379 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.379 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.637 nvme0n1 00:34:28.637 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.637 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.637 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.637 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.637 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.637 01:45:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.637 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.637 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.637 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.637 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.637 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.637 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.637 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:34:28.637 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.637 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:28.637 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:28.637 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:28.637 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY0OGQ2NGU5OTFlYTVlOWU2YmY4OWI1ZWQ2ZGNkZTm/BOw5: 00:34:28.637 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzFmMTdmZTUzZjM5NzdkZjRmZDBjMTBiMGY5YmJjNGNX/R7z: 00:34:28.637 01:45:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:28.637 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:28.637 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY0OGQ2NGU5OTFlYTVlOWU2YmY4OWI1ZWQ2ZGNkZTm/BOw5: 00:34:28.637 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzFmMTdmZTUzZjM5NzdkZjRmZDBjMTBiMGY5YmJjNGNX/R7z: ]] 00:34:28.637 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzFmMTdmZTUzZjM5NzdkZjRmZDBjMTBiMGY5YmJjNGNX/R7z: 00:34:28.637 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:34:28.637 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.637 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:28.637 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:28.637 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:28.637 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.637 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:28.637 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.637 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.637 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.637 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.637 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:28.637 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:28.637 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:28.637 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.637 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.637 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:28.637 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.637 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:28.637 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:28.637 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:28.637 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:28.637 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.637 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.896 nvme0n1 00:34:28.896 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.896 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.896 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.896 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.896 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.896 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.896 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.896 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.896 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.896 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.896 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.896 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.896 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:34:28.896 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.896 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:28.896 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:28.896 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:28.896 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjFjNzBmMTg1YTg5NTRhOWRiODE5YTkwMzkyNzJhZjdiZGVlNzE1N2IyNjdkODJjE4hdIA==: 00:34:28.896 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmZmMWM5M2VlYjhjYmNjODRkMjlhZTM0MzhmMjRkNDVE0hFW: 00:34:28.896 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:28.896 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:28.896 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjFjNzBmMTg1YTg5NTRhOWRiODE5YTkwMzkyNzJhZjdiZGVlNzE1N2IyNjdkODJjE4hdIA==: 00:34:28.896 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmZmMWM5M2VlYjhjYmNjODRkMjlhZTM0MzhmMjRkNDVE0hFW: ]] 00:34:28.896 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmZmMWM5M2VlYjhjYmNjODRkMjlhZTM0MzhmMjRkNDVE0hFW: 00:34:28.896 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:34:28.896 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.896 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:28.896 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:28.896 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:28.896 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.896 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:28.896 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.896 01:45:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.896 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:28.896 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.896 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:28.896 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:28.896 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:28.896 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.896 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.896 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:28.896 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.896 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:28.896 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:28.896 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:28.896 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:28.896 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:28.896 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.154 nvme0n1 00:34:29.154 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.154 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.154 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.154 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:29.154 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.154 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.154 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.154 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:29.154 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.154 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.154 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.154 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:29.154 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:34:29.154 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:29.154 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:29.154 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:29.154 
01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:29.154 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTcwODI3Njg5NGY4NGIxYzRhZjUzNWMzYjkwNmQxMGNjYmQ1YWYxYzkyY2E1OTFlZDc3Nzc0NGZkYTUyOTQ0Zu5bmQM=: 00:34:29.154 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:29.154 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:29.154 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:29.154 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTcwODI3Njg5NGY4NGIxYzRhZjUzNWMzYjkwNmQxMGNjYmQ1YWYxYzkyY2E1OTFlZDc3Nzc0NGZkYTUyOTQ0Zu5bmQM=: 00:34:29.154 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:29.154 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:34:29.154 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:29.154 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:29.154 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:29.155 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:29.155 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:29.155 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:29.155 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.155 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.155 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.155 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:29.155 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:29.155 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:29.155 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:29.155 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.155 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.155 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:29.155 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:29.155 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:29.155 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:29.155 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:29.155 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:29.155 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.155 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
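Each pass through the loop above exercises one (digest, dhgroup, keyid) combination: nvmet_auth_set_key installs the target-side key material, the host is pinned to the same digest/dhgroup, the controller is attached with the matching key pair, verified, and detached before the next combination. A condensed stand-alone sketch of that cycle, assuming a running SPDK target, scripts/rpc.py on PATH (rpc_cmd in the log is the autotest wrapper around it), the kernel nvmet configfs auth attributes, and key names key3/ckey3 already registered in the SPDK keyring by the harness (placeholder DHHC-1 strings below; the real ones come from the keys/ckeys arrays in host/auth.sh):

  hostnqn=nqn.2024-02.io.spdk:host0
  # Target side (roughly what nvmet_auth_set_key does): write hash, dhgroup and the
  # DHHC-1 keys into the nvmet configfs entry for this host NQN. Paths and attribute
  # names are assumptions about the kernel nvmet layout, not taken from this log.
  cfg=/sys/kernel/config/nvmet/hosts/$hostnqn
  echo 'hmac(sha512)' > "$cfg/dhchap_hash"
  echo 'ffdhe3072'    > "$cfg/dhchap_dhgroup"
  echo 'DHHC-1:02:<host key>'       > "$cfg/dhchap_key"
  echo 'DHHC-1:00:<controller key>' > "$cfg/dhchap_ctrl_key"

  # Host side: restrict the initiator to one digest/dhgroup, attach with the matching
  # key pair, confirm the controller came up, then detach before the next combination.
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q "$hostnqn" -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key3 --dhchap-ctrlr-key ckey3
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  scripts/rpc.py bdev_nvme_detach_controller nvme0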
00:34:29.415 nvme0n1 00:34:29.415 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.415 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.415 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:29.415 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.415 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.415 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.415 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.415 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:29.415 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.415 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.415 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.415 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:29.415 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:29.415 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:34:29.415 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:29.415 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:29.415 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:29.415 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:29.415 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU0MTAyMjk2ZWNjNzBlMTMyODc0M2FhZmI5MGI0MWKvKrPV: 00:34:29.415 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWJjNjc4YWZhM2VmMWNiY2I2ZTMxYzBmMmRjOWFjNTg2NTAzMDk2YmExODM5YmVlOWU4ZGE1N2E3YzYwMWVlMieQyA0=: 00:34:29.415 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:29.415 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:29.415 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU0MTAyMjk2ZWNjNzBlMTMyODc0M2FhZmI5MGI0MWKvKrPV: 00:34:29.415 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWJjNjc4YWZhM2VmMWNiY2I2ZTMxYzBmMmRjOWFjNTg2NTAzMDk2YmExODM5YmVlOWU4ZGE1N2E3YzYwMWVlMieQyA0=: ]] 00:34:29.415 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWJjNjc4YWZhM2VmMWNiY2I2ZTMxYzBmMmRjOWFjNTg2NTAzMDk2YmExODM5YmVlOWU4ZGE1N2E3YzYwMWVlMieQyA0=: 00:34:29.415 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:34:29.415 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:29.415 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:29.415 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:29.415 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:29.415 01:45:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:29.415 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:29.415 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.415 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.415 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.415 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:29.415 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:29.415 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:29.415 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:29.415 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.415 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.415 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:29.415 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:29.415 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:29.415 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:29.415 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:29.415 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:29.415 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.415 01:45:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.674 nvme0n1 00:34:29.674 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.674 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.674 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.674 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.674 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:29.674 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.674 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.674 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:29.674 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.674 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.674 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.674 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:29.674 01:45:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:34:29.674 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:29.674 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:29.674 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:29.674 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:29.674 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjVmYjg0Y2U1YTBhY2NjNTc3NDI2MDdjNGIxODBlZGExMDc4OWU3MDMxNjE0ZTE0eUdqVw==: 00:34:29.674 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: 00:34:29.674 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:29.674 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:29.674 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjVmYjg0Y2U1YTBhY2NjNTc3NDI2MDdjNGIxODBlZGExMDc4OWU3MDMxNjE0ZTE0eUdqVw==: 00:34:29.674 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: ]] 00:34:29.674 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: 00:34:29.674 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:34:29.674 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:29.674 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:29.674 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:29.674 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:29.674 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:29.674 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:29.674 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.674 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.674 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.674 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:29.674 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:29.674 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:29.674 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:29.674 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.674 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.674 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:29.674 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:29.674 01:45:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:29.674 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:29.674 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:29.674 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:29.674 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.674 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.932 nvme0n1 00:34:29.933 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.933 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.933 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.933 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.933 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:29.933 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.933 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.933 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:29.933 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.933 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.190 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.190 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:30.191 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:34:30.191 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:30.191 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:30.191 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:30.191 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:30.191 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY0OGQ2NGU5OTFlYTVlOWU2YmY4OWI1ZWQ2ZGNkZTm/BOw5: 00:34:30.191 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzFmMTdmZTUzZjM5NzdkZjRmZDBjMTBiMGY5YmJjNGNX/R7z: 00:34:30.191 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:30.191 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:30.191 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY0OGQ2NGU5OTFlYTVlOWU2YmY4OWI1ZWQ2ZGNkZTm/BOw5: 00:34:30.191 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzFmMTdmZTUzZjM5NzdkZjRmZDBjMTBiMGY5YmJjNGNX/R7z: ]] 00:34:30.191 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzFmMTdmZTUzZjM5NzdkZjRmZDBjMTBiMGY5YmJjNGNX/R7z: 00:34:30.191 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:34:30.191 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:30.191 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:30.191 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:30.191 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:30.191 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:30.191 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:30.191 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.191 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.191 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.191 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:30.191 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:30.191 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:30.191 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:30.191 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:30.191 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:30.191 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:30.191 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:30.191 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:30.191 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:30.191 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:30.191 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:30.191 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.191 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.449 nvme0n1 00:34:30.449 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.449 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:30.449 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:30.449 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.449 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.449 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.449 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:30.449 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:34:30.449 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.449 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.449 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.449 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:30.449 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:34:30.449 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:30.449 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:30.449 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:30.449 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:30.449 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjFjNzBmMTg1YTg5NTRhOWRiODE5YTkwMzkyNzJhZjdiZGVlNzE1N2IyNjdkODJjE4hdIA==: 00:34:30.449 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmZmMWM5M2VlYjhjYmNjODRkMjlhZTM0MzhmMjRkNDVE0hFW: 00:34:30.449 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:30.449 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:30.449 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjFjNzBmMTg1YTg5NTRhOWRiODE5YTkwMzkyNzJhZjdiZGVlNzE1N2IyNjdkODJjE4hdIA==: 00:34:30.449 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmZmMWM5M2VlYjhjYmNjODRkMjlhZTM0MzhmMjRkNDVE0hFW: ]] 00:34:30.449 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmZmMWM5M2VlYjhjYmNjODRkMjlhZTM0MzhmMjRkNDVE0hFW: 00:34:30.449 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:34:30.449 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:30.449 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:30.449 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:30.449 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:30.449 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:30.449 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:30.449 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.449 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.449 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.449 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:30.449 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:30.449 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:30.449 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:30.449 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:30.449 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:30.449 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:30.449 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:30.449 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:30.449 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:30.449 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:30.449 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:30.449 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.449 01:45:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.708 nvme0n1 00:34:30.708 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.708 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:30.708 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:30.708 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.708 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.708 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.708 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:30.708 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:30.708 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.708 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.708 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.708 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:30.708 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:34:30.708 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:30.708 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:30.708 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:30.708 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:30.708 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTcwODI3Njg5NGY4NGIxYzRhZjUzNWMzYjkwNmQxMGNjYmQ1YWYxYzkyY2E1OTFlZDc3Nzc0NGZkYTUyOTQ0Zu5bmQM=: 00:34:30.708 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:30.708 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:30.708 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:30.708 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MTcwODI3Njg5NGY4NGIxYzRhZjUzNWMzYjkwNmQxMGNjYmQ1YWYxYzkyY2E1OTFlZDc3Nzc0NGZkYTUyOTQ0Zu5bmQM=: 00:34:30.708 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:30.708 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:34:30.708 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:30.708 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:30.708 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:30.708 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:30.708 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:30.708 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:30.708 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.708 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.708 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.708 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:30.708 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:30.708 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:30.708 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:30.708 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:30.708 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:30.708 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:30.708 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:30.708 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:30.708 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:30.708 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:30.708 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:30.708 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.708 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.275 nvme0n1 00:34:31.275 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.275 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.275 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.275 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:31.275 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.275 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.275 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:31.275 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:31.275 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.275 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.275 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.275 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:31.275 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:31.275 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:34:31.275 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:31.275 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:31.275 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:31.275 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:31.275 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU0MTAyMjk2ZWNjNzBlMTMyODc0M2FhZmI5MGI0MWKvKrPV: 00:34:31.275 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWJjNjc4YWZhM2VmMWNiY2I2ZTMxYzBmMmRjOWFjNTg2NTAzMDk2YmExODM5YmVlOWU4ZGE1N2E3YzYwMWVlMieQyA0=: 00:34:31.275 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:31.275 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:31.275 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU0MTAyMjk2ZWNjNzBlMTMyODc0M2FhZmI5MGI0MWKvKrPV: 00:34:31.275 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWJjNjc4YWZhM2VmMWNiY2I2ZTMxYzBmMmRjOWFjNTg2NTAzMDk2YmExODM5YmVlOWU4ZGE1N2E3YzYwMWVlMieQyA0=: ]] 00:34:31.275 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWJjNjc4YWZhM2VmMWNiY2I2ZTMxYzBmMmRjOWFjNTg2NTAzMDk2YmExODM5YmVlOWU4ZGE1N2E3YzYwMWVlMieQyA0=: 00:34:31.275 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:34:31.275 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:31.275 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:31.275 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:31.275 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:31.275 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:31.275 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:31.275 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.275 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.275 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.275 01:45:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:31.275 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:31.275 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:31.275 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:31.275 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.275 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.275 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:31.275 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.275 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:31.275 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:31.275 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:31.275 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:31.275 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.275 01:45:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.841 nvme0n1 00:34:31.841 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.841 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.841 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:31.841 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.841 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.841 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.841 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:31.841 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:31.841 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.841 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.841 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.841 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:31.841 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:34:31.841 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:31.841 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:31.841 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:31.841 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:31.841 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YjVmYjg0Y2U1YTBhY2NjNTc3NDI2MDdjNGIxODBlZGExMDc4OWU3MDMxNjE0ZTE0eUdqVw==: 00:34:31.841 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: 00:34:31.841 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:31.841 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:31.841 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjVmYjg0Y2U1YTBhY2NjNTc3NDI2MDdjNGIxODBlZGExMDc4OWU3MDMxNjE0ZTE0eUdqVw==: 00:34:31.841 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: ]] 00:34:31.841 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: 00:34:31.841 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:34:31.841 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:31.841 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:31.841 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:31.841 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:31.841 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:31.841 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:31.841 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.841 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.841 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.841 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:31.841 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:31.841 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:31.841 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:31.841 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.841 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.841 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:31.841 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.841 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:31.841 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:31.841 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:31.841 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:31.841 01:45:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.841 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.407 nvme0n1 00:34:32.407 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.407 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.407 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:32.407 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.407 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.407 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.407 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:32.407 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:32.407 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.407 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.407 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.407 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:32.407 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:34:32.407 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:32.407 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:32.407 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:32.407 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:32.407 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY0OGQ2NGU5OTFlYTVlOWU2YmY4OWI1ZWQ2ZGNkZTm/BOw5: 00:34:32.407 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzFmMTdmZTUzZjM5NzdkZjRmZDBjMTBiMGY5YmJjNGNX/R7z: 00:34:32.407 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:32.407 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:32.407 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY0OGQ2NGU5OTFlYTVlOWU2YmY4OWI1ZWQ2ZGNkZTm/BOw5: 00:34:32.407 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzFmMTdmZTUzZjM5NzdkZjRmZDBjMTBiMGY5YmJjNGNX/R7z: ]] 00:34:32.407 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzFmMTdmZTUzZjM5NzdkZjRmZDBjMTBiMGY5YmJjNGNX/R7z: 00:34:32.408 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:34:32.408 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:32.408 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:32.408 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:32.408 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:32.408 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:32.408 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:32.408 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.408 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.408 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.408 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:32.408 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:32.408 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:32.408 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:32.408 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.408 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.408 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:32.408 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:32.408 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:32.408 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:32.408 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:32.408 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:32.408 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.408 01:45:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.972 nvme0n1 00:34:32.972 01:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.972 01:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.972 01:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.972 01:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:32.972 01:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.972 01:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.972 01:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:32.972 01:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:32.972 01:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.972 01:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.972 01:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.973 01:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:32.973 01:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:34:32.973 01:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:32.973 01:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:32.973 01:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:32.973 01:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:32.973 01:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZjFjNzBmMTg1YTg5NTRhOWRiODE5YTkwMzkyNzJhZjdiZGVlNzE1N2IyNjdkODJjE4hdIA==: 00:34:32.973 01:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmZmMWM5M2VlYjhjYmNjODRkMjlhZTM0MzhmMjRkNDVE0hFW: 00:34:32.973 01:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:32.973 01:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:32.973 01:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjFjNzBmMTg1YTg5NTRhOWRiODE5YTkwMzkyNzJhZjdiZGVlNzE1N2IyNjdkODJjE4hdIA==: 00:34:32.973 01:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmZmMWM5M2VlYjhjYmNjODRkMjlhZTM0MzhmMjRkNDVE0hFW: ]] 00:34:32.973 01:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmZmMWM5M2VlYjhjYmNjODRkMjlhZTM0MzhmMjRkNDVE0hFW: 00:34:32.973 01:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:34:32.973 01:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:32.973 01:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:32.973 01:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:32.973 01:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:32.973 01:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:32.973 01:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:32.973 01:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.973 01:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.973 01:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.973 01:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:32.973 01:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:32.973 01:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:32.973 01:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:32.973 01:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.973 01:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.973 01:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:32.973 01:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:32.973 01:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:32.973 01:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:32.973 01:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:32.973 01:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:32.973 01:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.973 01:45:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.538 nvme0n1 00:34:33.538 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.538 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:33.538 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.538 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:33.538 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.538 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.538 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:33.538 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:33.538 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.538 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.538 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.538 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:33.538 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:34:33.538 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:33.538 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:33.538 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:33.538 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:33.538 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTcwODI3Njg5NGY4NGIxYzRhZjUzNWMzYjkwNmQxMGNjYmQ1YWYxYzkyY2E1OTFlZDc3Nzc0NGZkYTUyOTQ0Zu5bmQM=: 00:34:33.538 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:33.538 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:33.538 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:33.538 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTcwODI3Njg5NGY4NGIxYzRhZjUzNWMzYjkwNmQxMGNjYmQ1YWYxYzkyY2E1OTFlZDc3Nzc0NGZkYTUyOTQ0Zu5bmQM=: 00:34:33.538 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:33.538 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:34:33.538 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:33.538 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:33.538 01:45:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:33.538 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:33.538 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:33.538 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:33.538 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.538 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.538 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.538 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:33.538 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:33.538 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:33.538 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:33.538 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:33.538 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:33.538 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:33.538 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:33.538 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:33.538 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:33.539 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:33.539 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:33.539 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.539 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.104 nvme0n1 00:34:34.104 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.104 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:34.104 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.104 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.104 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:34.104 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.104 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:34.104 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:34.104 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.104 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.362 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.362 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:34.362 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:34.362 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:34:34.362 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:34.362 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:34.362 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:34.362 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:34.362 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU0MTAyMjk2ZWNjNzBlMTMyODc0M2FhZmI5MGI0MWKvKrPV: 00:34:34.362 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWJjNjc4YWZhM2VmMWNiY2I2ZTMxYzBmMmRjOWFjNTg2NTAzMDk2YmExODM5YmVlOWU4ZGE1N2E3YzYwMWVlMieQyA0=: 00:34:34.362 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:34.362 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:34.362 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU0MTAyMjk2ZWNjNzBlMTMyODc0M2FhZmI5MGI0MWKvKrPV: 00:34:34.362 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWJjNjc4YWZhM2VmMWNiY2I2ZTMxYzBmMmRjOWFjNTg2NTAzMDk2YmExODM5YmVlOWU4ZGE1N2E3YzYwMWVlMieQyA0=: ]] 00:34:34.362 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWJjNjc4YWZhM2VmMWNiY2I2ZTMxYzBmMmRjOWFjNTg2NTAzMDk2YmExODM5YmVlOWU4ZGE1N2E3YzYwMWVlMieQyA0=: 00:34:34.362 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:34:34.362 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:34.362 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:34.362 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:34.362 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:34.362 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:34.362 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:34.362 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.362 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.362 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.362 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:34.362 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:34.362 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:34.362 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:34.362 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:34.362 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:34.362 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:34.362 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:34.362 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:34.362 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:34.362 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:34.362 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:34.362 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.362 01:45:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.297 nvme0n1 00:34:35.297 01:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.297 01:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.297 01:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:35.297 01:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.297 01:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.297 01:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.297 01:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:35.297 01:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:35.297 01:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.297 01:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.297 01:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.297 01:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:35.297 01:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:34:35.297 01:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:35.297 01:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:35.297 01:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:35.297 01:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:35.297 01:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjVmYjg0Y2U1YTBhY2NjNTc3NDI2MDdjNGIxODBlZGExMDc4OWU3MDMxNjE0ZTE0eUdqVw==: 00:34:35.297 01:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: 00:34:35.297 01:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:35.297 01:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:35.297 01:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YjVmYjg0Y2U1YTBhY2NjNTc3NDI2MDdjNGIxODBlZGExMDc4OWU3MDMxNjE0ZTE0eUdqVw==: 00:34:35.297 01:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: ]] 00:34:35.297 01:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: 00:34:35.297 01:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:34:35.297 01:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:35.297 01:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:35.297 01:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:35.297 01:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:35.297 01:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:35.297 01:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:35.297 01:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.297 01:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.297 01:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.297 01:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:35.297 01:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:35.297 01:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:35.297 01:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:35.297 01:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.297 01:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.297 01:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:35.297 01:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.297 01:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:35.297 01:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:35.297 01:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:35.297 01:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:35.297 01:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.297 01:45:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.231 nvme0n1 00:34:36.231 01:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.231 01:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:36.231 01:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.231 01:45:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.231 01:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:36.231 01:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.231 01:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:36.231 01:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:36.231 01:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.231 01:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.231 01:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.231 01:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:36.231 01:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:34:36.231 01:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:36.231 01:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:36.231 01:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:36.231 01:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:36.231 01:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY0OGQ2NGU5OTFlYTVlOWU2YmY4OWI1ZWQ2ZGNkZTm/BOw5: 00:34:36.231 01:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzFmMTdmZTUzZjM5NzdkZjRmZDBjMTBiMGY5YmJjNGNX/R7z: 00:34:36.231 01:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:36.231 01:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:36.231 01:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY0OGQ2NGU5OTFlYTVlOWU2YmY4OWI1ZWQ2ZGNkZTm/BOw5: 00:34:36.231 01:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzFmMTdmZTUzZjM5NzdkZjRmZDBjMTBiMGY5YmJjNGNX/R7z: ]] 00:34:36.231 01:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzFmMTdmZTUzZjM5NzdkZjRmZDBjMTBiMGY5YmJjNGNX/R7z: 00:34:36.232 01:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:34:36.232 01:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:36.232 01:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:36.232 01:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:36.232 01:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:36.232 01:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:36.232 01:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:36.232 01:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.232 01:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.232 01:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.232 01:45:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:36.232 01:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:36.232 01:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:36.232 01:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:36.232 01:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:36.232 01:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:36.232 01:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:36.232 01:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:36.232 01:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:36.232 01:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:36.232 01:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:36.232 01:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:36.232 01:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.232 01:45:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.165 nvme0n1 00:34:37.165 01:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.165 01:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:37.165 01:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.165 01:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.165 01:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:37.165 01:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.165 01:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:37.165 01:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:37.165 01:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.165 01:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.165 01:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.165 01:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:37.165 01:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:34:37.165 01:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:37.165 01:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:37.165 01:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:37.165 01:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:37.165 01:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZjFjNzBmMTg1YTg5NTRhOWRiODE5YTkwMzkyNzJhZjdiZGVlNzE1N2IyNjdkODJjE4hdIA==: 00:34:37.165 01:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmZmMWM5M2VlYjhjYmNjODRkMjlhZTM0MzhmMjRkNDVE0hFW: 00:34:37.165 01:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:37.165 01:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:37.165 01:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZjFjNzBmMTg1YTg5NTRhOWRiODE5YTkwMzkyNzJhZjdiZGVlNzE1N2IyNjdkODJjE4hdIA==: 00:34:37.165 01:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmZmMWM5M2VlYjhjYmNjODRkMjlhZTM0MzhmMjRkNDVE0hFW: ]] 00:34:37.165 01:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmZmMWM5M2VlYjhjYmNjODRkMjlhZTM0MzhmMjRkNDVE0hFW: 00:34:37.165 01:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:34:37.165 01:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:37.165 01:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:37.165 01:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:37.166 01:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:37.166 01:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:37.166 01:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:37.166 01:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.166 01:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.166 01:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:37.166 01:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:37.166 01:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:37.166 01:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:37.166 01:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:37.166 01:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:37.166 01:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:37.166 01:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:37.166 01:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:37.166 01:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:37.166 01:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:37.166 01:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:37.166 01:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:37.166 01:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:37.166 
01:45:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.539 nvme0n1 00:34:38.539 01:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.539 01:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:38.539 01:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.539 01:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.539 01:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:38.539 01:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.539 01:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:38.539 01:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:38.539 01:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.539 01:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.539 01:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.539 01:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:38.539 01:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:34:38.539 01:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:38.539 01:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:38.539 01:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:38.539 01:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:38.539 01:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTcwODI3Njg5NGY4NGIxYzRhZjUzNWMzYjkwNmQxMGNjYmQ1YWYxYzkyY2E1OTFlZDc3Nzc0NGZkYTUyOTQ0Zu5bmQM=: 00:34:38.539 01:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:38.539 01:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:38.539 01:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:38.539 01:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTcwODI3Njg5NGY4NGIxYzRhZjUzNWMzYjkwNmQxMGNjYmQ1YWYxYzkyY2E1OTFlZDc3Nzc0NGZkYTUyOTQ0Zu5bmQM=: 00:34:38.539 01:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:38.539 01:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:34:38.539 01:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:38.539 01:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:38.539 01:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:38.539 01:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:38.539 01:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:38.539 01:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:38.539 01:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.539 01:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.539 01:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.539 01:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:38.539 01:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:38.539 01:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:38.539 01:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:38.539 01:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:38.539 01:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:38.539 01:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:38.539 01:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:38.539 01:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:38.539 01:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:38.539 01:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:38.539 01:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:38.539 01:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.539 01:45:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.474 nvme0n1 00:34:39.474 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.474 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:39.474 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:39.474 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.474 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.474 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.474 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:39.474 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:39.474 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.474 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.474 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.474 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:39.474 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:39.474 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:39.474 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:39.474 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:34:39.474 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjVmYjg0Y2U1YTBhY2NjNTc3NDI2MDdjNGIxODBlZGExMDc4OWU3MDMxNjE0ZTE0eUdqVw==: 00:34:39.474 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: 00:34:39.474 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:39.474 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:39.474 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjVmYjg0Y2U1YTBhY2NjNTc3NDI2MDdjNGIxODBlZGExMDc4OWU3MDMxNjE0ZTE0eUdqVw==: 00:34:39.474 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: ]] 00:34:39.474 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: 00:34:39.474 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:39.474 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.474 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.474 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.474 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:34:39.474 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:39.474 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:39.474 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:39.474 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:39.474 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:39.474 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:39.474 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:39.474 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:39.474 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:39.474 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:39.475 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:39.475 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:39.475 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:39.475 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:39.475 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:39.475 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:39.475 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:39.475 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:39.475 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.475 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.475 request: 00:34:39.475 { 00:34:39.475 "name": "nvme0", 00:34:39.475 "trtype": "tcp", 00:34:39.475 "traddr": "10.0.0.1", 00:34:39.475 "adrfam": "ipv4", 00:34:39.475 "trsvcid": "4420", 00:34:39.475 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:39.475 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:39.475 "prchk_reftag": false, 00:34:39.475 "prchk_guard": false, 00:34:39.475 "hdgst": false, 00:34:39.475 "ddgst": false, 00:34:39.475 "allow_unrecognized_csi": false, 00:34:39.475 "method": "bdev_nvme_attach_controller", 00:34:39.475 "req_id": 1 00:34:39.475 } 00:34:39.475 Got JSON-RPC error response 00:34:39.475 response: 00:34:39.475 { 00:34:39.475 "code": -5, 00:34:39.475 "message": "Input/output error" 00:34:39.475 } 00:34:39.475 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:39.475 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:39.475 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:39.475 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:39.475 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:39.475 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:34:39.475 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:34:39.475 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.475 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.475 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.475 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:34:39.475 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:34:39.475 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:39.475 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:39.475 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:39.475 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:39.475 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:39.475 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:39.475 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:39.475 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:39.475 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 
00:34:39.475 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:39.475 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:39.475 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:39.475 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:39.475 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:39.475 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:39.475 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:39.475 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:39.475 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:39.475 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.475 01:45:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.733 request: 00:34:39.733 { 00:34:39.733 "name": "nvme0", 00:34:39.733 "trtype": "tcp", 00:34:39.733 "traddr": "10.0.0.1", 00:34:39.733 "adrfam": "ipv4", 00:34:39.733 "trsvcid": "4420", 00:34:39.733 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:39.733 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:39.733 "prchk_reftag": false, 00:34:39.733 "prchk_guard": false, 00:34:39.733 "hdgst": false, 00:34:39.733 "ddgst": false, 00:34:39.733 "dhchap_key": "key2", 00:34:39.733 "allow_unrecognized_csi": false, 00:34:39.733 "method": "bdev_nvme_attach_controller", 00:34:39.733 "req_id": 1 00:34:39.733 } 00:34:39.733 Got JSON-RPC error response 00:34:39.733 response: 00:34:39.733 { 00:34:39.733 "code": -5, 00:34:39.733 "message": "Input/output error" 00:34:39.733 } 00:34:39.733 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:39.733 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:39.733 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:39.733 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:39.733 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:39.734 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:34:39.734 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:34:39.734 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.734 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.734 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.734 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
00:34:39.734 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:34:39.734 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:39.734 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:39.734 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:39.734 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:39.734 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:39.734 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:39.734 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:39.734 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:39.734 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:39.734 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:39.734 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:39.734 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:39.734 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:39.734 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:39.734 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:39.734 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:39.734 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:39.734 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:39.734 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.734 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.734 request: 00:34:39.734 { 00:34:39.734 "name": "nvme0", 00:34:39.734 "trtype": "tcp", 00:34:39.734 "traddr": "10.0.0.1", 00:34:39.734 "adrfam": "ipv4", 00:34:39.734 "trsvcid": "4420", 00:34:39.734 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:39.734 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:39.734 "prchk_reftag": false, 00:34:39.734 "prchk_guard": false, 00:34:39.734 "hdgst": false, 00:34:39.734 "ddgst": false, 00:34:39.734 "dhchap_key": "key1", 00:34:39.734 "dhchap_ctrlr_key": "ckey2", 00:34:39.734 "allow_unrecognized_csi": false, 00:34:39.734 "method": "bdev_nvme_attach_controller", 00:34:39.734 "req_id": 1 00:34:39.734 } 00:34:39.734 Got JSON-RPC error response 00:34:39.734 response: 00:34:39.734 { 00:34:39.734 "code": -5, 00:34:39.734 "message": "Input/output 
error" 00:34:39.734 } 00:34:39.734 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:39.734 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:39.734 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:39.734 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:39.734 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:39.734 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:34:39.734 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:39.734 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:39.734 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:39.734 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:39.734 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:39.734 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:39.734 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:39.734 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:39.734 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:39.734 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:39.734 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:39.734 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.734 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.992 nvme0n1 00:34:39.992 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.992 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:39.992 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:39.992 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:39.992 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:39.992 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:39.992 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY0OGQ2NGU5OTFlYTVlOWU2YmY4OWI1ZWQ2ZGNkZTm/BOw5: 00:34:39.992 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzFmMTdmZTUzZjM5NzdkZjRmZDBjMTBiMGY5YmJjNGNX/R7z: 00:34:39.992 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:39.992 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:39.992 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY0OGQ2NGU5OTFlYTVlOWU2YmY4OWI1ZWQ2ZGNkZTm/BOw5: 00:34:39.992 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzFmMTdmZTUzZjM5NzdkZjRmZDBjMTBiMGY5YmJjNGNX/R7z: ]] 00:34:39.992 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzFmMTdmZTUzZjM5NzdkZjRmZDBjMTBiMGY5YmJjNGNX/R7z: 00:34:39.992 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:39.992 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.992 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.992 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.992 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:34:39.992 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.992 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:34:39.992 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.992 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:39.992 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:39.992 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:39.992 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:34:39.992 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:39.992 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:39.992 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:39.992 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:39.992 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:39.992 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:39.992 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.992 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.992 request: 00:34:39.992 { 00:34:39.992 "name": "nvme0", 00:34:39.992 "dhchap_key": "key1", 00:34:39.992 "dhchap_ctrlr_key": "ckey2", 00:34:39.992 "method": "bdev_nvme_set_keys", 00:34:39.992 "req_id": 1 00:34:39.992 } 00:34:39.992 Got JSON-RPC error response 00:34:39.992 response: 00:34:39.992 { 00:34:39.992 "code": -13, 00:34:39.992 "message": "Permission denied" 00:34:39.992 } 00:34:39.992 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:39.992 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:39.992 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:39.992 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:39.992 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:34:39.992 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:39.992 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:39.992 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:39.992 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.992 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.250 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:34:40.250 01:45:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:34:41.181 01:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.181 01:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:41.181 01:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.181 01:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.181 01:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.181 01:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:34:41.181 01:45:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:34:42.172 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:42.172 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:42.172 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.172 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.172 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.172 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:34:42.172 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:42.172 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:42.172 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:42.172 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:42.172 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:42.172 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjVmYjg0Y2U1YTBhY2NjNTc3NDI2MDdjNGIxODBlZGExMDc4OWU3MDMxNjE0ZTE0eUdqVw==: 00:34:42.172 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: 00:34:42.172 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:42.172 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:42.172 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjVmYjg0Y2U1YTBhY2NjNTc3NDI2MDdjNGIxODBlZGExMDc4OWU3MDMxNjE0ZTE0eUdqVw==: 00:34:42.172 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: ]] 00:34:42.172 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZTQwOTVjMGU0YWUxMDM0Mjg1YzE2ZGM2YjQ5YWQxOGJjY2Q5ZWUxMWQwOGNhOGJjwcZhkQ==: 00:34:42.172 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:34:42.172 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:34:42.172 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:34:42.172 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:34:42.172 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:42.172 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:42.172 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:34:42.172 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:42.172 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:34:42.172 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:34:42.172 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:34:42.172 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:42.172 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.172 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.430 nvme0n1 00:34:42.430 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.430 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:42.430 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:42.430 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:42.430 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:42.430 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:42.430 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWY0OGQ2NGU5OTFlYTVlOWU2YmY4OWI1ZWQ2ZGNkZTm/BOw5: 00:34:42.430 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzFmMTdmZTUzZjM5NzdkZjRmZDBjMTBiMGY5YmJjNGNX/R7z: 00:34:42.430 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:42.430 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:42.430 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWY0OGQ2NGU5OTFlYTVlOWU2YmY4OWI1ZWQ2ZGNkZTm/BOw5: 00:34:42.430 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzFmMTdmZTUzZjM5NzdkZjRmZDBjMTBiMGY5YmJjNGNX/R7z: ]] 00:34:42.430 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzFmMTdmZTUzZjM5NzdkZjRmZDBjMTBiMGY5YmJjNGNX/R7z: 00:34:42.430 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:42.430 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:34:42.430 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:42.430 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:34:42.430 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:42.430 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:34:42.430 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:42.430 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:42.430 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.430 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.430 request: 00:34:42.430 { 00:34:42.430 "name": "nvme0", 00:34:42.430 "dhchap_key": "key2", 00:34:42.430 "dhchap_ctrlr_key": "ckey1", 00:34:42.430 "method": "bdev_nvme_set_keys", 00:34:42.430 "req_id": 1 00:34:42.430 } 00:34:42.430 Got JSON-RPC error response 00:34:42.430 response: 00:34:42.430 { 00:34:42.430 "code": -13, 00:34:42.430 "message": "Permission denied" 00:34:42.430 } 00:34:42.430 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:42.430 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:34:42.430 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:34:42.430 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:42.430 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:42.430 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:42.430 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:42.430 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:42.430 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.430 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:42.430 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:34:42.430 01:45:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:34:43.363 01:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:43.363 01:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:43.363 01:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.363 01:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.363 01:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.621 01:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:34:43.621 01:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:34:43.621 01:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:34:43.621 01:45:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:34:43.621 01:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:34:43.621 01:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:34:43.621 01:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:43.621 01:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:34:43.621 01:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:43.621 01:45:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:43.621 rmmod nvme_tcp 00:34:43.621 rmmod nvme_fabrics 00:34:43.621 01:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:43.621 01:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:34:43.621 01:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:34:43.621 01:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@515 -- # '[' -n 1743062 ']' 00:34:43.621 01:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # killprocess 1743062 00:34:43.621 01:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 1743062 ']' 00:34:43.621 01:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 1743062 00:34:43.621 01:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:34:43.621 01:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:43.621 01:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1743062 00:34:43.621 01:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:43.621 01:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:43.621 01:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1743062' 00:34:43.621 killing process with pid 1743062 00:34:43.622 01:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 1743062 00:34:43.622 01:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 1743062 00:34:43.879 01:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:34:43.879 01:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:34:43.879 01:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:34:43.879 01:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:34:43.879 01:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-save 00:34:43.880 01:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:34:43.880 01:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-restore 00:34:43.880 01:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:43.880 01:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:43.880 01:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:43.880 01:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:34:43.880 01:45:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:45.779 01:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:45.779 01:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:45.779 01:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:45.779 01:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:34:45.779 01:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:34:45.779 01:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # echo 0 00:34:45.779 01:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:45.779 01:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:45.779 01:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:45.779 01:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:45.779 01:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:34:45.779 01:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:34:45.779 01:45:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:47.153 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:47.153 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:47.153 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:47.153 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:47.153 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:47.153 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:47.153 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:47.154 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:47.154 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:47.154 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:47.154 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:47.154 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:47.413 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:47.413 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:47.413 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:47.413 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:48.350 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:34:48.350 01:45:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.433 /tmp/spdk.key-null.zZD /tmp/spdk.key-sha256.FIx /tmp/spdk.key-sha384.p8c /tmp/spdk.key-sha512.vZx /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:34:48.350 01:45:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:49.726 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:49.726 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:49.726 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 
00:34:49.726 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:49.726 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:49.726 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:49.726 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:49.726 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:49.726 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:49.726 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:49.726 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:49.726 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:49.726 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:49.726 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:49.726 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:49.726 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:49.726 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:49.726 00:34:49.726 real 0m53.972s 00:34:49.726 user 0m51.403s 00:34:49.726 sys 0m6.038s 00:34:49.726 01:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:49.726 01:45:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.726 ************************************ 00:34:49.726 END TEST nvmf_auth_host 00:34:49.726 ************************************ 00:34:49.726 01:45:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:34:49.726 01:45:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:49.726 01:45:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:49.726 01:45:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:49.726 01:45:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.726 ************************************ 00:34:49.726 START TEST nvmf_digest 00:34:49.726 ************************************ 00:34:49.726 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:49.726 * Looking for test storage... 
00:34:49.726 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:49.726 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:49.726 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:34:49.726 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:49.985 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:49.985 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:49.985 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:49.985 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:49.985 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:34:49.985 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:34:49.985 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:34:49.985 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:34:49.985 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:34:49.985 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:34:49.985 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:34:49.985 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:49.985 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:34:49.985 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:34:49.985 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:49.985 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:49.985 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:34:49.985 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:34:49.985 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:49.985 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:34:49.985 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:34:49.985 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:34:49.985 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:34:49.985 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:49.985 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:34:49.985 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:34:49.985 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:49.985 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:49.985 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:34:49.985 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:49.985 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:49.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:49.985 --rc genhtml_branch_coverage=1 00:34:49.985 --rc genhtml_function_coverage=1 00:34:49.985 --rc genhtml_legend=1 00:34:49.985 --rc geninfo_all_blocks=1 00:34:49.985 --rc geninfo_unexecuted_blocks=1 00:34:49.985 00:34:49.985 ' 00:34:49.985 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:49.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:49.985 --rc genhtml_branch_coverage=1 00:34:49.985 --rc genhtml_function_coverage=1 00:34:49.985 --rc genhtml_legend=1 00:34:49.985 --rc geninfo_all_blocks=1 00:34:49.985 --rc geninfo_unexecuted_blocks=1 00:34:49.985 00:34:49.985 ' 00:34:49.985 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:49.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:49.985 --rc genhtml_branch_coverage=1 00:34:49.985 --rc genhtml_function_coverage=1 00:34:49.985 --rc genhtml_legend=1 00:34:49.985 --rc geninfo_all_blocks=1 00:34:49.985 --rc geninfo_unexecuted_blocks=1 00:34:49.985 00:34:49.985 ' 00:34:49.985 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:49.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:49.985 --rc genhtml_branch_coverage=1 00:34:49.985 --rc genhtml_function_coverage=1 00:34:49.985 --rc genhtml_legend=1 00:34:49.985 --rc geninfo_all_blocks=1 00:34:49.985 --rc geninfo_unexecuted_blocks=1 00:34:49.985 00:34:49.985 ' 00:34:49.985 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:49.985 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:34:49.985 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:49.985 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:49.985 
01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:49.985 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:49.985 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:49.985 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:49.985 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:49.986 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:49.986 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:49.986 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:49.986 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:49.986 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:49.986 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:49.986 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:49.986 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:49.986 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:49.986 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:49.986 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:34:49.986 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:49.986 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:49.986 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:49.986 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.986 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.986 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.986 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:34:49.986 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.986 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:34:49.986 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:49.986 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:49.986 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:49.986 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:49.986 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:49.986 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:49.986 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:49.986 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:49.986 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:49.986 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:49.986 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:34:49.986 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:34:49.986 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:34:49.986 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:34:49.986 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:34:49.986 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:34:49.986 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:49.986 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # prepare_net_devs 00:34:49.986 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # local -g is_hw=no 00:34:49.986 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # remove_spdk_ns 00:34:49.986 01:45:35 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:49.986 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:49.986 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:49.986 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:34:49.986 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:34:49.986 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:34:49.986 01:45:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:52.518 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:52.518 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:34:52.518 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:52.518 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:52.518 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:52.518 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:52.518 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:52.518 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:34:52.518 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:52.518 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:34:52.518 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:34:52.518 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:34:52.518 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:34:52.518 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:34:52.518 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:34:52.518 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:52.518 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:52.518 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:52.518 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:52.518 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:52.518 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:52.518 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:52.518 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:52.518 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:52.518 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:52.518 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:52.518 
01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:52.518 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:52.518 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:52.518 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:52.518 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:52.518 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:52.518 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:52.518 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:52.518 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:52.518 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:52.518 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:52.518 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:52.518 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:52.518 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:52.518 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:52.518 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:52.518 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:52.518 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:52.518 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:52.518 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:52.518 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:52.518 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:52.518 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:52.518 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:52.518 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:52.518 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:52.519 Found net devices under 0000:0a:00.0: cvl_0_0 
00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:52.519 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # is_hw=yes 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:52.519 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:52.519 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:34:52.519 00:34:52.519 --- 10.0.0.2 ping statistics --- 00:34:52.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:52.519 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:52.519 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:52.519 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:34:52.519 00:34:52.519 --- 10.0.0.1 ping statistics --- 00:34:52.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:52.519 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # return 0 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:52.519 ************************************ 00:34:52.519 START TEST nvmf_digest_clean 00:34:52.519 ************************************ 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # nvmfpid=1753551 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # waitforlisten 1753551 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1753551 ']' 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:52.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:52.519 [2024-10-13 01:45:37.713726] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:34:52.519 [2024-10-13 01:45:37.713824] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:52.519 [2024-10-13 01:45:37.778417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:52.519 [2024-10-13 01:45:37.824891] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:52.519 [2024-10-13 01:45:37.824950] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:52.519 [2024-10-13 01:45:37.824963] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:52.519 [2024-10-13 01:45:37.824974] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:52.519 [2024-10-13 01:45:37.824983] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:52.519 [2024-10-13 01:45:37.825587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:52.519 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:34:52.520 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:34:52.520 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:34:52.520 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:52.520 01:45:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:52.520 null0 00:34:52.520 [2024-10-13 01:45:38.084028] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:52.779 [2024-10-13 01:45:38.108286] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:52.779 01:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:52.779 01:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:34:52.779 01:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:52.779 01:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:52.779 01:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:34:52.779 01:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:34:52.779 01:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:34:52.779 01:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:52.779 01:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1753576 00:34:52.779 01:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:52.779 01:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1753576 /var/tmp/bperf.sock 00:34:52.779 01:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1753576 ']' 00:34:52.779 01:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:52.779 01:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:34:52.779 01:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:52.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:52.779 01:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:52.779 01:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:52.779 [2024-10-13 01:45:38.161707] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:34:52.779 [2024-10-13 01:45:38.161782] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1753576 ] 00:34:52.779 [2024-10-13 01:45:38.228369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:52.779 [2024-10-13 01:45:38.279615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:53.036 01:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:53.036 01:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:34:53.036 01:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:53.036 01:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:53.036 01:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:53.294 01:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:53.294 01:45:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:53.859 nvme0n1 00:34:53.859 01:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:53.859 01:45:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:53.859 Running I/O for 2 seconds... 
00:34:56.167 18007.00 IOPS, 70.34 MiB/s [2024-10-12T23:45:41.745Z] 18122.50 IOPS, 70.79 MiB/s 00:34:56.167 Latency(us) 00:34:56.167 [2024-10-12T23:45:41.745Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:56.167 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:34:56.167 nvme0n1 : 2.01 18134.70 70.84 0.00 0.00 7048.68 3980.71 16699.54 00:34:56.167 [2024-10-12T23:45:41.745Z] =================================================================================================================== 00:34:56.167 [2024-10-12T23:45:41.745Z] Total : 18134.70 70.84 0.00 0.00 7048.68 3980.71 16699.54 00:34:56.167 { 00:34:56.167 "results": [ 00:34:56.167 { 00:34:56.167 "job": "nvme0n1", 00:34:56.167 "core_mask": "0x2", 00:34:56.167 "workload": "randread", 00:34:56.167 "status": "finished", 00:34:56.167 "queue_depth": 128, 00:34:56.167 "io_size": 4096, 00:34:56.167 "runtime": 2.005713, 00:34:56.167 "iops": 18134.69823449317, 00:34:56.167 "mibps": 70.83866497848895, 00:34:56.167 "io_failed": 0, 00:34:56.167 "io_timeout": 0, 00:34:56.167 "avg_latency_us": 7048.675671555316, 00:34:56.167 "min_latency_us": 3980.705185185185, 00:34:56.167 "max_latency_us": 16699.543703703705 00:34:56.167 } 00:34:56.167 ], 00:34:56.167 "core_count": 1 00:34:56.167 } 00:34:56.167 01:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:56.167 01:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:56.167 01:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:56.167 01:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:56.167 01:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:56.167 | select(.opcode=="crc32c") 00:34:56.167 | "\(.module_name) \(.executed)"' 00:34:56.167 01:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:56.167 01:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:56.167 01:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:56.167 01:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:56.167 01:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1753576 00:34:56.167 01:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1753576 ']' 00:34:56.167 01:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1753576 00:34:56.167 01:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:34:56.167 01:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:56.167 01:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1753576 00:34:56.167 01:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:56.167 01:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:34:56.167 01:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1753576' 00:34:56.167 killing process with pid 1753576 00:34:56.167 01:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1753576 00:34:56.167 Received shutdown signal, test time was about 2.000000 seconds 00:34:56.167 00:34:56.167 Latency(us) 00:34:56.167 [2024-10-12T23:45:41.745Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:56.167 [2024-10-12T23:45:41.745Z] =================================================================================================================== 00:34:56.167 [2024-10-12T23:45:41.745Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:56.167 01:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1753576 00:34:56.425 01:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:34:56.425 01:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:56.425 01:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:56.425 01:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:34:56.425 01:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:34:56.425 01:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:34:56.425 01:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:56.425 01:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1754103 00:34:56.425 01:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:34:56.425 01:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1754103 /var/tmp/bperf.sock 00:34:56.425 01:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1754103 ']' 00:34:56.425 01:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:56.425 01:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:56.425 01:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:56.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:56.425 01:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:56.425 01:45:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:56.425 [2024-10-13 01:45:41.911884] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
00:34:56.425 [2024-10-13 01:45:41.911963] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1754103 ] 00:34:56.425 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:56.425 Zero copy mechanism will not be used. 00:34:56.425 [2024-10-13 01:45:41.971353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:56.683 [2024-10-13 01:45:42.021313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:56.683 01:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:56.683 01:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:34:56.683 01:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:56.683 01:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:56.683 01:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:57.250 01:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:57.250 01:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:57.507 nvme0n1 00:34:57.507 01:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:57.508 01:45:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:57.766 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:57.766 Zero copy mechanism will not be used. 00:34:57.766 Running I/O for 2 seconds... 
00:34:59.634 4295.00 IOPS, 536.88 MiB/s [2024-10-12T23:45:45.212Z] 4297.50 IOPS, 537.19 MiB/s 00:34:59.634 Latency(us) 00:34:59.634 [2024-10-12T23:45:45.212Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:59.634 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:34:59.634 nvme0n1 : 2.00 4297.02 537.13 0.00 0.00 3718.94 940.56 12815.93 00:34:59.634 [2024-10-12T23:45:45.212Z] =================================================================================================================== 00:34:59.634 [2024-10-12T23:45:45.212Z] Total : 4297.02 537.13 0.00 0.00 3718.94 940.56 12815.93 00:34:59.634 { 00:34:59.634 "results": [ 00:34:59.634 { 00:34:59.634 "job": "nvme0n1", 00:34:59.634 "core_mask": "0x2", 00:34:59.634 "workload": "randread", 00:34:59.634 "status": "finished", 00:34:59.634 "queue_depth": 16, 00:34:59.634 "io_size": 131072, 00:34:59.634 "runtime": 2.003948, 00:34:59.634 "iops": 4297.017687085693, 00:34:59.634 "mibps": 537.1272108857116, 00:34:59.634 "io_failed": 0, 00:34:59.634 "io_timeout": 0, 00:34:59.634 "avg_latency_us": 3718.9446814367498, 00:34:59.634 "min_latency_us": 940.562962962963, 00:34:59.634 "max_latency_us": 12815.92888888889 00:34:59.634 } 00:34:59.634 ], 00:34:59.634 "core_count": 1 00:34:59.634 } 00:34:59.634 01:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:34:59.635 01:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:34:59.635 01:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:34:59.635 01:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:34:59.635 | select(.opcode=="crc32c") 00:34:59.635 | "\(.module_name) \(.executed)"' 00:34:59.635 01:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:59.892 01:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:34:59.892 01:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:34:59.892 01:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:34:59.893 01:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:34:59.893 01:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1754103 00:34:59.893 01:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1754103 ']' 00:34:59.893 01:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1754103 00:34:59.893 01:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:34:59.893 01:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:59.893 01:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1754103 00:34:59.893 01:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:59.893 01:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:34:59.893 01:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1754103' 00:34:59.893 killing process with pid 1754103 00:34:59.893 01:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1754103 00:34:59.893 Received shutdown signal, test time was about 2.000000 seconds 00:34:59.893 00:34:59.893 Latency(us) 00:34:59.893 [2024-10-12T23:45:45.471Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:59.893 [2024-10-12T23:45:45.471Z] =================================================================================================================== 00:34:59.893 [2024-10-12T23:45:45.471Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:59.893 01:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1754103 00:35:00.151 01:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:35:00.151 01:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:00.151 01:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:00.151 01:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:00.151 01:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:00.151 01:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:00.151 01:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:00.151 01:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1754511 00:35:00.151 01:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:00.151 01:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1754511 /var/tmp/bperf.sock 00:35:00.151 01:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1754511 ']' 00:35:00.151 01:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:00.151 01:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:00.151 01:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:00.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:00.151 01:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:00.151 01:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:00.151 [2024-10-13 01:45:45.696760] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
00:35:00.151 [2024-10-13 01:45:45.696854] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1754511 ] 00:35:00.409 [2024-10-13 01:45:45.758444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:00.409 [2024-10-13 01:45:45.809721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:00.409 01:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:00.409 01:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:35:00.409 01:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:00.409 01:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:00.409 01:45:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:00.975 01:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:00.975 01:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:01.233 nvme0n1 00:35:01.491 01:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:01.491 01:45:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:01.491 Running I/O for 2 seconds... 
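For reference, the sequence driving this pass can be replayed by hand against a standalone bdevperf instance; every command below is copied from the invocations recorded in this log (workspace path, bperf socket, target address and NQN included), so treat it as a condensed transcript rather than a canonical recipe:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/bperf.sock
  # Same flags as the run above: -z makes bdevperf wait for perform_tests,
  # --wait-for-rpc defers framework init until framework_start_init is called.
  $SPDK/build/examples/bdevperf -m 2 -r $SOCK -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  # Finish framework init, attach the target with data digest enabled, then run the workload.
  $SPDK/scripts/rpc.py -s $SOCK framework_start_init
  $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests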
00:35:03.797 19454.00 IOPS, 75.99 MiB/s [2024-10-12T23:45:49.375Z] 18955.00 IOPS, 74.04 MiB/s 00:35:03.797 Latency(us) 00:35:03.797 [2024-10-12T23:45:49.375Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:03.797 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:03.797 nvme0n1 : 2.01 18954.95 74.04 0.00 0.00 6737.03 2669.99 11893.57 00:35:03.797 [2024-10-12T23:45:49.375Z] =================================================================================================================== 00:35:03.797 [2024-10-12T23:45:49.375Z] Total : 18954.95 74.04 0.00 0.00 6737.03 2669.99 11893.57 00:35:03.797 { 00:35:03.797 "results": [ 00:35:03.797 { 00:35:03.797 "job": "nvme0n1", 00:35:03.797 "core_mask": "0x2", 00:35:03.797 "workload": "randwrite", 00:35:03.797 "status": "finished", 00:35:03.797 "queue_depth": 128, 00:35:03.797 "io_size": 4096, 00:35:03.797 "runtime": 2.008868, 00:35:03.797 "iops": 18954.953735138395, 00:35:03.797 "mibps": 74.04278802788436, 00:35:03.797 "io_failed": 0, 00:35:03.797 "io_timeout": 0, 00:35:03.797 "avg_latency_us": 6737.034550445188, 00:35:03.797 "min_latency_us": 2669.9851851851854, 00:35:03.797 "max_latency_us": 11893.570370370371 00:35:03.797 } 00:35:03.797 ], 00:35:03.797 "core_count": 1 00:35:03.797 } 00:35:03.797 01:45:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:03.797 01:45:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:03.797 01:45:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:03.797 01:45:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:03.797 | select(.opcode=="crc32c") 00:35:03.797 | "\(.module_name) \(.executed)"' 00:35:03.797 01:45:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:03.797 01:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:03.797 01:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:03.797 01:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:03.797 01:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:03.797 01:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1754511 00:35:03.797 01:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1754511 ']' 00:35:03.797 01:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1754511 00:35:03.797 01:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:35:03.797 01:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:03.797 01:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1754511 00:35:03.797 01:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:03.797 01:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # 
'[' reactor_1 = sudo ']' 00:35:03.797 01:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1754511' 00:35:03.797 killing process with pid 1754511 00:35:03.797 01:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1754511 00:35:03.797 Received shutdown signal, test time was about 2.000000 seconds 00:35:03.797 00:35:03.797 Latency(us) 00:35:03.797 [2024-10-12T23:45:49.375Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:03.797 [2024-10-12T23:45:49.375Z] =================================================================================================================== 00:35:03.797 [2024-10-12T23:45:49.375Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:03.797 01:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1754511 00:35:04.056 01:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:35:04.056 01:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:04.056 01:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:04.056 01:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:04.056 01:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:04.056 01:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:04.056 01:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:04.056 01:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1754921 00:35:04.056 01:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1754921 /var/tmp/bperf.sock 00:35:04.056 01:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:04.056 01:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1754921 ']' 00:35:04.056 01:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:04.056 01:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:04.056 01:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:04.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:04.056 01:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:04.056 01:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:04.056 [2024-10-13 01:45:49.531229] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
00:35:04.056 [2024-10-13 01:45:49.531319] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1754921 ] 00:35:04.056 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:04.056 Zero copy mechanism will not be used. 00:35:04.056 [2024-10-13 01:45:49.596671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:04.314 [2024-10-13 01:45:49.653080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:04.314 01:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:04.314 01:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:35:04.314 01:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:04.314 01:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:04.314 01:45:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:04.879 01:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:04.879 01:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:05.137 nvme0n1 00:35:05.137 01:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:05.137 01:45:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:05.395 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:05.395 Zero copy mechanism will not be used. 00:35:05.395 Running I/O for 2 seconds... 
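The MiB/s column reported for each run above is simply IOPS multiplied by the configured I/O size; plugging in the figures from the two finished runs reproduces the reported values:

  # IOPS * io_size / 2^20, values copied from the JSON results printed above.
  awk 'BEGIN { printf "%.2f MiB/s\n", 4297.017687085693 * 131072 / 1048576 }'   # first run:  537.13
  awk 'BEGIN { printf "%.2f MiB/s\n", 18954.953735138395 * 4096 / 1048576 }'    # second run:  74.04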
00:35:07.263 5682.00 IOPS, 710.25 MiB/s [2024-10-12T23:45:52.841Z] 5584.50 IOPS, 698.06 MiB/s 00:35:07.263 Latency(us) 00:35:07.263 [2024-10-12T23:45:52.841Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:07.263 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:07.263 nvme0n1 : 2.00 5581.10 697.64 0.00 0.00 2859.38 2099.58 7767.23 00:35:07.263 [2024-10-12T23:45:52.841Z] =================================================================================================================== 00:35:07.263 [2024-10-12T23:45:52.841Z] Total : 5581.10 697.64 0.00 0.00 2859.38 2099.58 7767.23 00:35:07.263 { 00:35:07.263 "results": [ 00:35:07.263 { 00:35:07.263 "job": "nvme0n1", 00:35:07.263 "core_mask": "0x2", 00:35:07.263 "workload": "randwrite", 00:35:07.263 "status": "finished", 00:35:07.263 "queue_depth": 16, 00:35:07.263 "io_size": 131072, 00:35:07.263 "runtime": 2.004622, 00:35:07.263 "iops": 5581.102073109045, 00:35:07.263 "mibps": 697.6377591386306, 00:35:07.263 "io_failed": 0, 00:35:07.263 "io_timeout": 0, 00:35:07.263 "avg_latency_us": 2859.382133767661, 00:35:07.263 "min_latency_us": 2099.5792592592593, 00:35:07.263 "max_latency_us": 7767.22962962963 00:35:07.263 } 00:35:07.263 ], 00:35:07.263 "core_count": 1 00:35:07.263 } 00:35:07.263 01:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:07.263 01:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:07.263 01:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:07.263 01:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:07.263 01:45:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:07.263 | select(.opcode=="crc32c") 00:35:07.263 | "\(.module_name) \(.executed)"' 00:35:07.521 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:07.521 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:07.521 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:07.521 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:07.521 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1754921 00:35:07.521 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1754921 ']' 00:35:07.521 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1754921 00:35:07.521 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:35:07.521 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:07.521 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1754921 00:35:07.521 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:07.521 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:35:07.521 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1754921' 00:35:07.521 killing process with pid 1754921 00:35:07.521 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1754921 00:35:07.521 Received shutdown signal, test time was about 2.000000 seconds 00:35:07.521 00:35:07.521 Latency(us) 00:35:07.521 [2024-10-12T23:45:53.099Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:07.521 [2024-10-12T23:45:53.099Z] =================================================================================================================== 00:35:07.521 [2024-10-12T23:45:53.099Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:07.521 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1754921 00:35:07.780 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1753551 00:35:07.780 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1753551 ']' 00:35:07.780 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1753551 00:35:07.780 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:35:07.780 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:07.780 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1753551 00:35:07.780 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:07.780 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:07.780 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1753551' 00:35:07.780 killing process with pid 1753551 00:35:07.780 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1753551 00:35:07.780 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1753551 00:35:08.038 00:35:08.038 real 0m15.834s 00:35:08.038 user 0m31.591s 00:35:08.038 sys 0m4.294s 00:35:08.038 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:08.038 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:08.038 ************************************ 00:35:08.038 END TEST nvmf_digest_clean 00:35:08.038 ************************************ 00:35:08.038 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:35:08.038 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:08.038 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:08.038 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:08.038 ************************************ 00:35:08.038 START TEST nvmf_digest_error 00:35:08.038 ************************************ 00:35:08.038 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # 
run_digest_error 00:35:08.038 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:35:08.038 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:08.038 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:08.038 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:08.038 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # nvmfpid=1755472 00:35:08.038 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:08.038 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # waitforlisten 1755472 00:35:08.038 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1755472 ']' 00:35:08.038 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:08.038 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:08.038 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:08.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:08.038 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:08.038 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:08.038 [2024-10-13 01:45:53.602425] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:35:08.038 [2024-10-13 01:45:53.602547] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:08.297 [2024-10-13 01:45:53.671874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:08.297 [2024-10-13 01:45:53.716890] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:08.297 [2024-10-13 01:45:53.716960] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:08.297 [2024-10-13 01:45:53.716986] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:08.297 [2024-10-13 01:45:53.717000] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:08.297 [2024-10-13 01:45:53.717012] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
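As the notice above points out, the target was launched with tracepoint group mask 0xFFFF, so its trace history can be inspected while the test runs; the command comes straight from the notice, only the redirection target is an arbitrary example:

  # Snapshot the nvmf target's trace buffer (instance id 0)...
  spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.txt
  # ...or copy /dev/shm/nvmf_trace.0 for offline analysis, as the notice suggests.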
00:35:08.297 [2024-10-13 01:45:53.717660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:08.297 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:08.297 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:35:08.297 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:08.297 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:08.297 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:08.297 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:08.297 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:35:08.297 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:08.297 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:08.297 [2024-10-13 01:45:53.858453] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:35:08.297 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:08.297 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:35:08.297 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:35:08.297 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:08.297 01:45:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:08.555 null0 00:35:08.555 [2024-10-13 01:45:53.979603] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:08.555 [2024-10-13 01:45:54.003836] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:08.555 01:45:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:08.555 01:45:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:35:08.555 01:45:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:08.555 01:45:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:08.555 01:45:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:35:08.555 01:45:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:35:08.555 01:45:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1755502 00:35:08.555 01:45:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:35:08.555 01:45:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1755502 /var/tmp/bperf.sock 00:35:08.555 01:45:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1755502 ']' 
00:35:08.555 01:45:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:08.555 01:45:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:08.555 01:45:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:08.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:08.555 01:45:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:08.555 01:45:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:08.555 [2024-10-13 01:45:54.055821] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:35:08.555 [2024-10-13 01:45:54.055898] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1755502 ] 00:35:08.555 [2024-10-13 01:45:54.122063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:08.814 [2024-10-13 01:45:54.173348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:08.814 01:45:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:08.814 01:45:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:35:08.814 01:45:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:08.814 01:45:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:09.072 01:45:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:09.072 01:45:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:09.072 01:45:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:09.072 01:45:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:09.072 01:45:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:09.072 01:45:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:09.638 nvme0n1 00:35:09.638 01:45:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:09.638 01:45:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:09.638 01:45:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
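The failure path exercised next is armed entirely through RPCs that appear verbatim above: crc32c work was assigned to the error accel module when the target came up, injection stayed disabled while bperf attached the controller with --ddgst, and corruption was then enabled for a fixed number of operations. Condensed, and assuming the harness's default target RPC socket:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # At target configuration time: route every crc32c operation to the "error" module.
  $RPC accel_assign_opc -o crc32c -m error
  # Keep injection off while the controller is attached with data digest enabled...
  $RPC accel_error_inject_error -o crc32c -t disable
  # ...then corrupt the next 256 crc32c operations. The initiator sees them as
  # "data digest error" completions with COMMAND TRANSIENT TRANSPORT ERROR status,
  # which bdev_nvme keeps retrying (--bdev-retry-count -1 was set on the bperf side).
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 256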
00:35:09.638 01:45:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:09.638 01:45:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:09.638 01:45:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:09.638 Running I/O for 2 seconds... 00:35:09.638 [2024-10-13 01:45:55.152692] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:09.638 [2024-10-13 01:45:55.152749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.638 [2024-10-13 01:45:55.152786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.638 [2024-10-13 01:45:55.168842] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:09.638 [2024-10-13 01:45:55.168880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.638 [2024-10-13 01:45:55.168900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.638 [2024-10-13 01:45:55.184683] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:09.638 [2024-10-13 01:45:55.184716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.638 [2024-10-13 01:45:55.184735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.638 [2024-10-13 01:45:55.197339] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:09.638 [2024-10-13 01:45:55.197375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:13322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.638 [2024-10-13 01:45:55.197394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.638 [2024-10-13 01:45:55.214393] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:09.638 [2024-10-13 01:45:55.214431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.638 [2024-10-13 01:45:55.214451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.897 [2024-10-13 01:45:55.230312] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:09.897 [2024-10-13 01:45:55.230347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.897 [2024-10-13 01:45:55.230367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.897 [2024-10-13 01:45:55.245258] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:09.897 [2024-10-13 01:45:55.245303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.897 [2024-10-13 01:45:55.245324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.897 [2024-10-13 01:45:55.257625] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:09.897 [2024-10-13 01:45:55.257655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:24855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.897 [2024-10-13 01:45:55.257672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.897 [2024-10-13 01:45:55.274441] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:09.897 [2024-10-13 01:45:55.274482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:4634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.897 [2024-10-13 01:45:55.274517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.897 [2024-10-13 01:45:55.290349] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:09.897 [2024-10-13 01:45:55.290385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.897 [2024-10-13 01:45:55.290405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.897 [2024-10-13 01:45:55.305177] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:09.897 [2024-10-13 01:45:55.305212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:10782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.897 [2024-10-13 01:45:55.305232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.897 [2024-10-13 01:45:55.320914] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:09.897 [2024-10-13 01:45:55.320950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:17869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.897 [2024-10-13 01:45:55.320969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.897 [2024-10-13 01:45:55.334847] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:09.897 [2024-10-13 01:45:55.334883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:14778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.897 [2024-10-13 01:45:55.334903] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.897 [2024-10-13 01:45:55.348958] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:09.897 [2024-10-13 01:45:55.348993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.897 [2024-10-13 01:45:55.349012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.897 [2024-10-13 01:45:55.364130] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:09.897 [2024-10-13 01:45:55.364166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.897 [2024-10-13 01:45:55.364185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.897 [2024-10-13 01:45:55.380204] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:09.897 [2024-10-13 01:45:55.380238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.897 [2024-10-13 01:45:55.380257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.897 [2024-10-13 01:45:55.394726] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:09.897 [2024-10-13 01:45:55.394755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.897 [2024-10-13 01:45:55.394772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.897 [2024-10-13 01:45:55.409607] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:09.897 [2024-10-13 01:45:55.409653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:14541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.897 [2024-10-13 01:45:55.409670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.897 [2024-10-13 01:45:55.425074] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:09.897 [2024-10-13 01:45:55.425110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.897 [2024-10-13 01:45:55.425130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.897 [2024-10-13 01:45:55.439600] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:09.898 [2024-10-13 01:45:55.439629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:17400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.898 [2024-10-13 
01:45:55.439644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.898 [2024-10-13 01:45:55.454499] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:09.898 [2024-10-13 01:45:55.454548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:7282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.898 [2024-10-13 01:45:55.454566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:09.898 [2024-10-13 01:45:55.466184] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:09.898 [2024-10-13 01:45:55.466218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.898 [2024-10-13 01:45:55.466238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.156 [2024-10-13 01:45:55.482169] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.156 [2024-10-13 01:45:55.482204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:25574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.156 [2024-10-13 01:45:55.482223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.156 [2024-10-13 01:45:55.498666] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.156 [2024-10-13 01:45:55.498696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.156 [2024-10-13 01:45:55.498719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.156 [2024-10-13 01:45:55.513428] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.156 [2024-10-13 01:45:55.513463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.156 [2024-10-13 01:45:55.513492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.156 [2024-10-13 01:45:55.528110] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.156 [2024-10-13 01:45:55.528144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.156 [2024-10-13 01:45:55.528163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.156 [2024-10-13 01:45:55.542833] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.156 [2024-10-13 01:45:55.542868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11456 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.156 [2024-10-13 01:45:55.542886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.156 [2024-10-13 01:45:55.557527] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.156 [2024-10-13 01:45:55.557558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.156 [2024-10-13 01:45:55.557573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.156 [2024-10-13 01:45:55.572263] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.156 [2024-10-13 01:45:55.572299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:3744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.156 [2024-10-13 01:45:55.572318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.156 [2024-10-13 01:45:55.586947] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.156 [2024-10-13 01:45:55.586981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.156 [2024-10-13 01:45:55.587000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.156 [2024-10-13 01:45:55.601977] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.156 [2024-10-13 01:45:55.602013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.156 [2024-10-13 01:45:55.602032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.156 [2024-10-13 01:45:55.616745] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.156 [2024-10-13 01:45:55.616784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:11720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.156 [2024-10-13 01:45:55.616817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.156 [2024-10-13 01:45:55.631690] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.156 [2024-10-13 01:45:55.631720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:22952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.156 [2024-10-13 01:45:55.631737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.156 [2024-10-13 01:45:55.646442] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.156 [2024-10-13 01:45:55.646486] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:40 nsid:1 lba:2983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.156 [2024-10-13 01:45:55.646508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.156 [2024-10-13 01:45:55.661118] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.156 [2024-10-13 01:45:55.661157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.156 [2024-10-13 01:45:55.661176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.156 [2024-10-13 01:45:55.675637] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.156 [2024-10-13 01:45:55.675666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:19961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.156 [2024-10-13 01:45:55.675697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.156 [2024-10-13 01:45:55.691155] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.156 [2024-10-13 01:45:55.691189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:20607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.156 [2024-10-13 01:45:55.691208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.156 [2024-10-13 01:45:55.707482] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.156 [2024-10-13 01:45:55.707531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.156 [2024-10-13 01:45:55.707548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.156 [2024-10-13 01:45:55.721163] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.157 [2024-10-13 01:45:55.721199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.157 [2024-10-13 01:45:55.721225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.415 [2024-10-13 01:45:55.736209] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.415 [2024-10-13 01:45:55.736244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.415 [2024-10-13 01:45:55.736263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.415 [2024-10-13 01:45:55.751110] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.415 [2024-10-13 
01:45:55.751146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:18792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.415 [2024-10-13 01:45:55.751171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.415 [2024-10-13 01:45:55.765596] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.415 [2024-10-13 01:45:55.765624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.415 [2024-10-13 01:45:55.765640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.415 [2024-10-13 01:45:55.781631] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.415 [2024-10-13 01:45:55.781664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:6862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.415 [2024-10-13 01:45:55.781681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.415 [2024-10-13 01:45:55.794631] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.415 [2024-10-13 01:45:55.794660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:22918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.415 [2024-10-13 01:45:55.794675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.415 [2024-10-13 01:45:55.810964] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.415 [2024-10-13 01:45:55.810999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.415 [2024-10-13 01:45:55.811019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.415 [2024-10-13 01:45:55.825003] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.415 [2024-10-13 01:45:55.825039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.415 [2024-10-13 01:45:55.825058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.415 [2024-10-13 01:45:55.840287] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.415 [2024-10-13 01:45:55.840323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:16633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.415 [2024-10-13 01:45:55.840343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.415 [2024-10-13 01:45:55.855327] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x21872d0) 00:35:10.415 [2024-10-13 01:45:55.855362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:18170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.415 [2024-10-13 01:45:55.855382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.415 [2024-10-13 01:45:55.871609] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.415 [2024-10-13 01:45:55.871640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:21971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.415 [2024-10-13 01:45:55.871657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.415 [2024-10-13 01:45:55.884450] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.415 [2024-10-13 01:45:55.884500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.415 [2024-10-13 01:45:55.884533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.415 [2024-10-13 01:45:55.900022] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.415 [2024-10-13 01:45:55.900058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.415 [2024-10-13 01:45:55.900077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.415 [2024-10-13 01:45:55.915926] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.415 [2024-10-13 01:45:55.915962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:14572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.415 [2024-10-13 01:45:55.915982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.415 [2024-10-13 01:45:55.927942] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.415 [2024-10-13 01:45:55.927977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.415 [2024-10-13 01:45:55.927996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.415 [2024-10-13 01:45:55.944756] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.415 [2024-10-13 01:45:55.944787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.415 [2024-10-13 01:45:55.944820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.415 [2024-10-13 01:45:55.960612] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.415 [2024-10-13 01:45:55.960643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.415 [2024-10-13 01:45:55.960661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.415 [2024-10-13 01:45:55.975278] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.415 [2024-10-13 01:45:55.975315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.415 [2024-10-13 01:45:55.975335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.416 [2024-10-13 01:45:55.990105] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.416 [2024-10-13 01:45:55.990140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:8871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.416 [2024-10-13 01:45:55.990160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.674 [2024-10-13 01:45:56.004842] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.674 [2024-10-13 01:45:56.004872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:19699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.674 [2024-10-13 01:45:56.004906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.674 [2024-10-13 01:45:56.020708] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.674 [2024-10-13 01:45:56.020740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.674 [2024-10-13 01:45:56.020774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.674 [2024-10-13 01:45:56.034483] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.674 [2024-10-13 01:45:56.034532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.674 [2024-10-13 01:45:56.034550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.674 [2024-10-13 01:45:56.048289] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.674 [2024-10-13 01:45:56.048324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:6052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.674 [2024-10-13 01:45:56.048342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:35:10.674 [2024-10-13 01:45:56.064234] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.674 [2024-10-13 01:45:56.064269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:14428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.674 [2024-10-13 01:45:56.064288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.674 [2024-10-13 01:45:56.079659] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.674 [2024-10-13 01:45:56.079705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:18115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.674 [2024-10-13 01:45:56.079722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.674 [2024-10-13 01:45:56.094237] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.674 [2024-10-13 01:45:56.094273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:8134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.674 [2024-10-13 01:45:56.094291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.674 [2024-10-13 01:45:56.109090] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.674 [2024-10-13 01:45:56.109126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.674 [2024-10-13 01:45:56.109145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.674 [2024-10-13 01:45:56.125107] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.674 [2024-10-13 01:45:56.125141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.674 [2024-10-13 01:45:56.125159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.674 17005.00 IOPS, 66.43 MiB/s [2024-10-12T23:45:56.252Z] [2024-10-13 01:45:56.139575] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.674 [2024-10-13 01:45:56.139613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:10923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.674 [2024-10-13 01:45:56.139632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.674 [2024-10-13 01:45:56.155335] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.674 [2024-10-13 01:45:56.155368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:12210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.674 [2024-10-13 01:45:56.155387] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.674 [2024-10-13 01:45:56.171406] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.674 [2024-10-13 01:45:56.171436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.674 [2024-10-13 01:45:56.171454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.674 [2024-10-13 01:45:56.185355] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.674 [2024-10-13 01:45:56.185385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:4907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.674 [2024-10-13 01:45:56.185402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.674 [2024-10-13 01:45:56.197303] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.674 [2024-10-13 01:45:56.197331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.674 [2024-10-13 01:45:56.197346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.674 [2024-10-13 01:45:56.211389] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.674 [2024-10-13 01:45:56.211420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.674 [2024-10-13 01:45:56.211453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.674 [2024-10-13 01:45:56.226819] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.674 [2024-10-13 01:45:56.226850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.674 [2024-10-13 01:45:56.226868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.674 [2024-10-13 01:45:56.239243] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.674 [2024-10-13 01:45:56.239272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.674 [2024-10-13 01:45:56.239288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.933 [2024-10-13 01:45:56.252835] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.933 [2024-10-13 01:45:56.252865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:11442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:10.933 [2024-10-13 01:45:56.252896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.933 [2024-10-13 01:45:56.266667] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.933 [2024-10-13 01:45:56.266713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:17098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.933 [2024-10-13 01:45:56.266729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.933 [2024-10-13 01:45:56.280477] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.933 [2024-10-13 01:45:56.280508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:17284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.933 [2024-10-13 01:45:56.280524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.933 [2024-10-13 01:45:56.296056] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.933 [2024-10-13 01:45:56.296087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.933 [2024-10-13 01:45:56.296104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.933 [2024-10-13 01:45:56.309699] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.933 [2024-10-13 01:45:56.309730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:5938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.933 [2024-10-13 01:45:56.309747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.933 [2024-10-13 01:45:56.323565] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.933 [2024-10-13 01:45:56.323597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.933 [2024-10-13 01:45:56.323615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.933 [2024-10-13 01:45:56.337282] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.933 [2024-10-13 01:45:56.337313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:20972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.933 [2024-10-13 01:45:56.337329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.933 [2024-10-13 01:45:56.351026] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.933 [2024-10-13 01:45:56.351056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 
lba:6782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.933 [2024-10-13 01:45:56.351073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.933 [2024-10-13 01:45:56.364663] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.933 [2024-10-13 01:45:56.364694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.933 [2024-10-13 01:45:56.364711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.933 [2024-10-13 01:45:56.378367] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.933 [2024-10-13 01:45:56.378397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.933 [2024-10-13 01:45:56.378422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.933 [2024-10-13 01:45:56.391992] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.933 [2024-10-13 01:45:56.392023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.933 [2024-10-13 01:45:56.392040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.933 [2024-10-13 01:45:56.405590] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.933 [2024-10-13 01:45:56.405623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.933 [2024-10-13 01:45:56.405640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.933 [2024-10-13 01:45:56.419183] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.933 [2024-10-13 01:45:56.419215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:20668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.933 [2024-10-13 01:45:56.419232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.933 [2024-10-13 01:45:56.433247] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.933 [2024-10-13 01:45:56.433279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.933 [2024-10-13 01:45:56.433295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.933 [2024-10-13 01:45:56.447038] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.933 [2024-10-13 01:45:56.447070] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.933 [2024-10-13 01:45:56.447087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.933 [2024-10-13 01:45:56.460728] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.933 [2024-10-13 01:45:56.460760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:10472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.933 [2024-10-13 01:45:56.460777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.933 [2024-10-13 01:45:56.475449] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.933 [2024-10-13 01:45:56.475500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:12359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.933 [2024-10-13 01:45:56.475517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.933 [2024-10-13 01:45:56.489565] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.933 [2024-10-13 01:45:56.489596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.933 [2024-10-13 01:45:56.489613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:10.933 [2024-10-13 01:45:56.503258] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:10.933 [2024-10-13 01:45:56.503298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:11980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.934 [2024-10-13 01:45:56.503316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.192 [2024-10-13 01:45:56.514927] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:11.192 [2024-10-13 01:45:56.514957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:10610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.192 [2024-10-13 01:45:56.514974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.192 [2024-10-13 01:45:56.528988] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:11.192 [2024-10-13 01:45:56.529018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.192 [2024-10-13 01:45:56.529033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.192 [2024-10-13 01:45:56.543843] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 
00:35:11.192 [2024-10-13 01:45:56.543873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.192 [2024-10-13 01:45:56.543889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.192 [2024-10-13 01:45:56.557658] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:11.192 [2024-10-13 01:45:56.557690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:13721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.192 [2024-10-13 01:45:56.557707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.192 [2024-10-13 01:45:56.572754] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:11.192 [2024-10-13 01:45:56.572800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.192 [2024-10-13 01:45:56.572818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.192 [2024-10-13 01:45:56.585020] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:11.192 [2024-10-13 01:45:56.585051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:9677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.192 [2024-10-13 01:45:56.585068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.192 [2024-10-13 01:45:56.599138] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:11.192 [2024-10-13 01:45:56.599169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:24630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.192 [2024-10-13 01:45:56.599186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.192 [2024-10-13 01:45:56.613700] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:11.192 [2024-10-13 01:45:56.613732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:10926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.192 [2024-10-13 01:45:56.613748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.192 [2024-10-13 01:45:56.627617] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:11.192 [2024-10-13 01:45:56.627648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:19928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.192 [2024-10-13 01:45:56.627664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.192 [2024-10-13 01:45:56.641306] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:11.192 [2024-10-13 01:45:56.641334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:22630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.192 [2024-10-13 01:45:56.641349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.192 [2024-10-13 01:45:56.656811] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:11.192 [2024-10-13 01:45:56.656843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.193 [2024-10-13 01:45:56.656860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.193 [2024-10-13 01:45:56.671215] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:11.193 [2024-10-13 01:45:56.671246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.193 [2024-10-13 01:45:56.671262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.193 [2024-10-13 01:45:56.682834] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:11.193 [2024-10-13 01:45:56.682864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:14268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.193 [2024-10-13 01:45:56.682879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.193 [2024-10-13 01:45:56.698216] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:11.193 [2024-10-13 01:45:56.698245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:20169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.193 [2024-10-13 01:45:56.698261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.193 [2024-10-13 01:45:56.711606] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:11.193 [2024-10-13 01:45:56.711637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:23705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.193 [2024-10-13 01:45:56.711653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.193 [2024-10-13 01:45:56.724541] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:11.193 [2024-10-13 01:45:56.724572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:18269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.193 [2024-10-13 01:45:56.724590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:35:11.193 [2024-10-13 01:45:56.738888] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:11.193 [2024-10-13 01:45:56.738929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:20586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.193 [2024-10-13 01:45:56.738946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.193 [2024-10-13 01:45:56.753644] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:11.193 [2024-10-13 01:45:56.753676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.193 [2024-10-13 01:45:56.753693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.193 [2024-10-13 01:45:56.767144] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:11.193 [2024-10-13 01:45:56.767189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:11404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.193 [2024-10-13 01:45:56.767206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.451 [2024-10-13 01:45:56.781009] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:11.451 [2024-10-13 01:45:56.781040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.451 [2024-10-13 01:45:56.781056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.451 [2024-10-13 01:45:56.794599] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:11.451 [2024-10-13 01:45:56.794630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:20575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.451 [2024-10-13 01:45:56.794646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.451 [2024-10-13 01:45:56.808362] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:11.451 [2024-10-13 01:45:56.808391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.451 [2024-10-13 01:45:56.808407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.451 [2024-10-13 01:45:56.822098] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:11.451 [2024-10-13 01:45:56.822127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:20404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.451 [2024-10-13 01:45:56.822143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.451 [2024-10-13 01:45:56.835697] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:11.451 [2024-10-13 01:45:56.835727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:2839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.451 [2024-10-13 01:45:56.835743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.451 [2024-10-13 01:45:56.849274] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:11.451 [2024-10-13 01:45:56.849303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.451 [2024-10-13 01:45:56.849319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.451 [2024-10-13 01:45:56.863050] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:11.451 [2024-10-13 01:45:56.863080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.451 [2024-10-13 01:45:56.863095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.451 [2024-10-13 01:45:56.877775] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:11.451 [2024-10-13 01:45:56.877807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.451 [2024-10-13 01:45:56.877839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.451 [2024-10-13 01:45:56.891465] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:11.451 [2024-10-13 01:45:56.891515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.451 [2024-10-13 01:45:56.891536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.452 [2024-10-13 01:45:56.905109] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:11.452 [2024-10-13 01:45:56.905138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.452 [2024-10-13 01:45:56.905155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.452 [2024-10-13 01:45:56.918841] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:11.452 [2024-10-13 01:45:56.918870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.452 [2024-10-13 01:45:56.918885] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.452 [2024-10-13 01:45:56.932357] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:11.452 [2024-10-13 01:45:56.932387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.452 [2024-10-13 01:45:56.932402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.452 [2024-10-13 01:45:56.946039] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:11.452 [2024-10-13 01:45:56.946068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.452 [2024-10-13 01:45:56.946083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.452 [2024-10-13 01:45:56.959702] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:11.452 [2024-10-13 01:45:56.959732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.452 [2024-10-13 01:45:56.959748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.452 [2024-10-13 01:45:56.974659] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:11.452 [2024-10-13 01:45:56.974689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:11033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.452 [2024-10-13 01:45:56.974715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.452 [2024-10-13 01:45:56.987948] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:11.452 [2024-10-13 01:45:56.987977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.452 [2024-10-13 01:45:56.987993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.452 [2024-10-13 01:45:57.001357] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:11.452 [2024-10-13 01:45:57.001389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:18838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.452 [2024-10-13 01:45:57.001405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.452 [2024-10-13 01:45:57.015851] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:11.452 [2024-10-13 01:45:57.015882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:8993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:11.452 [2024-10-13 01:45:57.015899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.452 [2024-10-13 01:45:57.027140] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:11.452 [2024-10-13 01:45:57.027168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:23649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.452 [2024-10-13 01:45:57.027184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.710 [2024-10-13 01:45:57.040627] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:11.710 [2024-10-13 01:45:57.040657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.710 [2024-10-13 01:45:57.040673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.710 [2024-10-13 01:45:57.054210] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:11.710 [2024-10-13 01:45:57.054239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.710 [2024-10-13 01:45:57.054254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.710 [2024-10-13 01:45:57.067584] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:11.710 [2024-10-13 01:45:57.067614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.710 [2024-10-13 01:45:57.067631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.710 [2024-10-13 01:45:57.082041] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:11.710 [2024-10-13 01:45:57.082070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:10903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.710 [2024-10-13 01:45:57.082086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.710 [2024-10-13 01:45:57.095496] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:11.710 [2024-10-13 01:45:57.095532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.710 [2024-10-13 01:45:57.095548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.710 [2024-10-13 01:45:57.108935] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:11.710 [2024-10-13 01:45:57.108964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:991 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.710 [2024-10-13 01:45:57.108980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.710 [2024-10-13 01:45:57.122370] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:11.710 [2024-10-13 01:45:57.122398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.710 [2024-10-13 01:45:57.122414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.710 [2024-10-13 01:45:57.135878] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21872d0) 00:35:11.710 [2024-10-13 01:45:57.135906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.710 [2024-10-13 01:45:57.135922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:11.710 17732.00 IOPS, 69.27 MiB/s 00:35:11.710 Latency(us) 00:35:11.710 [2024-10-12T23:45:57.288Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:11.710 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:11.710 nvme0n1 : 2.00 17755.03 69.36 0.00 0.00 7200.60 3592.34 20194.80 00:35:11.710 [2024-10-12T23:45:57.288Z] =================================================================================================================== 00:35:11.710 [2024-10-12T23:45:57.288Z] Total : 17755.03 69.36 0.00 0.00 7200.60 3592.34 20194.80 00:35:11.710 { 00:35:11.710 "results": [ 00:35:11.710 { 00:35:11.710 "job": "nvme0n1", 00:35:11.710 "core_mask": "0x2", 00:35:11.710 "workload": "randread", 00:35:11.710 "status": "finished", 00:35:11.710 "queue_depth": 128, 00:35:11.710 "io_size": 4096, 00:35:11.710 "runtime": 2.004615, 00:35:11.710 "iops": 17755.03026765738, 00:35:11.710 "mibps": 69.35558698303664, 00:35:11.710 "io_failed": 0, 00:35:11.710 "io_timeout": 0, 00:35:11.710 "avg_latency_us": 7200.604532354338, 00:35:11.710 "min_latency_us": 3592.343703703704, 00:35:11.710 "max_latency_us": 20194.79703703704 00:35:11.710 } 00:35:11.710 ], 00:35:11.710 "core_count": 1 00:35:11.710 } 00:35:11.710 01:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:11.710 01:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:11.710 01:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:11.710 | .driver_specific 00:35:11.710 | .nvme_error 00:35:11.710 | .status_code 00:35:11.710 | .command_transient_transport_error' 00:35:11.710 01:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:11.968 01:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 139 > 0 )) 00:35:11.968 01:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1755502 00:35:11.968 01:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@950 -- # '[' -z 1755502 ']' 00:35:11.968 01:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1755502 00:35:11.968 01:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:35:11.968 01:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:11.968 01:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1755502 00:35:11.968 01:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:11.968 01:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:11.968 01:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1755502' 00:35:11.968 killing process with pid 1755502 00:35:11.968 01:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1755502 00:35:11.968 Received shutdown signal, test time was about 2.000000 seconds 00:35:11.968 00:35:11.968 Latency(us) 00:35:11.968 [2024-10-12T23:45:57.546Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:11.968 [2024-10-12T23:45:57.546Z] =================================================================================================================== 00:35:11.968 [2024-10-12T23:45:57.546Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:11.968 01:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1755502 00:35:12.226 01:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:35:12.226 01:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:12.226 01:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:12.226 01:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:35:12.226 01:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:35:12.226 01:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1755904 00:35:12.226 01:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:35:12.226 01:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1755904 /var/tmp/bperf.sock 00:35:12.226 01:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1755904 ']' 00:35:12.226 01:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:12.226 01:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:12.226 01:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:12.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
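The "(( 139 > 0 ))" check a few lines above is the actual pass/fail criterion for the phase that just finished: the digest test only passes if the injected CRC corruption shows up as COMMAND TRANSIENT TRANSPORT ERROR completions in the initiator's NVMe error statistics. A minimal sketch of that query, using only the RPC call and jq filter visible in the trace (socket path, bdev name, and the rpc.py location are simply the values this run happens to use), could look like:

  # Sketch, not the verbatim digest.sh code: read the transient-transport-error counter from bdevperf.
  # Assumes bdevperf serves RPCs on /var/tmp/bperf.sock and exposes the attached namespace as nvme0n1.
  errcount=$(./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # The run above recorded 139 such completions; any non-zero value means the corrupted data
  # digests were detected and reported as transient transport errors rather than passed through silently.
  (( errcount > 0 ))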
00:35:12.226 01:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:12.226 01:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:12.226 [2024-10-13 01:45:57.718318] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:35:12.226 [2024-10-13 01:45:57.718415] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1755904 ] 00:35:12.226 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:12.226 Zero copy mechanism will not be used. 00:35:12.226 [2024-10-13 01:45:57.781433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:12.484 [2024-10-13 01:45:57.830947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:12.484 01:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:12.484 01:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:35:12.484 01:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:12.484 01:45:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:12.741 01:45:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:12.741 01:45:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.741 01:45:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:12.741 01:45:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.741 01:45:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:12.741 01:45:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:13.308 nvme0n1 00:35:13.308 01:45:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:13.308 01:45:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:13.308 01:45:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:13.308 01:45:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:13.308 01:45:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:13.308 01:45:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 
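Everything needed to set up one error-injection pass is in the trace above; stripped of the xtrace noise it reduces to a handful of RPCs. A hedged sketch follows (flags and addresses are copied from the trace, paths are shortened relative to the SPDK tree, and the assumption is that rpc_cmd in the traced script talks to the default RPC socket of the application under test while the bdev_nvme calls go to bdevperf's socket):

  # Sketch of the traced setup sequence, not the verbatim digest.sh code.
  BPERF_RPC="./scripts/rpc.py -s /var/tmp/bperf.sock"   # bdevperf's RPC socket, as seen in the trace
  # Keep per-status-code NVMe error statistics and let failed I/Os be retried instead of failing the job.
  $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Error injection stays disabled while the controller attaches (rpc_cmd uses the default socket here).
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  # Attach the NVMe/TCP controller with data digest enabled; the namespace shows up as nvme0n1.
  $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Start corrupting crc32c results (arguments taken verbatim from the trace), then drive the workload.
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

With that in place, the storm of NOTICE/ERROR lines that follows is the expected outcome: each corrupted digest is detected, counted, and retried.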
00:35:13.308 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:13.308 Zero copy mechanism will not be used. 00:35:13.308 Running I/O for 2 seconds... 00:35:13.308 [2024-10-13 01:45:58.735463] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.308 [2024-10-13 01:45:58.735550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.308 [2024-10-13 01:45:58.735573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.308 [2024-10-13 01:45:58.742368] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.308 [2024-10-13 01:45:58.742406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.308 [2024-10-13 01:45:58.742429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.308 [2024-10-13 01:45:58.748412] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.308 [2024-10-13 01:45:58.748450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.308 [2024-10-13 01:45:58.748482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.308 [2024-10-13 01:45:58.754189] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.308 [2024-10-13 01:45:58.754227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.308 [2024-10-13 01:45:58.754262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.308 [2024-10-13 01:45:58.760038] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.309 [2024-10-13 01:45:58.760075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.309 [2024-10-13 01:45:58.760095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.309 [2024-10-13 01:45:58.765881] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.309 [2024-10-13 01:45:58.765917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.309 [2024-10-13 01:45:58.765941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.309 [2024-10-13 01:45:58.771906] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.309 [2024-10-13 01:45:58.771943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.309 [2024-10-13 01:45:58.771963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.309 [2024-10-13 01:45:58.778574] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.309 [2024-10-13 01:45:58.778609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.309 [2024-10-13 01:45:58.778638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.309 [2024-10-13 01:45:58.783601] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.309 [2024-10-13 01:45:58.783632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.309 [2024-10-13 01:45:58.783650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.309 [2024-10-13 01:45:58.790226] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.309 [2024-10-13 01:45:58.790265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.309 [2024-10-13 01:45:58.790300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.309 [2024-10-13 01:45:58.795192] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.309 [2024-10-13 01:45:58.795228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.309 [2024-10-13 01:45:58.795248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.309 [2024-10-13 01:45:58.801811] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.309 [2024-10-13 01:45:58.801861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.309 [2024-10-13 01:45:58.801881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.309 [2024-10-13 01:45:58.806830] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.309 [2024-10-13 01:45:58.806876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.309 [2024-10-13 01:45:58.806897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.309 [2024-10-13 01:45:58.812644] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.309 [2024-10-13 01:45:58.812674] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.309 [2024-10-13 01:45:58.812692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.309 [2024-10-13 01:45:58.818271] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.309 [2024-10-13 01:45:58.818308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.309 [2024-10-13 01:45:58.818328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.309 [2024-10-13 01:45:58.823948] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.309 [2024-10-13 01:45:58.823985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.309 [2024-10-13 01:45:58.824011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.309 [2024-10-13 01:45:58.829750] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.309 [2024-10-13 01:45:58.829801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.309 [2024-10-13 01:45:58.829821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.309 [2024-10-13 01:45:58.835369] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.309 [2024-10-13 01:45:58.835406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.309 [2024-10-13 01:45:58.835426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.309 [2024-10-13 01:45:58.841045] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.309 [2024-10-13 01:45:58.841081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.309 [2024-10-13 01:45:58.841101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.309 [2024-10-13 01:45:58.846825] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.309 [2024-10-13 01:45:58.846861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.309 [2024-10-13 01:45:58.846888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.309 [2024-10-13 01:45:58.853438] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.309 [2024-10-13 01:45:58.853493] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.309 [2024-10-13 01:45:58.853516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.309 [2024-10-13 01:45:58.858352] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.309 [2024-10-13 01:45:58.858390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.309 [2024-10-13 01:45:58.858412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.309 [2024-10-13 01:45:58.864008] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.309 [2024-10-13 01:45:58.864053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.309 [2024-10-13 01:45:58.864073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.309 [2024-10-13 01:45:58.869820] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.309 [2024-10-13 01:45:58.869860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.309 [2024-10-13 01:45:58.869881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.309 [2024-10-13 01:45:58.875629] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.309 [2024-10-13 01:45:58.875662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.309 [2024-10-13 01:45:58.875679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.309 [2024-10-13 01:45:58.881268] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.309 [2024-10-13 01:45:58.881317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.309 [2024-10-13 01:45:58.881337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.606 [2024-10-13 01:45:58.887073] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.606 [2024-10-13 01:45:58.887113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.606 [2024-10-13 01:45:58.887136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.606 [2024-10-13 01:45:58.893012] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x835830) 00:35:13.606 [2024-10-13 01:45:58.893050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.606 [2024-10-13 01:45:58.893077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.606 [2024-10-13 01:45:58.898758] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.606 [2024-10-13 01:45:58.898797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.606 [2024-10-13 01:45:58.898831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.606 [2024-10-13 01:45:58.905304] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.606 [2024-10-13 01:45:58.905362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.606 [2024-10-13 01:45:58.905405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.606 [2024-10-13 01:45:58.910436] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.606 [2024-10-13 01:45:58.910480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.606 [2024-10-13 01:45:58.910502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.606 [2024-10-13 01:45:58.916299] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.606 [2024-10-13 01:45:58.916337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.606 [2024-10-13 01:45:58.916357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.606 [2024-10-13 01:45:58.922798] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.606 [2024-10-13 01:45:58.922857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.606 [2024-10-13 01:45:58.922892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.606 [2024-10-13 01:45:58.927911] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.606 [2024-10-13 01:45:58.927949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.606 [2024-10-13 01:45:58.927969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.606 [2024-10-13 01:45:58.933885] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.606 [2024-10-13 01:45:58.933932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.606 [2024-10-13 01:45:58.933953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.606 [2024-10-13 01:45:58.939772] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.606 [2024-10-13 01:45:58.939820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.606 [2024-10-13 01:45:58.939852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.606 [2024-10-13 01:45:58.945726] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.606 [2024-10-13 01:45:58.945773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.606 [2024-10-13 01:45:58.945795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.606 [2024-10-13 01:45:58.951497] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.606 [2024-10-13 01:45:58.951545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.606 [2024-10-13 01:45:58.951564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.606 [2024-10-13 01:45:58.957190] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.606 [2024-10-13 01:45:58.957234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.606 [2024-10-13 01:45:58.957258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.606 [2024-10-13 01:45:58.963014] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.606 [2024-10-13 01:45:58.963062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.606 [2024-10-13 01:45:58.963082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.606 [2024-10-13 01:45:58.968880] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.606 [2024-10-13 01:45:58.968917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.606 [2024-10-13 01:45:58.968938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:35:13.606 [2024-10-13 01:45:58.974858] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.606 [2024-10-13 01:45:58.974894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.606 [2024-10-13 01:45:58.974916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.606 [2024-10-13 01:45:58.980675] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.606 [2024-10-13 01:45:58.980707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.606 [2024-10-13 01:45:58.980724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.606 [2024-10-13 01:45:58.986547] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.606 [2024-10-13 01:45:58.986579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.606 [2024-10-13 01:45:58.986596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.606 [2024-10-13 01:45:58.992460] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.606 [2024-10-13 01:45:58.992519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.606 [2024-10-13 01:45:58.992539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.606 [2024-10-13 01:45:58.998366] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.606 [2024-10-13 01:45:58.998404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.606 [2024-10-13 01:45:58.998424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.606 [2024-10-13 01:45:59.004179] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.606 [2024-10-13 01:45:59.004216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.606 [2024-10-13 01:45:59.004237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.606 [2024-10-13 01:45:59.009862] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.606 [2024-10-13 01:45:59.009899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.606 [2024-10-13 01:45:59.009925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.606 [2024-10-13 01:45:59.015784] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.606 [2024-10-13 01:45:59.015817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.606 [2024-10-13 01:45:59.015858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.606 [2024-10-13 01:45:59.021796] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.606 [2024-10-13 01:45:59.021833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.606 [2024-10-13 01:45:59.021854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.606 [2024-10-13 01:45:59.027643] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.606 [2024-10-13 01:45:59.027677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.606 [2024-10-13 01:45:59.027696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.606 [2024-10-13 01:45:59.033403] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.606 [2024-10-13 01:45:59.033440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.606 [2024-10-13 01:45:59.033478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.606 [2024-10-13 01:45:59.039216] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.606 [2024-10-13 01:45:59.039253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.606 [2024-10-13 01:45:59.039274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.606 [2024-10-13 01:45:59.045703] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.606 [2024-10-13 01:45:59.045743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.607 [2024-10-13 01:45:59.045781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.607 [2024-10-13 01:45:59.051098] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.607 [2024-10-13 01:45:59.051137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.607 [2024-10-13 01:45:59.051176] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.607 [2024-10-13 01:45:59.056286] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.607 [2024-10-13 01:45:59.056330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.607 [2024-10-13 01:45:59.056353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.607 [2024-10-13 01:45:59.060224] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.607 [2024-10-13 01:45:59.060261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.607 [2024-10-13 01:45:59.060281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.607 [2024-10-13 01:45:59.065564] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.607 [2024-10-13 01:45:59.065598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.607 [2024-10-13 01:45:59.065622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.607 [2024-10-13 01:45:59.071467] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.607 [2024-10-13 01:45:59.071529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.607 [2024-10-13 01:45:59.071548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.607 [2024-10-13 01:45:59.077213] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.607 [2024-10-13 01:45:59.077251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.607 [2024-10-13 01:45:59.077271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.607 [2024-10-13 01:45:59.083008] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.607 [2024-10-13 01:45:59.083045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.607 [2024-10-13 01:45:59.083065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.607 [2024-10-13 01:45:59.088878] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.607 [2024-10-13 01:45:59.088916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.607 
[2024-10-13 01:45:59.088936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.607 [2024-10-13 01:45:59.094722] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.607 [2024-10-13 01:45:59.094764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.607 [2024-10-13 01:45:59.094797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.607 [2024-10-13 01:45:59.100683] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.607 [2024-10-13 01:45:59.100730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.607 [2024-10-13 01:45:59.100749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.607 [2024-10-13 01:45:59.106650] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.607 [2024-10-13 01:45:59.106682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.607 [2024-10-13 01:45:59.106712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.607 [2024-10-13 01:45:59.112718] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.607 [2024-10-13 01:45:59.112761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.607 [2024-10-13 01:45:59.112778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.607 [2024-10-13 01:45:59.118722] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.607 [2024-10-13 01:45:59.118772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.607 [2024-10-13 01:45:59.118792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.607 [2024-10-13 01:45:59.124708] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.607 [2024-10-13 01:45:59.124771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.607 [2024-10-13 01:45:59.124796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.607 [2024-10-13 01:45:59.130674] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.607 [2024-10-13 01:45:59.130709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:35:13.607 [2024-10-13 01:45:59.130727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.607 [2024-10-13 01:45:59.136461] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.607 [2024-10-13 01:45:59.136520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.607 [2024-10-13 01:45:59.136538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.607 [2024-10-13 01:45:59.142259] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.607 [2024-10-13 01:45:59.142296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.607 [2024-10-13 01:45:59.142317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.607 [2024-10-13 01:45:59.148039] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.607 [2024-10-13 01:45:59.148076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.607 [2024-10-13 01:45:59.148101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.607 [2024-10-13 01:45:59.153940] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.607 [2024-10-13 01:45:59.153978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.607 [2024-10-13 01:45:59.154006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.607 [2024-10-13 01:45:59.159721] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.607 [2024-10-13 01:45:59.159766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.607 [2024-10-13 01:45:59.159800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.890 [2024-10-13 01:45:59.165668] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.890 [2024-10-13 01:45:59.165702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.890 [2024-10-13 01:45:59.165722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.890 [2024-10-13 01:45:59.171565] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.890 [2024-10-13 01:45:59.171601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 
nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.890 [2024-10-13 01:45:59.171619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.890 [2024-10-13 01:45:59.177537] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.890 [2024-10-13 01:45:59.177587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.890 [2024-10-13 01:45:59.177604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.890 [2024-10-13 01:45:59.182957] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.890 [2024-10-13 01:45:59.182996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.890 [2024-10-13 01:45:59.183025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.890 [2024-10-13 01:45:59.188771] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.890 [2024-10-13 01:45:59.188805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.890 [2024-10-13 01:45:59.188824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.890 [2024-10-13 01:45:59.194250] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.890 [2024-10-13 01:45:59.194289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.890 [2024-10-13 01:45:59.194315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.890 [2024-10-13 01:45:59.200137] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.890 [2024-10-13 01:45:59.200174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.890 [2024-10-13 01:45:59.200198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.891 [2024-10-13 01:45:59.206253] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.891 [2024-10-13 01:45:59.206297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.891 [2024-10-13 01:45:59.206317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.891 [2024-10-13 01:45:59.212165] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.891 [2024-10-13 01:45:59.212203] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.891 [2024-10-13 01:45:59.212223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.891 [2024-10-13 01:45:59.217973] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.891 [2024-10-13 01:45:59.218010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.891 [2024-10-13 01:45:59.218029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.891 [2024-10-13 01:45:59.223839] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.891 [2024-10-13 01:45:59.223875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.891 [2024-10-13 01:45:59.223896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.891 [2024-10-13 01:45:59.229640] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.891 [2024-10-13 01:45:59.229688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.891 [2024-10-13 01:45:59.229711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.891 [2024-10-13 01:45:59.235555] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.891 [2024-10-13 01:45:59.235603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.891 [2024-10-13 01:45:59.235621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.891 [2024-10-13 01:45:59.241464] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.891 [2024-10-13 01:45:59.241528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.891 [2024-10-13 01:45:59.241547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.891 [2024-10-13 01:45:59.247483] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.891 [2024-10-13 01:45:59.247529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.891 [2024-10-13 01:45:59.247549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.891 [2024-10-13 01:45:59.253289] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.891 
[2024-10-13 01:45:59.253331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.891 [2024-10-13 01:45:59.253357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.891 [2024-10-13 01:45:59.259411] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.891 [2024-10-13 01:45:59.259448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.891 [2024-10-13 01:45:59.259467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.891 [2024-10-13 01:45:59.265273] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.891 [2024-10-13 01:45:59.265313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.891 [2024-10-13 01:45:59.265333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.891 [2024-10-13 01:45:59.271105] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.891 [2024-10-13 01:45:59.271142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.891 [2024-10-13 01:45:59.271162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.891 [2024-10-13 01:45:59.276842] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.891 [2024-10-13 01:45:59.276879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.891 [2024-10-13 01:45:59.276898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.891 [2024-10-13 01:45:59.282623] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.891 [2024-10-13 01:45:59.282655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.891 [2024-10-13 01:45:59.282673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.891 [2024-10-13 01:45:59.288455] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.891 [2024-10-13 01:45:59.288505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.891 [2024-10-13 01:45:59.288548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.891 [2024-10-13 01:45:59.294330] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x835830) 00:35:13.891 [2024-10-13 01:45:59.294366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.891 [2024-10-13 01:45:59.294387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.891 [2024-10-13 01:45:59.300128] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.891 [2024-10-13 01:45:59.300164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.891 [2024-10-13 01:45:59.300184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.891 [2024-10-13 01:45:59.306284] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.891 [2024-10-13 01:45:59.306321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.891 [2024-10-13 01:45:59.306350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.891 [2024-10-13 01:45:59.312267] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.891 [2024-10-13 01:45:59.312304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.891 [2024-10-13 01:45:59.312324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.891 [2024-10-13 01:45:59.318077] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.891 [2024-10-13 01:45:59.318114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.891 [2024-10-13 01:45:59.318134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.891 [2024-10-13 01:45:59.323974] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.891 [2024-10-13 01:45:59.324011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.891 [2024-10-13 01:45:59.324031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.891 [2024-10-13 01:45:59.329849] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.891 [2024-10-13 01:45:59.329886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.891 [2024-10-13 01:45:59.329907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.891 [2024-10-13 01:45:59.336030] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.891 [2024-10-13 01:45:59.336067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.891 [2024-10-13 01:45:59.336087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.891 [2024-10-13 01:45:59.342261] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.891 [2024-10-13 01:45:59.342298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.891 [2024-10-13 01:45:59.342318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.891 [2024-10-13 01:45:59.348023] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.891 [2024-10-13 01:45:59.348065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.891 [2024-10-13 01:45:59.348086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.891 [2024-10-13 01:45:59.353743] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.891 [2024-10-13 01:45:59.353791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.891 [2024-10-13 01:45:59.353810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.891 [2024-10-13 01:45:59.359869] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.891 [2024-10-13 01:45:59.359912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.891 [2024-10-13 01:45:59.359933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.891 [2024-10-13 01:45:59.364165] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.891 [2024-10-13 01:45:59.364213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.891 [2024-10-13 01:45:59.364245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.891 [2024-10-13 01:45:59.369149] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.891 [2024-10-13 01:45:59.369187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.892 [2024-10-13 01:45:59.369207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:35:13.892 [2024-10-13 01:45:59.374863] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.892 [2024-10-13 01:45:59.374900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.892 [2024-10-13 01:45:59.374920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.892 [2024-10-13 01:45:59.380733] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.892 [2024-10-13 01:45:59.380763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.892 [2024-10-13 01:45:59.380796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.892 [2024-10-13 01:45:59.386750] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.892 [2024-10-13 01:45:59.386800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.892 [2024-10-13 01:45:59.386825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:13.892 [2024-10-13 01:45:59.392648] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.892 [2024-10-13 01:45:59.392680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.892 [2024-10-13 01:45:59.392698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:13.892 [2024-10-13 01:45:59.398596] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.892 [2024-10-13 01:45:59.398628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.892 [2024-10-13 01:45:59.398659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:13.892 [2024-10-13 01:45:59.404526] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.892 [2024-10-13 01:45:59.404557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.892 [2024-10-13 01:45:59.404575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:13.892 [2024-10-13 01:45:59.410502] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:13.892 [2024-10-13 01:45:59.410550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:13.892 [2024-10-13 01:45:59.410567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:13.892 [2024-10-13 01:45:59.416356] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830)
00:35:13.892 [2024-10-13 01:45:59.416393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:13.892 [2024-10-13 01:45:59.416413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[the same three-record pattern (data digest error on tqpair=(0x835830) from nvme_tcp.c:1470, READ command notice from nvme_qpair.c:243, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion from nvme_qpair.c:474) repeats for well over a hundred further READ commands on qid:1, cids 0-15, lba 32-25472, all len:32, between 01:45:59.416356 and 01:46:00.285417]
00:35:14.412 5220.00 IOPS, 652.50 MiB/s [2024-10-12T23:45:59.990Z]
00:35:14.933 [2024-10-13 01:46:00.285360] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830)
00:35:14.933 [2024-10-13 01:46:00.285397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:14.933 [2024-10-13 01:46:00.285417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR
(00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.933 [2024-10-13 01:46:00.291202] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:14.933 [2024-10-13 01:46:00.291240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.933 [2024-10-13 01:46:00.291267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.933 [2024-10-13 01:46:00.297680] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:14.933 [2024-10-13 01:46:00.297713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.933 [2024-10-13 01:46:00.297730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.933 [2024-10-13 01:46:00.303744] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:14.933 [2024-10-13 01:46:00.303797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.933 [2024-10-13 01:46:00.303815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.933 [2024-10-13 01:46:00.309727] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:14.933 [2024-10-13 01:46:00.309776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.933 [2024-10-13 01:46:00.309794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.933 [2024-10-13 01:46:00.315717] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:14.933 [2024-10-13 01:46:00.315751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.933 [2024-10-13 01:46:00.315784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.933 [2024-10-13 01:46:00.321664] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:14.933 [2024-10-13 01:46:00.321698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.933 [2024-10-13 01:46:00.321716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.933 [2024-10-13 01:46:00.327478] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:14.933 [2024-10-13 01:46:00.327516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.933 [2024-10-13 01:46:00.327550] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.934 [2024-10-13 01:46:00.333258] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:14.934 [2024-10-13 01:46:00.333295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.934 [2024-10-13 01:46:00.333315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.934 [2024-10-13 01:46:00.339029] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:14.934 [2024-10-13 01:46:00.339072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.934 [2024-10-13 01:46:00.339092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.934 [2024-10-13 01:46:00.344737] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:14.934 [2024-10-13 01:46:00.344794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.934 [2024-10-13 01:46:00.344812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.934 [2024-10-13 01:46:00.350725] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:14.934 [2024-10-13 01:46:00.350759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.934 [2024-10-13 01:46:00.350778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.934 [2024-10-13 01:46:00.356481] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:14.934 [2024-10-13 01:46:00.356544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.934 [2024-10-13 01:46:00.356564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.934 [2024-10-13 01:46:00.362359] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:14.934 [2024-10-13 01:46:00.362397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.934 [2024-10-13 01:46:00.362416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.934 [2024-10-13 01:46:00.368214] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:14.934 [2024-10-13 01:46:00.368252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.934 [2024-10-13 01:46:00.368273] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.934 [2024-10-13 01:46:00.374123] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:14.934 [2024-10-13 01:46:00.374160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.934 [2024-10-13 01:46:00.374181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.934 [2024-10-13 01:46:00.379973] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:14.934 [2024-10-13 01:46:00.380011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.934 [2024-10-13 01:46:00.380032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.934 [2024-10-13 01:46:00.385896] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:14.934 [2024-10-13 01:46:00.385933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.934 [2024-10-13 01:46:00.385954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.934 [2024-10-13 01:46:00.391695] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:14.934 [2024-10-13 01:46:00.391728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.934 [2024-10-13 01:46:00.391746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.934 [2024-10-13 01:46:00.397505] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:14.934 [2024-10-13 01:46:00.397553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.934 [2024-10-13 01:46:00.397570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.934 [2024-10-13 01:46:00.403532] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:14.934 [2024-10-13 01:46:00.403566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.934 [2024-10-13 01:46:00.403583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.934 [2024-10-13 01:46:00.409363] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:14.934 [2024-10-13 01:46:00.409401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:35:14.934 [2024-10-13 01:46:00.409421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.934 [2024-10-13 01:46:00.415093] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:14.934 [2024-10-13 01:46:00.415130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.934 [2024-10-13 01:46:00.415150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.934 [2024-10-13 01:46:00.420917] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:14.934 [2024-10-13 01:46:00.420955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.934 [2024-10-13 01:46:00.420975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.934 [2024-10-13 01:46:00.426781] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:14.934 [2024-10-13 01:46:00.426818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.934 [2024-10-13 01:46:00.426838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.934 [2024-10-13 01:46:00.432624] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:14.934 [2024-10-13 01:46:00.432674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.934 [2024-10-13 01:46:00.432691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.934 [2024-10-13 01:46:00.438427] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:14.934 [2024-10-13 01:46:00.438464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.934 [2024-10-13 01:46:00.438495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.934 [2024-10-13 01:46:00.444261] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:14.934 [2024-10-13 01:46:00.444299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.934 [2024-10-13 01:46:00.444325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.934 [2024-10-13 01:46:00.450083] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:14.934 [2024-10-13 01:46:00.450120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9536 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.934 [2024-10-13 01:46:00.450141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.934 [2024-10-13 01:46:00.455855] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:14.934 [2024-10-13 01:46:00.455892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.934 [2024-10-13 01:46:00.455912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.934 [2024-10-13 01:46:00.461634] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:14.934 [2024-10-13 01:46:00.461668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.934 [2024-10-13 01:46:00.461687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.934 [2024-10-13 01:46:00.467523] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:14.934 [2024-10-13 01:46:00.467557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.934 [2024-10-13 01:46:00.467575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.934 [2024-10-13 01:46:00.473162] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:14.934 [2024-10-13 01:46:00.473201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.934 [2024-10-13 01:46:00.473220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.934 [2024-10-13 01:46:00.478938] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:14.934 [2024-10-13 01:46:00.478976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.934 [2024-10-13 01:46:00.478996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.934 [2024-10-13 01:46:00.484820] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:14.934 [2024-10-13 01:46:00.484873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.934 [2024-10-13 01:46:00.484893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:14.934 [2024-10-13 01:46:00.490627] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:14.934 [2024-10-13 01:46:00.490661] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.934 [2024-10-13 01:46:00.490680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:14.934 [2024-10-13 01:46:00.496492] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:14.934 [2024-10-13 01:46:00.496540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.934 [2024-10-13 01:46:00.496558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:14.934 [2024-10-13 01:46:00.502324] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:14.934 [2024-10-13 01:46:00.502359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.934 [2024-10-13 01:46:00.502379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.934 [2024-10-13 01:46:00.508291] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:14.934 [2024-10-13 01:46:00.508329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:14.934 [2024-10-13 01:46:00.508349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.193 [2024-10-13 01:46:00.514156] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:15.193 [2024-10-13 01:46:00.514193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.193 [2024-10-13 01:46:00.514213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.193 [2024-10-13 01:46:00.520076] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:15.193 [2024-10-13 01:46:00.520114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.193 [2024-10-13 01:46:00.520134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.193 [2024-10-13 01:46:00.525896] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:15.193 [2024-10-13 01:46:00.525933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.193 [2024-10-13 01:46:00.525954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.193 [2024-10-13 01:46:00.531768] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:15.193 [2024-10-13 01:46:00.531803] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.193 [2024-10-13 01:46:00.531837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.193 [2024-10-13 01:46:00.537597] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:15.193 [2024-10-13 01:46:00.537645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.193 [2024-10-13 01:46:00.537662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.193 [2024-10-13 01:46:00.543393] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:15.193 [2024-10-13 01:46:00.543432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.193 [2024-10-13 01:46:00.543458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.193 [2024-10-13 01:46:00.549031] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:15.193 [2024-10-13 01:46:00.549069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.193 [2024-10-13 01:46:00.549089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.193 [2024-10-13 01:46:00.554783] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:15.193 [2024-10-13 01:46:00.554833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.193 [2024-10-13 01:46:00.554854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.193 [2024-10-13 01:46:00.560660] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:15.193 [2024-10-13 01:46:00.560693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.193 [2024-10-13 01:46:00.560710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.194 [2024-10-13 01:46:00.566545] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:15.194 [2024-10-13 01:46:00.566579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.194 [2024-10-13 01:46:00.566597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.194 [2024-10-13 01:46:00.572407] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x835830) 00:35:15.194 [2024-10-13 01:46:00.572445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.194 [2024-10-13 01:46:00.572465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.194 [2024-10-13 01:46:00.578361] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:15.194 [2024-10-13 01:46:00.578399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.194 [2024-10-13 01:46:00.578419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.194 [2024-10-13 01:46:00.584147] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:15.194 [2024-10-13 01:46:00.584185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.194 [2024-10-13 01:46:00.584205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.194 [2024-10-13 01:46:00.589930] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:15.194 [2024-10-13 01:46:00.589966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.194 [2024-10-13 01:46:00.589985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.194 [2024-10-13 01:46:00.595736] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:15.194 [2024-10-13 01:46:00.595794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.194 [2024-10-13 01:46:00.595828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.194 [2024-10-13 01:46:00.601625] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:15.194 [2024-10-13 01:46:00.601673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.194 [2024-10-13 01:46:00.601690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.194 [2024-10-13 01:46:00.607516] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:15.194 [2024-10-13 01:46:00.607566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.194 [2024-10-13 01:46:00.607584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.194 [2024-10-13 01:46:00.613388] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:15.194 [2024-10-13 01:46:00.613424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.194 [2024-10-13 01:46:00.613444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.194 [2024-10-13 01:46:00.619182] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:15.194 [2024-10-13 01:46:00.619219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.194 [2024-10-13 01:46:00.619240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.194 [2024-10-13 01:46:00.624885] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:15.194 [2024-10-13 01:46:00.624922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.194 [2024-10-13 01:46:00.624941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.194 [2024-10-13 01:46:00.630700] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:15.194 [2024-10-13 01:46:00.630745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.194 [2024-10-13 01:46:00.630762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.194 [2024-10-13 01:46:00.636581] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:15.194 [2024-10-13 01:46:00.636628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.194 [2024-10-13 01:46:00.636645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.194 [2024-10-13 01:46:00.642561] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:15.194 [2024-10-13 01:46:00.642609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.194 [2024-10-13 01:46:00.642627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.194 [2024-10-13 01:46:00.648436] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:15.194 [2024-10-13 01:46:00.648483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.194 [2024-10-13 01:46:00.648520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:35:15.194 [2024-10-13 01:46:00.654277] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:15.194 [2024-10-13 01:46:00.654315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.194 [2024-10-13 01:46:00.654335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.194 [2024-10-13 01:46:00.660100] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:15.194 [2024-10-13 01:46:00.660137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.194 [2024-10-13 01:46:00.660157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.194 [2024-10-13 01:46:00.665863] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:15.194 [2024-10-13 01:46:00.665900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.194 [2024-10-13 01:46:00.665920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.194 [2024-10-13 01:46:00.671676] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:15.194 [2024-10-13 01:46:00.671709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.194 [2024-10-13 01:46:00.671727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.194 [2024-10-13 01:46:00.677551] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:15.194 [2024-10-13 01:46:00.677585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.194 [2024-10-13 01:46:00.677603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.194 [2024-10-13 01:46:00.683408] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:15.194 [2024-10-13 01:46:00.683445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.194 [2024-10-13 01:46:00.683465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.194 [2024-10-13 01:46:00.689186] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:15.194 [2024-10-13 01:46:00.689223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.194 [2024-10-13 01:46:00.689243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.194 [2024-10-13 01:46:00.694890] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:15.194 [2024-10-13 01:46:00.694928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.194 [2024-10-13 01:46:00.694955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.194 [2024-10-13 01:46:00.700600] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:15.194 [2024-10-13 01:46:00.700634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.194 [2024-10-13 01:46:00.700651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.194 [2024-10-13 01:46:00.706057] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:15.194 [2024-10-13 01:46:00.706090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.194 [2024-10-13 01:46:00.706107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.194 [2024-10-13 01:46:00.711503] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:15.194 [2024-10-13 01:46:00.711536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.194 [2024-10-13 01:46:00.711554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.194 [2024-10-13 01:46:00.717116] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:15.194 [2024-10-13 01:46:00.717153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.194 [2024-10-13 01:46:00.717172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:15.194 [2024-10-13 01:46:00.722835] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:15.194 [2024-10-13 01:46:00.722873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.194 [2024-10-13 01:46:00.722893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:15.194 [2024-10-13 01:46:00.728603] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:15.194 [2024-10-13 01:46:00.728636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.194 [2024-10-13 01:46:00.728654] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:15.195 5242.50 IOPS, 655.31 MiB/s [2024-10-12T23:46:00.773Z] [2024-10-13 01:46:00.735689] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x835830) 00:35:15.195 [2024-10-13 01:46:00.735722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.195 [2024-10-13 01:46:00.735740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:15.195 00:35:15.195 Latency(us) 00:35:15.195 [2024-10-12T23:46:00.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:15.195 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:15.195 nvme0n1 : 2.01 5244.31 655.54 0.00 0.00 3045.42 837.40 8641.04 00:35:15.195 [2024-10-12T23:46:00.773Z] =================================================================================================================== 00:35:15.195 [2024-10-12T23:46:00.773Z] Total : 5244.31 655.54 0.00 0.00 3045.42 837.40 8641.04 00:35:15.195 { 00:35:15.195 "results": [ 00:35:15.195 { 00:35:15.195 "job": "nvme0n1", 00:35:15.195 "core_mask": "0x2", 00:35:15.195 "workload": "randread", 00:35:15.195 "status": "finished", 00:35:15.195 "queue_depth": 16, 00:35:15.195 "io_size": 131072, 00:35:15.195 "runtime": 2.00503, 00:35:15.195 "iops": 5244.310558944255, 00:35:15.195 "mibps": 655.5388198680319, 00:35:15.195 "io_failed": 0, 00:35:15.195 "io_timeout": 0, 00:35:15.195 "avg_latency_us": 3045.418019689685, 00:35:15.195 "min_latency_us": 837.4044444444445, 00:35:15.195 "max_latency_us": 8641.042962962963 00:35:15.195 } 00:35:15.195 ], 00:35:15.195 "core_count": 1 00:35:15.195 } 00:35:15.195 01:46:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:15.195 01:46:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:15.195 01:46:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:15.195 01:46:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:15.195 | .driver_specific 00:35:15.195 | .nvme_error 00:35:15.195 | .status_code 00:35:15.195 | .command_transient_transport_error' 00:35:15.453 01:46:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 339 > 0 )) 00:35:15.453 01:46:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1755904 00:35:15.453 01:46:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1755904 ']' 00:35:15.453 01:46:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1755904 00:35:15.453 01:46:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:35:15.453 01:46:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:15.453 01:46:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1755904 00:35:15.711 01:46:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:15.711 01:46:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:15.711 01:46:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1755904' 00:35:15.711 killing process with pid 1755904 00:35:15.711 01:46:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1755904 00:35:15.711 Received shutdown signal, test time was about 2.000000 seconds 00:35:15.711 00:35:15.711 Latency(us) 00:35:15.711 [2024-10-12T23:46:01.289Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:15.711 [2024-10-12T23:46:01.289Z] =================================================================================================================== 00:35:15.711 [2024-10-12T23:46:01.289Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:15.711 01:46:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1755904 00:35:15.711 01:46:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:35:15.711 01:46:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:15.711 01:46:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:35:15.711 01:46:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:35:15.711 01:46:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:35:15.711 01:46:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1756404 00:35:15.711 01:46:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:35:15.711 01:46:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1756404 /var/tmp/bperf.sock 00:35:15.711 01:46:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1756404 ']' 00:35:15.711 01:46:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:15.711 01:46:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:15.711 01:46:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:15.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:15.712 01:46:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:15.712 01:46:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:15.712 [2024-10-13 01:46:01.280708] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
00:35:15.712 [2024-10-13 01:46:01.280815] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1756404 ] 00:35:15.969 [2024-10-13 01:46:01.339915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:15.969 [2024-10-13 01:46:01.386781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:15.970 01:46:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:15.970 01:46:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:35:15.970 01:46:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:15.970 01:46:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:16.228 01:46:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:16.228 01:46:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:16.228 01:46:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:16.228 01:46:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:16.228 01:46:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:16.228 01:46:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:16.792 nvme0n1 00:35:16.792 01:46:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:16.792 01:46:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:16.792 01:46:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:16.792 01:46:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:16.792 01:46:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:16.792 01:46:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:16.792 Running I/O for 2 seconds... 
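For reference, the setup traced above reduces to the RPC sequence sketched below. This is a minimal reconstruction from the logged commands only, assuming the rpc.py/bdevperf.py paths and the /var/tmp/bperf.sock socket shown in the trace; it is an illustrative sketch, not part of the captured output.

RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
# collect per-controller NVMe error statistics; retry count -1 keeps retrying failed I/O
$RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# keep crc32c error injection off while the controller attaches
$RPC accel_error_inject_error -o crc32c -t disable
# attach over TCP with data digest (--ddgst) enabled on the connection
$RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# re-enable crc32c error injection (corrupt, -i 256 as logged) so received data fails the digest check
$RPC accel_error_inject_error -o crc32c -t corrupt -i 256
# drive the 2-second randwrite workload configured on the bdevperf command line
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests
# afterwards the script reads the transient-error count back, as in the earlier get_transient_errcount trace
$RPC bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'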
00:35:16.792 [2024-10-13 01:46:02.291468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166df988 00:35:16.792 [2024-10-13 01:46:02.293038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.792 [2024-10-13 01:46:02.293103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:16.792 [2024-10-13 01:46:02.305546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166fda78 00:35:16.793 [2024-10-13 01:46:02.307270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:24354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.793 [2024-10-13 01:46:02.307304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:16.793 [2024-10-13 01:46:02.319615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166f4b08 00:35:16.793 [2024-10-13 01:46:02.321494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:20251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.793 [2024-10-13 01:46:02.321544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:16.793 [2024-10-13 01:46:02.333619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166e95a0 00:35:16.793 [2024-10-13 01:46:02.335769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.793 [2024-10-13 01:46:02.335811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:16.793 [2024-10-13 01:46:02.343213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166e7c50 00:35:16.793 [2024-10-13 01:46:02.344221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:10586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.793 [2024-10-13 01:46:02.344254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:16.793 [2024-10-13 01:46:02.356976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ebb98 00:35:16.793 [2024-10-13 01:46:02.358171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:10471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:16.793 [2024-10-13 01:46:02.358204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:16.793 [2024-10-13 01:46:02.370134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ed920 00:35:17.051 [2024-10-13 01:46:02.371353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.051 [2024-10-13 01:46:02.371385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 
sqhd:0035 p:0 m:0 dnr:0 00:35:17.051 [2024-10-13 01:46:02.383076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166e0ea0 00:35:17.051 [2024-10-13 01:46:02.383769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:14902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.051 [2024-10-13 01:46:02.383816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:17.051 [2024-10-13 01:46:02.398073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166df550 00:35:17.051 [2024-10-13 01:46:02.399580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.051 [2024-10-13 01:46:02.399624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:17.051 [2024-10-13 01:46:02.409072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166f7538 00:35:17.052 [2024-10-13 01:46:02.409738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:14356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.052 [2024-10-13 01:46:02.409765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:17.052 [2024-10-13 01:46:02.422743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166e3498 00:35:17.052 [2024-10-13 01:46:02.423652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.052 [2024-10-13 01:46:02.423696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:17.052 [2024-10-13 01:46:02.438300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166e84c0 00:35:17.052 [2024-10-13 01:46:02.440295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.052 [2024-10-13 01:46:02.440327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:17.052 [2024-10-13 01:46:02.447491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166e3060 00:35:17.052 [2024-10-13 01:46:02.448290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.052 [2024-10-13 01:46:02.448321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:17.052 [2024-10-13 01:46:02.460299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166e3498 00:35:17.052 [2024-10-13 01:46:02.461266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.052 [2024-10-13 01:46:02.461298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:91 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:17.052 [2024-10-13 01:46:02.473992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166f5be8 00:35:17.052 [2024-10-13 01:46:02.475133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:9117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.052 [2024-10-13 01:46:02.475166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:17.052 [2024-10-13 01:46:02.489833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ee190 00:35:17.052 [2024-10-13 01:46:02.491773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:14902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.052 [2024-10-13 01:46:02.491819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:17.052 [2024-10-13 01:46:02.503047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166e7818 00:35:17.052 [2024-10-13 01:46:02.504906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:10749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.052 [2024-10-13 01:46:02.504938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:17.052 [2024-10-13 01:46:02.511956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166e0ea0 00:35:17.052 [2024-10-13 01:46:02.512798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.052 [2024-10-13 01:46:02.512843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:17.052 [2024-10-13 01:46:02.525226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166de038 00:35:17.052 [2024-10-13 01:46:02.526081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.052 [2024-10-13 01:46:02.526113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:17.052 [2024-10-13 01:46:02.540580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166fe2e8 00:35:17.052 [2024-10-13 01:46:02.542109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:20168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.052 [2024-10-13 01:46:02.542142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:17.052 [2024-10-13 01:46:02.554210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166f4b08 00:35:17.052 [2024-10-13 01:46:02.555896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.052 [2024-10-13 01:46:02.555929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:17.052 [2024-10-13 01:46:02.565901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ebb98 00:35:17.052 [2024-10-13 01:46:02.567720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:7390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.052 [2024-10-13 01:46:02.567749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:17.052 [2024-10-13 01:46:02.577068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166e95a0 00:35:17.052 [2024-10-13 01:46:02.577932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:17603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.052 [2024-10-13 01:46:02.577964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:35:17.052 [2024-10-13 01:46:02.592918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166f7100 00:35:17.052 [2024-10-13 01:46:02.594531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.052 [2024-10-13 01:46:02.594559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:17.052 [2024-10-13 01:46:02.605156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166f9b30 00:35:17.052 [2024-10-13 01:46:02.606408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.052 [2024-10-13 01:46:02.606439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:17.052 [2024-10-13 01:46:02.618147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166f6020 00:35:17.052 [2024-10-13 01:46:02.619272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.052 [2024-10-13 01:46:02.619303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:17.311 [2024-10-13 01:46:02.630970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166f57b0 00:35:17.311 [2024-10-13 01:46:02.632233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:18157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.311 [2024-10-13 01:46:02.632271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:17.311 [2024-10-13 01:46:02.644092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166e73e0 00:35:17.311 [2024-10-13 01:46:02.645320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.311 [2024-10-13 01:46:02.645351] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:17.311 [2024-10-13 01:46:02.656806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ea680 00:35:17.311 [2024-10-13 01:46:02.657651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:17331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.311 [2024-10-13 01:46:02.657681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:17.311 [2024-10-13 01:46:02.670532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166f4f40 00:35:17.311 [2024-10-13 01:46:02.671491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.311 [2024-10-13 01:46:02.671529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:17.311 [2024-10-13 01:46:02.683242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ddc00 00:35:17.311 [2024-10-13 01:46:02.684486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.311 [2024-10-13 01:46:02.684517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:17.311 [2024-10-13 01:46:02.696294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166f57b0 00:35:17.311 [2024-10-13 01:46:02.697453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:6278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.311 [2024-10-13 01:46:02.697492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:17.311 [2024-10-13 01:46:02.709279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166f5378 00:35:17.311 [2024-10-13 01:46:02.710639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.311 [2024-10-13 01:46:02.710667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:17.311 [2024-10-13 01:46:02.721803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166e1f80 00:35:17.311 [2024-10-13 01:46:02.722577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.311 [2024-10-13 01:46:02.722606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.311 [2024-10-13 01:46:02.733286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166f31b8 00:35:17.311 [2024-10-13 01:46:02.734020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.311 [2024-10-13 01:46:02.734048] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:17.311 [2024-10-13 01:46:02.747608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166e95a0 00:35:17.311 [2024-10-13 01:46:02.749233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.311 [2024-10-13 01:46:02.749275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:17.311 [2024-10-13 01:46:02.760080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166e2c28 00:35:17.311 [2024-10-13 01:46:02.762115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.311 [2024-10-13 01:46:02.762159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:17.311 [2024-10-13 01:46:02.768832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166e2c28 00:35:17.311 [2024-10-13 01:46:02.769784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:12379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.311 [2024-10-13 01:46:02.769827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:17.311 [2024-10-13 01:46:02.781506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ea680 00:35:17.311 [2024-10-13 01:46:02.782564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.311 [2024-10-13 01:46:02.782591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.311 [2024-10-13 01:46:02.795256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166e23b8 00:35:17.311 [2024-10-13 01:46:02.796655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:20460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.311 [2024-10-13 01:46:02.796684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:17.311 [2024-10-13 01:46:02.806590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166e84c0 00:35:17.311 [2024-10-13 01:46:02.807904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.311 [2024-10-13 01:46:02.807947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:17.311 [2024-10-13 01:46:02.818244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166e5220 00:35:17.311 [2024-10-13 01:46:02.819567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.311 [2024-10-13 
01:46:02.819596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:17.311 [2024-10-13 01:46:02.830830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ef270 00:35:17.311 [2024-10-13 01:46:02.832241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:18784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.311 [2024-10-13 01:46:02.832268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:35:17.311 [2024-10-13 01:46:02.843340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166e5a90 00:35:17.311 [2024-10-13 01:46:02.844930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:3887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.311 [2024-10-13 01:46:02.844973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:17.311 [2024-10-13 01:46:02.855853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166e9e10 00:35:17.311 [2024-10-13 01:46:02.857601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:11546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.311 [2024-10-13 01:46:02.857630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:17.311 [2024-10-13 01:46:02.868490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166fc560 00:35:17.311 [2024-10-13 01:46:02.870391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:3978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.311 [2024-10-13 01:46:02.870418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:17.311 [2024-10-13 01:46:02.877302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166f92c0 00:35:17.311 [2024-10-13 01:46:02.878182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.311 [2024-10-13 01:46:02.878209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:17.570 [2024-10-13 01:46:02.889603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166e3d08 00:35:17.570 [2024-10-13 01:46:02.890454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:8502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.570 [2024-10-13 01:46:02.890503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:17.570 [2024-10-13 01:46:02.901908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166e2c28 00:35:17.570 [2024-10-13 01:46:02.902966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:15165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:17.570 [2024-10-13 01:46:02.902993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:17.570 [2024-10-13 01:46:02.913266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166e99d8 00:35:17.570 [2024-10-13 01:46:02.914216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:12777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.570 [2024-10-13 01:46:02.914243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:17.570 [2024-10-13 01:46:02.925852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166e5220 00:35:17.570 [2024-10-13 01:46:02.927042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:10773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.570 [2024-10-13 01:46:02.927086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:17.570 [2024-10-13 01:46:02.937818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166f92c0 00:35:17.570 [2024-10-13 01:46:02.938544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:16640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.570 [2024-10-13 01:46:02.938573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:17.570 [2024-10-13 01:46:02.950421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166fc128 00:35:17.570 [2024-10-13 01:46:02.951298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:22047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.570 [2024-10-13 01:46:02.951335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:17.570 [2024-10-13 01:46:02.962050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166e99d8 00:35:17.570 [2024-10-13 01:46:02.963257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.570 [2024-10-13 01:46:02.963285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:17.570 [2024-10-13 01:46:02.973187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166de470 00:35:17.570 [2024-10-13 01:46:02.974122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:19756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.570 [2024-10-13 01:46:02.974149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:17.570 [2024-10-13 01:46:02.984430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166eea00 00:35:17.570 [2024-10-13 01:46:02.985271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12402 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:35:17.570 [2024-10-13 01:46:02.985314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:17.570 [2024-10-13 01:46:02.999287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166eaef0 00:35:17.570 [2024-10-13 01:46:03.000696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.570 [2024-10-13 01:46:03.000725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:17.570 [2024-10-13 01:46:03.010709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166e1f80 00:35:17.570 [2024-10-13 01:46:03.011927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.571 [2024-10-13 01:46:03.011970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.571 [2024-10-13 01:46:03.022711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ddc00 00:35:17.571 [2024-10-13 01:46:03.024099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.571 [2024-10-13 01:46:03.024128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:17.571 [2024-10-13 01:46:03.035307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166fbcf0 00:35:17.571 [2024-10-13 01:46:03.036856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:19047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.571 [2024-10-13 01:46:03.036899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:17.571 [2024-10-13 01:46:03.047682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166de8a8 00:35:17.571 [2024-10-13 01:46:03.049155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:13545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.571 [2024-10-13 01:46:03.049199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:17.571 [2024-10-13 01:46:03.059081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166fdeb0 00:35:17.571 [2024-10-13 01:46:03.060429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.571 [2024-10-13 01:46:03.060479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:17.571 [2024-10-13 01:46:03.070478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:17.571 [2024-10-13 01:46:03.071169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 
lba:14103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.571 [2024-10-13 01:46:03.071197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:17.571 [2024-10-13 01:46:03.084199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:17.571 [2024-10-13 01:46:03.084462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:9386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.571 [2024-10-13 01:46:03.084497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:17.571 [2024-10-13 01:46:03.098085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:17.571 [2024-10-13 01:46:03.098344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.571 [2024-10-13 01:46:03.098372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:17.571 [2024-10-13 01:46:03.112072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:17.571 [2024-10-13 01:46:03.112279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:16390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.571 [2024-10-13 01:46:03.112306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:17.571 [2024-10-13 01:46:03.125878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:17.571 [2024-10-13 01:46:03.126135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.571 [2024-10-13 01:46:03.126163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:17.571 [2024-10-13 01:46:03.139621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:17.571 [2024-10-13 01:46:03.139869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:23026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.571 [2024-10-13 01:46:03.139896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:17.829 [2024-10-13 01:46:03.152919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:17.830 [2024-10-13 01:46:03.153129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.830 [2024-10-13 01:46:03.153157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:17.830 [2024-10-13 01:46:03.165674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:17.830 [2024-10-13 01:46:03.165895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:56 nsid:1 lba:4966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.830 [2024-10-13 01:46:03.165921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:17.830 [2024-10-13 01:46:03.179556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:17.830 [2024-10-13 01:46:03.179738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.830 [2024-10-13 01:46:03.179766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:17.830 [2024-10-13 01:46:03.193406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:17.830 [2024-10-13 01:46:03.193634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.830 [2024-10-13 01:46:03.193663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:17.830 [2024-10-13 01:46:03.207441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:17.830 [2024-10-13 01:46:03.207666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.830 [2024-10-13 01:46:03.207694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:17.830 [2024-10-13 01:46:03.221267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:17.830 [2024-10-13 01:46:03.221522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.830 [2024-10-13 01:46:03.221552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:17.830 [2024-10-13 01:46:03.235277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:17.830 [2024-10-13 01:46:03.235496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:19768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.830 [2024-10-13 01:46:03.235526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:17.830 [2024-10-13 01:46:03.249001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:17.830 [2024-10-13 01:46:03.249262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:24875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.830 [2024-10-13 01:46:03.249291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:17.830 [2024-10-13 01:46:03.262935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:17.830 [2024-10-13 01:46:03.263145] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.830 [2024-10-13 01:46:03.263173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:17.830 [2024-10-13 01:46:03.276684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:17.830 [2024-10-13 01:46:03.276922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.830 [2024-10-13 01:46:03.276950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:17.830 19901.00 IOPS, 77.74 MiB/s [2024-10-12T23:46:03.408Z] [2024-10-13 01:46:03.290687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:17.830 [2024-10-13 01:46:03.290899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:10215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.830 [2024-10-13 01:46:03.290936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:17.830 [2024-10-13 01:46:03.304571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:17.830 [2024-10-13 01:46:03.304819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.830 [2024-10-13 01:46:03.304847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:17.830 [2024-10-13 01:46:03.318459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:17.830 [2024-10-13 01:46:03.318740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:23420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.830 [2024-10-13 01:46:03.318768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:17.830 [2024-10-13 01:46:03.332522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:17.830 [2024-10-13 01:46:03.332734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.830 [2024-10-13 01:46:03.332762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:17.830 [2024-10-13 01:46:03.346423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:17.830 [2024-10-13 01:46:03.346642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:14978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.830 [2024-10-13 01:46:03.346677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:17.830 [2024-10-13 01:46:03.360305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with 
pdu=0x2000166ec840 00:35:17.830 [2024-10-13 01:46:03.360558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:10398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.830 [2024-10-13 01:46:03.360586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:17.830 [2024-10-13 01:46:03.374315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:17.830 [2024-10-13 01:46:03.374568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:25124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.830 [2024-10-13 01:46:03.374596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:17.830 [2024-10-13 01:46:03.388242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:17.830 [2024-10-13 01:46:03.388507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.830 [2024-10-13 01:46:03.388536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:17.830 [2024-10-13 01:46:03.402234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:17.830 [2024-10-13 01:46:03.402533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:17.830 [2024-10-13 01:46:03.402560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.089 [2024-10-13 01:46:03.415729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.089 [2024-10-13 01:46:03.415975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.089 [2024-10-13 01:46:03.416003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.089 [2024-10-13 01:46:03.429433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.089 [2024-10-13 01:46:03.429629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.089 [2024-10-13 01:46:03.429657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.089 [2024-10-13 01:46:03.443095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.089 [2024-10-13 01:46:03.443364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:12443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.089 [2024-10-13 01:46:03.443392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.089 [2024-10-13 01:46:03.456737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.089 [2024-10-13 01:46:03.456950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.089 [2024-10-13 01:46:03.456977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.089 [2024-10-13 01:46:03.470321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.089 [2024-10-13 01:46:03.470578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.089 [2024-10-13 01:46:03.470605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.089 [2024-10-13 01:46:03.483973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.089 [2024-10-13 01:46:03.484251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:9534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.089 [2024-10-13 01:46:03.484278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.089 [2024-10-13 01:46:03.497552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.089 [2024-10-13 01:46:03.497789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:23671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.089 [2024-10-13 01:46:03.497817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.089 [2024-10-13 01:46:03.511216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.089 [2024-10-13 01:46:03.511439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.089 [2024-10-13 01:46:03.511465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.089 [2024-10-13 01:46:03.524827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.089 [2024-10-13 01:46:03.525084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:16725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.089 [2024-10-13 01:46:03.525111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.089 [2024-10-13 01:46:03.538603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.089 [2024-10-13 01:46:03.538844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.089 [2024-10-13 01:46:03.538872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.089 [2024-10-13 01:46:03.552224] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.089 [2024-10-13 01:46:03.552507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.089 [2024-10-13 01:46:03.552535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.089 [2024-10-13 01:46:03.565991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.089 [2024-10-13 01:46:03.566258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:24569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.089 [2024-10-13 01:46:03.566286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.089 [2024-10-13 01:46:03.579800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.089 [2024-10-13 01:46:03.580043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:5476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.089 [2024-10-13 01:46:03.580070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.089 [2024-10-13 01:46:03.593763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.089 [2024-10-13 01:46:03.594024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.089 [2024-10-13 01:46:03.594050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.089 [2024-10-13 01:46:03.607782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.089 [2024-10-13 01:46:03.608043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.089 [2024-10-13 01:46:03.608070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.089 [2024-10-13 01:46:03.621910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.089 [2024-10-13 01:46:03.622139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.089 [2024-10-13 01:46:03.622167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.089 [2024-10-13 01:46:03.635984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.089 [2024-10-13 01:46:03.636205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:18734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.089 [2024-10-13 01:46:03.636232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.089 [2024-10-13 01:46:03.649992] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.089 [2024-10-13 01:46:03.650202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.089 [2024-10-13 01:46:03.650239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.089 [2024-10-13 01:46:03.663913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.089 [2024-10-13 01:46:03.664159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:14735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.089 [2024-10-13 01:46:03.664187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.348 [2024-10-13 01:46:03.677463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.348 [2024-10-13 01:46:03.677658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.348 [2024-10-13 01:46:03.677685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.348 [2024-10-13 01:46:03.691365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.348 [2024-10-13 01:46:03.691580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.348 [2024-10-13 01:46:03.691608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.348 [2024-10-13 01:46:03.705496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.348 [2024-10-13 01:46:03.705776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.348 [2024-10-13 01:46:03.705804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.348 [2024-10-13 01:46:03.719599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.348 [2024-10-13 01:46:03.719877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:10418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.348 [2024-10-13 01:46:03.719905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.348 [2024-10-13 01:46:03.733627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.348 [2024-10-13 01:46:03.733885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.348 [2024-10-13 01:46:03.733912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.348 
[2024-10-13 01:46:03.747752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.348 [2024-10-13 01:46:03.748000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.348 [2024-10-13 01:46:03.748028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.348 [2024-10-13 01:46:03.761875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.348 [2024-10-13 01:46:03.762142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.348 [2024-10-13 01:46:03.762169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.348 [2024-10-13 01:46:03.776096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.348 [2024-10-13 01:46:03.776367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.348 [2024-10-13 01:46:03.776394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.348 [2024-10-13 01:46:03.789913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.348 [2024-10-13 01:46:03.790206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.348 [2024-10-13 01:46:03.790234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.348 [2024-10-13 01:46:03.803507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.348 [2024-10-13 01:46:03.803754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.348 [2024-10-13 01:46:03.803781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.348 [2024-10-13 01:46:03.817158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.348 [2024-10-13 01:46:03.817404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.348 [2024-10-13 01:46:03.817431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.348 [2024-10-13 01:46:03.830812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.348 [2024-10-13 01:46:03.831049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.348 [2024-10-13 01:46:03.831076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007d p:0 m:0 
dnr:0 00:35:18.348 [2024-10-13 01:46:03.844645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.348 [2024-10-13 01:46:03.844930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.348 [2024-10-13 01:46:03.844958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.349 [2024-10-13 01:46:03.858319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.349 [2024-10-13 01:46:03.858616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.349 [2024-10-13 01:46:03.858644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.349 [2024-10-13 01:46:03.871835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.349 [2024-10-13 01:46:03.872067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:18819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.349 [2024-10-13 01:46:03.872095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.349 [2024-10-13 01:46:03.885457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.349 [2024-10-13 01:46:03.885695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.349 [2024-10-13 01:46:03.885722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.349 [2024-10-13 01:46:03.899106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.349 [2024-10-13 01:46:03.899340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:22781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.349 [2024-10-13 01:46:03.899367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.349 [2024-10-13 01:46:03.913611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.349 [2024-10-13 01:46:03.913947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.349 [2024-10-13 01:46:03.913973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.608 [2024-10-13 01:46:03.928127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.608 [2024-10-13 01:46:03.928393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:19377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.608 [2024-10-13 01:46:03.928421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 
cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.608 [2024-10-13 01:46:03.942534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.608 [2024-10-13 01:46:03.942868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:23938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.608 [2024-10-13 01:46:03.942894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.608 [2024-10-13 01:46:03.956924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.608 [2024-10-13 01:46:03.957164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.608 [2024-10-13 01:46:03.957191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.608 [2024-10-13 01:46:03.971410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.608 [2024-10-13 01:46:03.971655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.608 [2024-10-13 01:46:03.971683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.608 [2024-10-13 01:46:03.985805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.608 [2024-10-13 01:46:03.986074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.608 [2024-10-13 01:46:03.986101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.608 [2024-10-13 01:46:04.000285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.608 [2024-10-13 01:46:04.000539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.608 [2024-10-13 01:46:04.000566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.608 [2024-10-13 01:46:04.014730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.608 [2024-10-13 01:46:04.015027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:8268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.608 [2024-10-13 01:46:04.015060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.608 [2024-10-13 01:46:04.029347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.608 [2024-10-13 01:46:04.029594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.608 [2024-10-13 01:46:04.029622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.608 [2024-10-13 01:46:04.043648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.608 [2024-10-13 01:46:04.043896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.608 [2024-10-13 01:46:04.043922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.608 [2024-10-13 01:46:04.057978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.608 [2024-10-13 01:46:04.058241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:24640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.608 [2024-10-13 01:46:04.058283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.608 [2024-10-13 01:46:04.072348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.608 [2024-10-13 01:46:04.072598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.608 [2024-10-13 01:46:04.072625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.608 [2024-10-13 01:46:04.086926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.608 [2024-10-13 01:46:04.087172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:19130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.608 [2024-10-13 01:46:04.087199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.608 [2024-10-13 01:46:04.101336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.608 [2024-10-13 01:46:04.101671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.608 [2024-10-13 01:46:04.101699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.608 [2024-10-13 01:46:04.115848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.608 [2024-10-13 01:46:04.116092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:9040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.608 [2024-10-13 01:46:04.116133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.608 [2024-10-13 01:46:04.130331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.608 [2024-10-13 01:46:04.130602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:25399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.608 [2024-10-13 01:46:04.130630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.608 [2024-10-13 01:46:04.144823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.608 [2024-10-13 01:46:04.145093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:5913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.608 [2024-10-13 01:46:04.145128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.608 [2024-10-13 01:46:04.159254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.608 [2024-10-13 01:46:04.159497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.608 [2024-10-13 01:46:04.159526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.608 [2024-10-13 01:46:04.173740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.608 [2024-10-13 01:46:04.174008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.608 [2024-10-13 01:46:04.174051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.866 [2024-10-13 01:46:04.188070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.866 [2024-10-13 01:46:04.188280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.866 [2024-10-13 01:46:04.188307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.866 [2024-10-13 01:46:04.202398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.866 [2024-10-13 01:46:04.202658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.866 [2024-10-13 01:46:04.202686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.866 [2024-10-13 01:46:04.216848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.866 [2024-10-13 01:46:04.217063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.866 [2024-10-13 01:46:04.217105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.866 [2024-10-13 01:46:04.231285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.866 [2024-10-13 01:46:04.231545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:6761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.866 [2024-10-13 01:46:04.231573] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.866 [2024-10-13 01:46:04.245647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.866 [2024-10-13 01:46:04.245894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.867 [2024-10-13 01:46:04.245922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.867 [2024-10-13 01:46:04.260114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.867 [2024-10-13 01:46:04.260378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.867 [2024-10-13 01:46:04.260421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.867 [2024-10-13 01:46:04.274524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.867 [2024-10-13 01:46:04.274849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:2891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.867 [2024-10-13 01:46:04.274891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.867 19043.00 IOPS, 74.39 MiB/s [2024-10-12T23:46:04.445Z] [2024-10-13 01:46:04.289076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12540) with pdu=0x2000166ec840 00:35:18.867 [2024-10-13 01:46:04.289310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:18.867 [2024-10-13 01:46:04.289354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:18.867 00:35:18.867 Latency(us) 00:35:18.867 [2024-10-12T23:46:04.445Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:18.867 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:18.867 nvme0n1 : 2.01 19037.70 74.37 0.00 0.00 6706.87 2682.12 18155.90 00:35:18.867 [2024-10-12T23:46:04.445Z] =================================================================================================================== 00:35:18.867 [2024-10-12T23:46:04.445Z] Total : 19037.70 74.37 0.00 0.00 6706.87 2682.12 18155.90 00:35:18.867 { 00:35:18.867 "results": [ 00:35:18.867 { 00:35:18.867 "job": "nvme0n1", 00:35:18.867 "core_mask": "0x2", 00:35:18.867 "workload": "randwrite", 00:35:18.867 "status": "finished", 00:35:18.867 "queue_depth": 128, 00:35:18.867 "io_size": 4096, 00:35:18.867 "runtime": 2.009381, 00:35:18.867 "iops": 19037.70365102487, 00:35:18.867 "mibps": 74.3660298868159, 00:35:18.867 "io_failed": 0, 00:35:18.867 "io_timeout": 0, 00:35:18.867 "avg_latency_us": 6706.873008603313, 00:35:18.867 "min_latency_us": 2682.1214814814816, 00:35:18.867 "max_latency_us": 18155.89925925926 00:35:18.867 } 00:35:18.867 ], 00:35:18.867 "core_count": 1 00:35:18.867 } 00:35:18.867 01:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 
00:35:18.867 01:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:18.867 01:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:18.867 01:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:18.867 | .driver_specific 00:35:18.867 | .nvme_error 00:35:18.867 | .status_code 00:35:18.867 | .command_transient_transport_error' 00:35:19.125 01:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 150 > 0 )) 00:35:19.125 01:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1756404 00:35:19.125 01:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1756404 ']' 00:35:19.125 01:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1756404 00:35:19.125 01:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:35:19.125 01:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:19.125 01:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1756404 00:35:19.125 01:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:19.125 01:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:19.125 01:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1756404' 00:35:19.125 killing process with pid 1756404 00:35:19.125 01:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1756404 00:35:19.125 Received shutdown signal, test time was about 2.000000 seconds 00:35:19.125 00:35:19.125 Latency(us) 00:35:19.125 [2024-10-12T23:46:04.703Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:19.125 [2024-10-12T23:46:04.703Z] =================================================================================================================== 00:35:19.125 [2024-10-12T23:46:04.703Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:19.125 01:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1756404 00:35:19.384 01:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:35:19.384 01:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:19.384 01:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:35:19.384 01:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:35:19.384 01:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:35:19.384 01:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1756834 00:35:19.384 01:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 
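For readability, the error-count check traced above boils down to the following shell sketch. The rpc.py path, the bperf socket, and the jq filter are copied verbatim from the trace; the variable name and the final comment are illustrative assumptions, not part of the test script itself.

  # Ask the bdevperf instance for per-bdev I/O statistics and pull out the count of
  # completions that ended in COMMAND TRANSIENT TRANSPORT ERROR. The per-status-code
  # counters come from the --nvme-error-stat option seen elsewhere in this trace.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  errcount=$("$RPC" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # The test passes when at least one injected digest error was observed (150 in this run).
  (( errcount > 0 ))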
00:35:19.384 01:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1756834 /var/tmp/bperf.sock 00:35:19.384 01:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1756834 ']' 00:35:19.384 01:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:19.384 01:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:19.384 01:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:19.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:19.384 01:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:19.384 01:46:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:19.384 [2024-10-13 01:46:04.854847] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:35:19.384 [2024-10-13 01:46:04.854927] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1756834 ] 00:35:19.384 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:19.384 Zero copy mechanism will not be used. 00:35:19.384 [2024-10-13 01:46:04.915881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:19.642 [2024-10-13 01:46:04.966406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:19.642 01:46:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:19.642 01:46:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:35:19.642 01:46:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:19.642 01:46:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:19.900 01:46:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:19.900 01:46:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:19.900 01:46:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:19.900 01:46:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:19.900 01:46:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:19.900 01:46:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:20.158 nvme0n1 00:35:20.158 
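The controller set-up just traced can be summarized as the sketch below. The socket path, target address, and NQN are copied from the trace; that the accel_error_inject_error call issued through rpc_cmd goes to the target application's default RPC socket (rather than /var/tmp/bperf.sock) is an assumption.

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  BPERF_SOCK=/var/tmp/bperf.sock
  # Track NVMe errors per status code and retry failed I/O indefinitely.
  "$RPC" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Start with crc32c error injection disabled; it is switched to 'corrupt' once the bdev
  # is attached (see the accel_error_inject_error -t corrupt -i 32 call that follows).
  "$RPC" accel_error_inject_error -o crc32c -t disable    # assumed: default RPC socket
  # Attach the remote namespace over TCP with data digest (DDGST) enabled.
  "$RPC" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0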
01:46:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:20.158 01:46:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.158 01:46:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:20.158 01:46:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.158 01:46:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:20.158 01:46:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:20.416 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:20.416 Zero copy mechanism will not be used. 00:35:20.416 Running I/O for 2 seconds... 00:35:20.416 [2024-10-13 01:46:05.823443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.416 [2024-10-13 01:46:05.823820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.416 [2024-10-13 01:46:05.823874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.416 [2024-10-13 01:46:05.830400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.416 [2024-10-13 01:46:05.830735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.416 [2024-10-13 01:46:05.830792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.416 [2024-10-13 01:46:05.837297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.416 [2024-10-13 01:46:05.837630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.416 [2024-10-13 01:46:05.837661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.416 [2024-10-13 01:46:05.844643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.416 [2024-10-13 01:46:05.844992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.416 [2024-10-13 01:46:05.845025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.416 [2024-10-13 01:46:05.851581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.416 [2024-10-13 01:46:05.851922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.416 [2024-10-13 01:46:05.851955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.416 [2024-10-13 01:46:05.859231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.416 [2024-10-13 01:46:05.859565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.416 [2024-10-13 01:46:05.859605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.416 [2024-10-13 01:46:05.866803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.416 [2024-10-13 01:46:05.867136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.417 [2024-10-13 01:46:05.867169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.417 [2024-10-13 01:46:05.873279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.417 [2024-10-13 01:46:05.873644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.417 [2024-10-13 01:46:05.873674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.417 [2024-10-13 01:46:05.879977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.417 [2024-10-13 01:46:05.880410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.417 [2024-10-13 01:46:05.880443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.417 [2024-10-13 01:46:05.886773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.417 [2024-10-13 01:46:05.887109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.417 [2024-10-13 01:46:05.887142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.417 [2024-10-13 01:46:05.893616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.417 [2024-10-13 01:46:05.893989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.417 [2024-10-13 01:46:05.894022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.417 [2024-10-13 01:46:05.899336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.417 [2024-10-13 01:46:05.899662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.417 [2024-10-13 01:46:05.899693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.417 [2024-10-13 01:46:05.904978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.417 [2024-10-13 01:46:05.905325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.417 [2024-10-13 01:46:05.905357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.417 [2024-10-13 01:46:05.910851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.417 [2024-10-13 01:46:05.911165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.417 [2024-10-13 01:46:05.911193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.417 [2024-10-13 01:46:05.917499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.417 [2024-10-13 01:46:05.917956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.417 [2024-10-13 01:46:05.917988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.417 [2024-10-13 01:46:05.923321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.417 [2024-10-13 01:46:05.923648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.417 [2024-10-13 01:46:05.923678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.417 [2024-10-13 01:46:05.929053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.417 [2024-10-13 01:46:05.929367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.417 [2024-10-13 01:46:05.929399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.417 [2024-10-13 01:46:05.934712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.417 [2024-10-13 01:46:05.935069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.417 [2024-10-13 01:46:05.935101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.417 [2024-10-13 01:46:05.941121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.417 [2024-10-13 01:46:05.941436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.417 [2024-10-13 01:46:05.941478] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.417 [2024-10-13 01:46:05.948138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.417 [2024-10-13 01:46:05.948495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.417 [2024-10-13 01:46:05.948540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.417 [2024-10-13 01:46:05.955328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.417 [2024-10-13 01:46:05.955694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.417 [2024-10-13 01:46:05.955722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.417 [2024-10-13 01:46:05.962260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.417 [2024-10-13 01:46:05.962619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.417 [2024-10-13 01:46:05.962650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.417 [2024-10-13 01:46:05.968032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.417 [2024-10-13 01:46:05.968348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.417 [2024-10-13 01:46:05.968381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.417 [2024-10-13 01:46:05.973840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.417 [2024-10-13 01:46:05.974195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.417 [2024-10-13 01:46:05.974228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.417 [2024-10-13 01:46:05.979639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.417 [2024-10-13 01:46:05.979992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.417 [2024-10-13 01:46:05.980024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.417 [2024-10-13 01:46:05.987436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.417 [2024-10-13 01:46:05.987818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.417 
[2024-10-13 01:46:05.987852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.417 [2024-10-13 01:46:05.993433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.417 [2024-10-13 01:46:05.993759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.417 [2024-10-13 01:46:05.993815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.677 [2024-10-13 01:46:05.999144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.677 [2024-10-13 01:46:05.999458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.677 [2024-10-13 01:46:05.999503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.677 [2024-10-13 01:46:06.004763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.677 [2024-10-13 01:46:06.005092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.677 [2024-10-13 01:46:06.005125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.677 [2024-10-13 01:46:06.010516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.677 [2024-10-13 01:46:06.010825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.677 [2024-10-13 01:46:06.010857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.677 [2024-10-13 01:46:06.016094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.677 [2024-10-13 01:46:06.016445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.677 [2024-10-13 01:46:06.016493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.677 [2024-10-13 01:46:06.021922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.677 [2024-10-13 01:46:06.022241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.677 [2024-10-13 01:46:06.022279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.677 [2024-10-13 01:46:06.028426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.677 [2024-10-13 01:46:06.028757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.677 [2024-10-13 01:46:06.028791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.677 [2024-10-13 01:46:06.034409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.677 [2024-10-13 01:46:06.034772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.677 [2024-10-13 01:46:06.034807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.677 [2024-10-13 01:46:06.040109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.677 [2024-10-13 01:46:06.040456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.677 [2024-10-13 01:46:06.040497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.677 [2024-10-13 01:46:06.046045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.677 [2024-10-13 01:46:06.046390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.677 [2024-10-13 01:46:06.046422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.677 [2024-10-13 01:46:06.052521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.677 [2024-10-13 01:46:06.052923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.677 [2024-10-13 01:46:06.052955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.677 [2024-10-13 01:46:06.058603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.677 [2024-10-13 01:46:06.058970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.678 [2024-10-13 01:46:06.059002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.678 [2024-10-13 01:46:06.064861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.678 [2024-10-13 01:46:06.065159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.678 [2024-10-13 01:46:06.065191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.678 [2024-10-13 01:46:06.071219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.678 [2024-10-13 01:46:06.071553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.678 [2024-10-13 01:46:06.071582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.678 [2024-10-13 01:46:06.076874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.678 [2024-10-13 01:46:06.077229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.678 [2024-10-13 01:46:06.077261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.678 [2024-10-13 01:46:06.082528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.678 [2024-10-13 01:46:06.082863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.678 [2024-10-13 01:46:06.082897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.678 [2024-10-13 01:46:06.088108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.678 [2024-10-13 01:46:06.088458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.678 [2024-10-13 01:46:06.088515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.678 [2024-10-13 01:46:06.093731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.678 [2024-10-13 01:46:06.094061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.678 [2024-10-13 01:46:06.094093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.678 [2024-10-13 01:46:06.099363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.678 [2024-10-13 01:46:06.099715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.678 [2024-10-13 01:46:06.099744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.678 [2024-10-13 01:46:06.105573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.678 [2024-10-13 01:46:06.105970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.678 [2024-10-13 01:46:06.106002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.678 [2024-10-13 01:46:06.111722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.678 [2024-10-13 01:46:06.112055] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.678 [2024-10-13 01:46:06.112089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.678 [2024-10-13 01:46:06.117349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.678 [2024-10-13 01:46:06.117669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.678 [2024-10-13 01:46:06.117699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.678 [2024-10-13 01:46:06.123009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.678 [2024-10-13 01:46:06.123360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.678 [2024-10-13 01:46:06.123392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.678 [2024-10-13 01:46:06.128751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.678 [2024-10-13 01:46:06.129123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.678 [2024-10-13 01:46:06.129154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.678 [2024-10-13 01:46:06.135201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.678 [2024-10-13 01:46:06.135535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.678 [2024-10-13 01:46:06.135565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.678 [2024-10-13 01:46:06.142376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.678 [2024-10-13 01:46:06.142693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.678 [2024-10-13 01:46:06.142723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.678 [2024-10-13 01:46:06.149179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.678 [2024-10-13 01:46:06.149553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.678 [2024-10-13 01:46:06.149581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.678 [2024-10-13 01:46:06.154967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.678 
[2024-10-13 01:46:06.155282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.678 [2024-10-13 01:46:06.155314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.678 [2024-10-13 01:46:06.161139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.678 [2024-10-13 01:46:06.161530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.678 [2024-10-13 01:46:06.161559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.678 [2024-10-13 01:46:06.167856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.678 [2024-10-13 01:46:06.168250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.678 [2024-10-13 01:46:06.168282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.678 [2024-10-13 01:46:06.175033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.678 [2024-10-13 01:46:06.175404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.678 [2024-10-13 01:46:06.175436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.678 [2024-10-13 01:46:06.182136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.678 [2024-10-13 01:46:06.182452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.678 [2024-10-13 01:46:06.182498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.678 [2024-10-13 01:46:06.187950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.678 [2024-10-13 01:46:06.188300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.678 [2024-10-13 01:46:06.188332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.678 [2024-10-13 01:46:06.194056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.678 [2024-10-13 01:46:06.194370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.678 [2024-10-13 01:46:06.194402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.678 [2024-10-13 01:46:06.200002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.678 [2024-10-13 01:46:06.200317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.679 [2024-10-13 01:46:06.200350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.679 [2024-10-13 01:46:06.205700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.679 [2024-10-13 01:46:06.206021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.679 [2024-10-13 01:46:06.206054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.679 [2024-10-13 01:46:06.212186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.679 [2024-10-13 01:46:06.212584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.679 [2024-10-13 01:46:06.212627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.679 [2024-10-13 01:46:06.218848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.679 [2024-10-13 01:46:06.219261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.679 [2024-10-13 01:46:06.219293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.679 [2024-10-13 01:46:06.225525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.679 [2024-10-13 01:46:06.225889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.679 [2024-10-13 01:46:06.225921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.679 [2024-10-13 01:46:06.231368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.679 [2024-10-13 01:46:06.231685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.679 [2024-10-13 01:46:06.231714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.679 [2024-10-13 01:46:06.237219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.679 [2024-10-13 01:46:06.237556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.679 [2024-10-13 01:46:06.237585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.679 [2024-10-13 01:46:06.242930] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.679 [2024-10-13 01:46:06.243284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.679 [2024-10-13 01:46:06.243316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.679 [2024-10-13 01:46:06.248591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.679 [2024-10-13 01:46:06.248938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.679 [2024-10-13 01:46:06.248970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.679 [2024-10-13 01:46:06.254813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.938 [2024-10-13 01:46:06.255157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.938 [2024-10-13 01:46:06.255186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.938 [2024-10-13 01:46:06.261200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.938 [2024-10-13 01:46:06.261627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.938 [2024-10-13 01:46:06.261670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.938 [2024-10-13 01:46:06.267930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.938 [2024-10-13 01:46:06.268327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.938 [2024-10-13 01:46:06.268373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.938 [2024-10-13 01:46:06.274054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.938 [2024-10-13 01:46:06.274384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.938 [2024-10-13 01:46:06.274412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.938 [2024-10-13 01:46:06.280503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.938 [2024-10-13 01:46:06.280841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.938 [2024-10-13 01:46:06.280869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:35:20.938 [2024-10-13 01:46:06.287040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.938 [2024-10-13 01:46:06.287460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.938 [2024-10-13 01:46:06.287520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.938 [2024-10-13 01:46:06.293639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.938 [2024-10-13 01:46:06.293949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.938 [2024-10-13 01:46:06.293979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.938 [2024-10-13 01:46:06.299569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.938 [2024-10-13 01:46:06.299940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.938 [2024-10-13 01:46:06.299984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.938 [2024-10-13 01:46:06.306108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.938 [2024-10-13 01:46:06.306544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.938 [2024-10-13 01:46:06.306574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.938 [2024-10-13 01:46:06.312982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.938 [2024-10-13 01:46:06.313285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.938 [2024-10-13 01:46:06.313313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.938 [2024-10-13 01:46:06.319866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.938 [2024-10-13 01:46:06.320186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.938 [2024-10-13 01:46:06.320214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.938 [2024-10-13 01:46:06.327384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.938 [2024-10-13 01:46:06.327492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.938 [2024-10-13 01:46:06.327520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.938 [2024-10-13 01:46:06.334850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.938 [2024-10-13 01:46:06.335212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.938 [2024-10-13 01:46:06.335241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.938 [2024-10-13 01:46:06.342014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.938 [2024-10-13 01:46:06.342357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.938 [2024-10-13 01:46:06.342385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.938 [2024-10-13 01:46:06.348465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.938 [2024-10-13 01:46:06.348800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.938 [2024-10-13 01:46:06.348828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.938 [2024-10-13 01:46:06.354618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.938 [2024-10-13 01:46:06.354904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.938 [2024-10-13 01:46:06.354933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.938 [2024-10-13 01:46:06.360734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.938 [2024-10-13 01:46:06.361039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.939 [2024-10-13 01:46:06.361068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.939 [2024-10-13 01:46:06.367022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.939 [2024-10-13 01:46:06.367338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.939 [2024-10-13 01:46:06.367366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.939 [2024-10-13 01:46:06.373426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.939 [2024-10-13 01:46:06.373796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.939 [2024-10-13 01:46:06.373825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.939 [2024-10-13 01:46:06.379758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.939 [2024-10-13 01:46:06.380073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.939 [2024-10-13 01:46:06.380102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.939 [2024-10-13 01:46:06.386096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.939 [2024-10-13 01:46:06.386413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.939 [2024-10-13 01:46:06.386441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.939 [2024-10-13 01:46:06.392820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.939 [2024-10-13 01:46:06.393137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.939 [2024-10-13 01:46:06.393179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.939 [2024-10-13 01:46:06.399846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.939 [2024-10-13 01:46:06.400180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.939 [2024-10-13 01:46:06.400208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.939 [2024-10-13 01:46:06.407137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.939 [2024-10-13 01:46:06.407511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.939 [2024-10-13 01:46:06.407539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.939 [2024-10-13 01:46:06.413932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.939 [2024-10-13 01:46:06.414255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.939 [2024-10-13 01:46:06.414283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.939 [2024-10-13 01:46:06.420141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.939 [2024-10-13 01:46:06.420460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.939 [2024-10-13 01:46:06.420497] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.939 [2024-10-13 01:46:06.425867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.939 [2024-10-13 01:46:06.426199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.939 [2024-10-13 01:46:06.426227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.939 [2024-10-13 01:46:06.432022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.939 [2024-10-13 01:46:06.432343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.939 [2024-10-13 01:46:06.432372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.939 [2024-10-13 01:46:06.437598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.939 [2024-10-13 01:46:06.437884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.939 [2024-10-13 01:46:06.437912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.939 [2024-10-13 01:46:06.443096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.939 [2024-10-13 01:46:06.443393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.939 [2024-10-13 01:46:06.443421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.939 [2024-10-13 01:46:06.448635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.939 [2024-10-13 01:46:06.448923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.939 [2024-10-13 01:46:06.448951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.939 [2024-10-13 01:46:06.454295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.939 [2024-10-13 01:46:06.454375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.939 [2024-10-13 01:46:06.454422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.939 [2024-10-13 01:46:06.461085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.939 [2024-10-13 01:46:06.461421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.939 
[2024-10-13 01:46:06.461450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.939 [2024-10-13 01:46:06.468574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.939 [2024-10-13 01:46:06.468890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.939 [2024-10-13 01:46:06.468919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.939 [2024-10-13 01:46:06.475177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.939 [2024-10-13 01:46:06.475504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.939 [2024-10-13 01:46:06.475534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.939 [2024-10-13 01:46:06.480867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.939 [2024-10-13 01:46:06.481185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.939 [2024-10-13 01:46:06.481215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.939 [2024-10-13 01:46:06.486769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.939 [2024-10-13 01:46:06.487092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.939 [2024-10-13 01:46:06.487120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:20.939 [2024-10-13 01:46:06.492261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.939 [2024-10-13 01:46:06.492569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.939 [2024-10-13 01:46:06.492598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:20.939 [2024-10-13 01:46:06.498362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.939 [2024-10-13 01:46:06.498727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.939 [2024-10-13 01:46:06.498757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:20.939 [2024-10-13 01:46:06.504556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.939 [2024-10-13 01:46:06.504853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:35:20.939 [2024-10-13 01:46:06.504881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:20.939 [2024-10-13 01:46:06.510116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:20.939 [2024-10-13 01:46:06.510420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:20.939 [2024-10-13 01:46:06.510448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.199 [2024-10-13 01:46:06.516055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.199 [2024-10-13 01:46:06.516364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.199 [2024-10-13 01:46:06.516393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.199 [2024-10-13 01:46:06.522444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.199 [2024-10-13 01:46:06.522762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.199 [2024-10-13 01:46:06.522791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.199 [2024-10-13 01:46:06.529091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.199 [2024-10-13 01:46:06.529393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.199 [2024-10-13 01:46:06.529420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.199 [2024-10-13 01:46:06.535593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.199 [2024-10-13 01:46:06.535902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.199 [2024-10-13 01:46:06.535931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.199 [2024-10-13 01:46:06.541877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.199 [2024-10-13 01:46:06.542263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.199 [2024-10-13 01:46:06.542292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.199 [2024-10-13 01:46:06.548675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.199 [2024-10-13 01:46:06.549075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.199 [2024-10-13 01:46:06.549107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.199 [2024-10-13 01:46:06.555592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.199 [2024-10-13 01:46:06.555887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.199 [2024-10-13 01:46:06.555915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.199 [2024-10-13 01:46:06.561663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.199 [2024-10-13 01:46:06.561959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.199 [2024-10-13 01:46:06.561987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.199 [2024-10-13 01:46:06.567380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.199 [2024-10-13 01:46:06.567677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.199 [2024-10-13 01:46:06.567706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.199 [2024-10-13 01:46:06.573084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.199 [2024-10-13 01:46:06.573425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.199 [2024-10-13 01:46:06.573454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.199 [2024-10-13 01:46:06.578624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.199 [2024-10-13 01:46:06.578915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.199 [2024-10-13 01:46:06.578943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.199 [2024-10-13 01:46:06.584664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.199 [2024-10-13 01:46:06.585033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.199 [2024-10-13 01:46:06.585061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.199 [2024-10-13 01:46:06.591301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.199 [2024-10-13 01:46:06.591627] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.199 [2024-10-13 01:46:06.591656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.199 [2024-10-13 01:46:06.597736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.199 [2024-10-13 01:46:06.598094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.199 [2024-10-13 01:46:06.598122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.199 [2024-10-13 01:46:06.604689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.199 [2024-10-13 01:46:06.604878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.199 [2024-10-13 01:46:06.604906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.199 [2024-10-13 01:46:06.611343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.199 [2024-10-13 01:46:06.611655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.199 [2024-10-13 01:46:06.611684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.199 [2024-10-13 01:46:06.616925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.199 [2024-10-13 01:46:06.617276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.199 [2024-10-13 01:46:06.617310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.199 [2024-10-13 01:46:06.622603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.199 [2024-10-13 01:46:06.622919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.199 [2024-10-13 01:46:06.622947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.199 [2024-10-13 01:46:06.628168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.199 [2024-10-13 01:46:06.628509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.199 [2024-10-13 01:46:06.628539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.199 [2024-10-13 01:46:06.634267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.199 
[2024-10-13 01:46:06.634587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.199 [2024-10-13 01:46:06.634615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.199 [2024-10-13 01:46:06.640895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.199 [2024-10-13 01:46:06.641211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.199 [2024-10-13 01:46:06.641241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.199 [2024-10-13 01:46:06.648548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.199 [2024-10-13 01:46:06.648838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.199 [2024-10-13 01:46:06.648866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.199 [2024-10-13 01:46:06.656035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.199 [2024-10-13 01:46:06.656393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.199 [2024-10-13 01:46:06.656426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.199 [2024-10-13 01:46:06.663265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.199 [2024-10-13 01:46:06.663674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.199 [2024-10-13 01:46:06.663703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.199 [2024-10-13 01:46:06.670013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.200 [2024-10-13 01:46:06.670378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.200 [2024-10-13 01:46:06.670406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.200 [2024-10-13 01:46:06.675823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.200 [2024-10-13 01:46:06.676162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.200 [2024-10-13 01:46:06.676206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.200 [2024-10-13 01:46:06.681316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.200 [2024-10-13 01:46:06.681662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.200 [2024-10-13 01:46:06.681691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.200 [2024-10-13 01:46:06.687082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.200 [2024-10-13 01:46:06.687386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.200 [2024-10-13 01:46:06.687415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.200 [2024-10-13 01:46:06.692490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.200 [2024-10-13 01:46:06.692846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.200 [2024-10-13 01:46:06.692874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.200 [2024-10-13 01:46:06.698488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.200 [2024-10-13 01:46:06.698871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.200 [2024-10-13 01:46:06.698899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.200 [2024-10-13 01:46:06.704800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.200 [2024-10-13 01:46:06.705101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.200 [2024-10-13 01:46:06.705129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.200 [2024-10-13 01:46:06.710945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.200 [2024-10-13 01:46:06.711249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.200 [2024-10-13 01:46:06.711276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.200 [2024-10-13 01:46:06.717379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.200 [2024-10-13 01:46:06.717723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.200 [2024-10-13 01:46:06.717752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.200 [2024-10-13 01:46:06.723756] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.200 [2024-10-13 01:46:06.724112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.200 [2024-10-13 01:46:06.724145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.200 [2024-10-13 01:46:06.729998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.200 [2024-10-13 01:46:06.730344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.200 [2024-10-13 01:46:06.730391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.200 [2024-10-13 01:46:06.735703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.200 [2024-10-13 01:46:06.736082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.200 [2024-10-13 01:46:06.736126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.200 [2024-10-13 01:46:06.741461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.200 [2024-10-13 01:46:06.741870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.200 [2024-10-13 01:46:06.741916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.200 [2024-10-13 01:46:06.747250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.200 [2024-10-13 01:46:06.747590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.200 [2024-10-13 01:46:06.747618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.200 [2024-10-13 01:46:06.753052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.200 [2024-10-13 01:46:06.753384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.200 [2024-10-13 01:46:06.753412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.200 [2024-10-13 01:46:06.758718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.200 [2024-10-13 01:46:06.759001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.200 [2024-10-13 01:46:06.759029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:35:21.200 [2024-10-13 01:46:06.764229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.200 [2024-10-13 01:46:06.764563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.200 [2024-10-13 01:46:06.764592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.200 [2024-10-13 01:46:06.769856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.200 [2024-10-13 01:46:06.770192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.200 [2024-10-13 01:46:06.770220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.200 [2024-10-13 01:46:06.775456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.200 [2024-10-13 01:46:06.775796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.200 [2024-10-13 01:46:06.775824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.460 [2024-10-13 01:46:06.781218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.460 [2024-10-13 01:46:06.781565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.460 [2024-10-13 01:46:06.781594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.460 [2024-10-13 01:46:06.786854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.460 [2024-10-13 01:46:06.787204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.460 [2024-10-13 01:46:06.787233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.460 [2024-10-13 01:46:06.792562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.460 [2024-10-13 01:46:06.792875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.460 [2024-10-13 01:46:06.792904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.460 [2024-10-13 01:46:06.798187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.460 [2024-10-13 01:46:06.798560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.460 [2024-10-13 01:46:06.798589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.460 [2024-10-13 01:46:06.803904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.460 [2024-10-13 01:46:06.804239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.460 [2024-10-13 01:46:06.804267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.460 [2024-10-13 01:46:06.810231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.460 [2024-10-13 01:46:06.810534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.460 [2024-10-13 01:46:06.810562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.460 4929.00 IOPS, 616.12 MiB/s [2024-10-12T23:46:07.038Z] [2024-10-13 01:46:06.817010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.460 [2024-10-13 01:46:06.817274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.460 [2024-10-13 01:46:06.817304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.460 [2024-10-13 01:46:06.821733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.460 [2024-10-13 01:46:06.821958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.460 [2024-10-13 01:46:06.821986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.460 [2024-10-13 01:46:06.826698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.460 [2024-10-13 01:46:06.826955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.460 [2024-10-13 01:46:06.826983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.460 [2024-10-13 01:46:06.832287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.460 [2024-10-13 01:46:06.832555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.460 [2024-10-13 01:46:06.832585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.460 [2024-10-13 01:46:06.838166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.460 [2024-10-13 01:46:06.838447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.460 [2024-10-13 01:46:06.838487] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.460 [2024-10-13 01:46:06.843964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.460 [2024-10-13 01:46:06.844213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.460 [2024-10-13 01:46:06.844243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.460 [2024-10-13 01:46:06.848732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.460 [2024-10-13 01:46:06.848993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.460 [2024-10-13 01:46:06.849022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.460 [2024-10-13 01:46:06.853572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.460 [2024-10-13 01:46:06.853798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.460 [2024-10-13 01:46:06.853826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.460 [2024-10-13 01:46:06.858276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.460 [2024-10-13 01:46:06.858519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.460 [2024-10-13 01:46:06.858548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.461 [2024-10-13 01:46:06.864102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.461 [2024-10-13 01:46:06.864364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.461 [2024-10-13 01:46:06.864393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.461 [2024-10-13 01:46:06.869448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.461 [2024-10-13 01:46:06.869690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.461 [2024-10-13 01:46:06.869725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.461 [2024-10-13 01:46:06.874327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.461 [2024-10-13 01:46:06.874573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:35:21.461 [2024-10-13 01:46:06.874603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.461 [2024-10-13 01:46:06.879206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.461 [2024-10-13 01:46:06.879540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.461 [2024-10-13 01:46:06.879570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.461 [2024-10-13 01:46:06.885318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.461 [2024-10-13 01:46:06.885630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.461 [2024-10-13 01:46:06.885659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.461 [2024-10-13 01:46:06.891092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.461 [2024-10-13 01:46:06.891325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.461 [2024-10-13 01:46:06.891353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.461 [2024-10-13 01:46:06.897392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.461 [2024-10-13 01:46:06.897681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.461 [2024-10-13 01:46:06.897709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.461 [2024-10-13 01:46:06.903852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.461 [2024-10-13 01:46:06.904121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.461 [2024-10-13 01:46:06.904150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.461 [2024-10-13 01:46:06.910253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.461 [2024-10-13 01:46:06.910524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.461 [2024-10-13 01:46:06.910553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.461 [2024-10-13 01:46:06.916528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.461 [2024-10-13 01:46:06.916772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.461 [2024-10-13 01:46:06.916800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.461 [2024-10-13 01:46:06.922573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.461 [2024-10-13 01:46:06.922806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.461 [2024-10-13 01:46:06.922834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.461 [2024-10-13 01:46:06.928114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.461 [2024-10-13 01:46:06.928365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.461 [2024-10-13 01:46:06.928393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.461 [2024-10-13 01:46:06.932842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.461 [2024-10-13 01:46:06.933088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.461 [2024-10-13 01:46:06.933118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.461 [2024-10-13 01:46:06.937527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.461 [2024-10-13 01:46:06.937770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.461 [2024-10-13 01:46:06.937798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.461 [2024-10-13 01:46:06.942197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.461 [2024-10-13 01:46:06.942443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.461 [2024-10-13 01:46:06.942479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.461 [2024-10-13 01:46:06.946845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.461 [2024-10-13 01:46:06.947117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.461 [2024-10-13 01:46:06.947146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.461 [2024-10-13 01:46:06.952087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.461 [2024-10-13 01:46:06.952442] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.461 [2024-10-13 01:46:06.952478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.461 [2024-10-13 01:46:06.957694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.461 [2024-10-13 01:46:06.957924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.461 [2024-10-13 01:46:06.957953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.461 [2024-10-13 01:46:06.962833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.461 [2024-10-13 01:46:06.963071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.461 [2024-10-13 01:46:06.963100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.461 [2024-10-13 01:46:06.967645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.461 [2024-10-13 01:46:06.967878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.461 [2024-10-13 01:46:06.967907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.461 [2024-10-13 01:46:06.972955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.461 [2024-10-13 01:46:06.973212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.461 [2024-10-13 01:46:06.973241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.461 [2024-10-13 01:46:06.978741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.461 [2024-10-13 01:46:06.979012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.461 [2024-10-13 01:46:06.979041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.461 [2024-10-13 01:46:06.984653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.461 [2024-10-13 01:46:06.984866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.461 [2024-10-13 01:46:06.984895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.461 [2024-10-13 01:46:06.989361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.461 [2024-10-13 01:46:06.989607] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.461 [2024-10-13 01:46:06.989636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.461 [2024-10-13 01:46:06.994166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.461 [2024-10-13 01:46:06.994399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.461 [2024-10-13 01:46:06.994428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.461 [2024-10-13 01:46:06.999069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.461 [2024-10-13 01:46:06.999306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.461 [2024-10-13 01:46:06.999334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.461 [2024-10-13 01:46:07.003989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.461 [2024-10-13 01:46:07.004221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.461 [2024-10-13 01:46:07.004250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.461 [2024-10-13 01:46:07.008885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.461 [2024-10-13 01:46:07.009127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.461 [2024-10-13 01:46:07.009164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.461 [2024-10-13 01:46:07.013670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.462 [2024-10-13 01:46:07.013881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.462 [2024-10-13 01:46:07.013909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.462 [2024-10-13 01:46:07.018489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.462 [2024-10-13 01:46:07.018826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.462 [2024-10-13 01:46:07.018855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.462 [2024-10-13 01:46:07.023718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 
00:35:21.462 [2024-10-13 01:46:07.024036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.462 [2024-10-13 01:46:07.024065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.462 [2024-10-13 01:46:07.029050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.462 [2024-10-13 01:46:07.029382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.462 [2024-10-13 01:46:07.029414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.462 [2024-10-13 01:46:07.035368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.462 [2024-10-13 01:46:07.035596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.462 [2024-10-13 01:46:07.035634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.721 [2024-10-13 01:46:07.040644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.721 [2024-10-13 01:46:07.040948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.721 [2024-10-13 01:46:07.040980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.721 [2024-10-13 01:46:07.045815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.721 [2024-10-13 01:46:07.046049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.721 [2024-10-13 01:46:07.046082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.721 [2024-10-13 01:46:07.051123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.721 [2024-10-13 01:46:07.051468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.721 [2024-10-13 01:46:07.051530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.721 [2024-10-13 01:46:07.056514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.721 [2024-10-13 01:46:07.056738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.721 [2024-10-13 01:46:07.056766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.721 [2024-10-13 01:46:07.061703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.721 [2024-10-13 01:46:07.061945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.721 [2024-10-13 01:46:07.061977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.721 [2024-10-13 01:46:07.066904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.721 [2024-10-13 01:46:07.067168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.721 [2024-10-13 01:46:07.067199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.721 [2024-10-13 01:46:07.072158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.721 [2024-10-13 01:46:07.072367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.721 [2024-10-13 01:46:07.072399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.721 [2024-10-13 01:46:07.077258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.721 [2024-10-13 01:46:07.077503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.721 [2024-10-13 01:46:07.077549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.721 [2024-10-13 01:46:07.082614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.721 [2024-10-13 01:46:07.082834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.721 [2024-10-13 01:46:07.082865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.721 [2024-10-13 01:46:07.087983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.721 [2024-10-13 01:46:07.088274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.721 [2024-10-13 01:46:07.088306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.721 [2024-10-13 01:46:07.093385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.721 [2024-10-13 01:46:07.093628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.721 [2024-10-13 01:46:07.093657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.722 [2024-10-13 01:46:07.098823] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.722 [2024-10-13 01:46:07.099064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.722 [2024-10-13 01:46:07.099103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.722 [2024-10-13 01:46:07.104148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.722 [2024-10-13 01:46:07.104329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.722 [2024-10-13 01:46:07.104360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.722 [2024-10-13 01:46:07.109133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.722 [2024-10-13 01:46:07.109333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.722 [2024-10-13 01:46:07.109364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.722 [2024-10-13 01:46:07.114027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.722 [2024-10-13 01:46:07.114245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.722 [2024-10-13 01:46:07.114276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.722 [2024-10-13 01:46:07.118813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.722 [2024-10-13 01:46:07.119025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.722 [2024-10-13 01:46:07.119056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.722 [2024-10-13 01:46:07.123677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.722 [2024-10-13 01:46:07.123859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.722 [2024-10-13 01:46:07.123890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.722 [2024-10-13 01:46:07.128498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.722 [2024-10-13 01:46:07.128701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.722 [2024-10-13 01:46:07.128729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:35:21.722 [2024-10-13 01:46:07.133337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.722 [2024-10-13 01:46:07.133567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.722 [2024-10-13 01:46:07.133596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.722 [2024-10-13 01:46:07.139986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.722 [2024-10-13 01:46:07.140264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.722 [2024-10-13 01:46:07.140297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.722 [2024-10-13 01:46:07.145964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.722 [2024-10-13 01:46:07.146137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.722 [2024-10-13 01:46:07.146168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.722 [2024-10-13 01:46:07.151744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.722 [2024-10-13 01:46:07.151978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.722 [2024-10-13 01:46:07.152009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.722 [2024-10-13 01:46:07.157439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.722 [2024-10-13 01:46:07.157652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.722 [2024-10-13 01:46:07.157681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.722 [2024-10-13 01:46:07.163311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.722 [2024-10-13 01:46:07.163562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.722 [2024-10-13 01:46:07.163591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.722 [2024-10-13 01:46:07.169190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.722 [2024-10-13 01:46:07.169382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.722 [2024-10-13 01:46:07.169414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.722 [2024-10-13 01:46:07.175226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.722 [2024-10-13 01:46:07.175450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.722 [2024-10-13 01:46:07.175493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.722 [2024-10-13 01:46:07.180799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.722 [2024-10-13 01:46:07.181007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.722 [2024-10-13 01:46:07.181039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.722 [2024-10-13 01:46:07.186225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.722 [2024-10-13 01:46:07.186410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.722 [2024-10-13 01:46:07.186442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.722 [2024-10-13 01:46:07.191384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.722 [2024-10-13 01:46:07.191586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.722 [2024-10-13 01:46:07.191614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.722 [2024-10-13 01:46:07.196486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.722 [2024-10-13 01:46:07.196677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.722 [2024-10-13 01:46:07.196705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.722 [2024-10-13 01:46:07.201595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.722 [2024-10-13 01:46:07.201768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.722 [2024-10-13 01:46:07.201796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.722 [2024-10-13 01:46:07.206534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.722 [2024-10-13 01:46:07.206706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.722 [2024-10-13 01:46:07.206734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.722 [2024-10-13 01:46:07.211855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.722 [2024-10-13 01:46:07.212041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.722 [2024-10-13 01:46:07.212072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.722 [2024-10-13 01:46:07.216976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.722 [2024-10-13 01:46:07.217152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.722 [2024-10-13 01:46:07.217183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.722 [2024-10-13 01:46:07.222171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.722 [2024-10-13 01:46:07.222335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.722 [2024-10-13 01:46:07.222366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.722 [2024-10-13 01:46:07.227443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.722 [2024-10-13 01:46:07.227625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.722 [2024-10-13 01:46:07.227654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.722 [2024-10-13 01:46:07.232763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.722 [2024-10-13 01:46:07.232964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.722 [2024-10-13 01:46:07.232995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.722 [2024-10-13 01:46:07.237911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.722 [2024-10-13 01:46:07.238100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.722 [2024-10-13 01:46:07.238138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.722 [2024-10-13 01:46:07.243129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.722 [2024-10-13 01:46:07.243306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.722 [2024-10-13 01:46:07.243337] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.722 [2024-10-13 01:46:07.248330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.722 [2024-10-13 01:46:07.248537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.723 [2024-10-13 01:46:07.248566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.723 [2024-10-13 01:46:07.253907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.723 [2024-10-13 01:46:07.254075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.723 [2024-10-13 01:46:07.254106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.723 [2024-10-13 01:46:07.260017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.723 [2024-10-13 01:46:07.260204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.723 [2024-10-13 01:46:07.260235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.723 [2024-10-13 01:46:07.265110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.723 [2024-10-13 01:46:07.265382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.723 [2024-10-13 01:46:07.265413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.723 [2024-10-13 01:46:07.270682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.723 [2024-10-13 01:46:07.270888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.723 [2024-10-13 01:46:07.270919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.723 [2024-10-13 01:46:07.276177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.723 [2024-10-13 01:46:07.276456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.723 [2024-10-13 01:46:07.276497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.723 [2024-10-13 01:46:07.281310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.723 [2024-10-13 01:46:07.281518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.723 
[2024-10-13 01:46:07.281547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.723 [2024-10-13 01:46:07.286948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.723 [2024-10-13 01:46:07.287257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.723 [2024-10-13 01:46:07.287289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.723 [2024-10-13 01:46:07.292250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.723 [2024-10-13 01:46:07.292451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.723 [2024-10-13 01:46:07.292494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.723 [2024-10-13 01:46:07.297613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.723 [2024-10-13 01:46:07.297847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.723 [2024-10-13 01:46:07.297878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.983 [2024-10-13 01:46:07.302503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.983 [2024-10-13 01:46:07.302726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.983 [2024-10-13 01:46:07.302755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.983 [2024-10-13 01:46:07.308007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.983 [2024-10-13 01:46:07.308321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.983 [2024-10-13 01:46:07.308353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.983 [2024-10-13 01:46:07.313410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.983 [2024-10-13 01:46:07.313635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.983 [2024-10-13 01:46:07.313663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.983 [2024-10-13 01:46:07.319154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.983 [2024-10-13 01:46:07.319340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.983 [2024-10-13 01:46:07.319372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.983 [2024-10-13 01:46:07.325366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.983 [2024-10-13 01:46:07.325663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.983 [2024-10-13 01:46:07.325691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.983 [2024-10-13 01:46:07.331878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.983 [2024-10-13 01:46:07.332114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.983 [2024-10-13 01:46:07.332145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.983 [2024-10-13 01:46:07.337063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.983 [2024-10-13 01:46:07.337241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.983 [2024-10-13 01:46:07.337273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.983 [2024-10-13 01:46:07.341970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.983 [2024-10-13 01:46:07.342158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.983 [2024-10-13 01:46:07.342189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.983 [2024-10-13 01:46:07.346833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.983 [2024-10-13 01:46:07.347061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.983 [2024-10-13 01:46:07.347092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.983 [2024-10-13 01:46:07.351937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.983 [2024-10-13 01:46:07.352131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.983 [2024-10-13 01:46:07.352162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.983 [2024-10-13 01:46:07.357449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.983 [2024-10-13 01:46:07.357676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.983 [2024-10-13 01:46:07.357705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.983 [2024-10-13 01:46:07.362992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.983 [2024-10-13 01:46:07.363252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.983 [2024-10-13 01:46:07.363283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.983 [2024-10-13 01:46:07.369297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.983 [2024-10-13 01:46:07.369554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.983 [2024-10-13 01:46:07.369584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.983 [2024-10-13 01:46:07.375135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.983 [2024-10-13 01:46:07.375346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.983 [2024-10-13 01:46:07.375379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.983 [2024-10-13 01:46:07.380728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.983 [2024-10-13 01:46:07.380947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.983 [2024-10-13 01:46:07.380987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.983 [2024-10-13 01:46:07.385781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.983 [2024-10-13 01:46:07.385982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.983 [2024-10-13 01:46:07.386013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.983 [2024-10-13 01:46:07.390572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.983 [2024-10-13 01:46:07.390813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.983 [2024-10-13 01:46:07.390843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.983 [2024-10-13 01:46:07.395283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.983 [2024-10-13 01:46:07.395481] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.983 [2024-10-13 01:46:07.395526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.983 [2024-10-13 01:46:07.400073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.983 [2024-10-13 01:46:07.400256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.983 [2024-10-13 01:46:07.400287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.983 [2024-10-13 01:46:07.405736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.983 [2024-10-13 01:46:07.405980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.983 [2024-10-13 01:46:07.406011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.983 [2024-10-13 01:46:07.411736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.983 [2024-10-13 01:46:07.411937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.983 [2024-10-13 01:46:07.411968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.983 [2024-10-13 01:46:07.416706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.983 [2024-10-13 01:46:07.416896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.983 [2024-10-13 01:46:07.416927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.983 [2024-10-13 01:46:07.421445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.983 [2024-10-13 01:46:07.421644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.983 [2024-10-13 01:46:07.421672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.983 [2024-10-13 01:46:07.426670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.983 [2024-10-13 01:46:07.426930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.983 [2024-10-13 01:46:07.426961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.983 [2024-10-13 01:46:07.432178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.984 
[2024-10-13 01:46:07.432411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.984 [2024-10-13 01:46:07.432442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.984 [2024-10-13 01:46:07.437230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.984 [2024-10-13 01:46:07.437413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.984 [2024-10-13 01:46:07.437445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.984 [2024-10-13 01:46:07.442338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.984 [2024-10-13 01:46:07.442614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.984 [2024-10-13 01:46:07.442643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.984 [2024-10-13 01:46:07.447283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.984 [2024-10-13 01:46:07.447513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.984 [2024-10-13 01:46:07.447558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.984 [2024-10-13 01:46:07.452880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.984 [2024-10-13 01:46:07.453108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.984 [2024-10-13 01:46:07.453139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.984 [2024-10-13 01:46:07.459171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.984 [2024-10-13 01:46:07.459328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.984 [2024-10-13 01:46:07.459359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.984 [2024-10-13 01:46:07.465440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.984 [2024-10-13 01:46:07.465654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.984 [2024-10-13 01:46:07.465683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.984 [2024-10-13 01:46:07.472213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.984 [2024-10-13 01:46:07.472364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.984 [2024-10-13 01:46:07.472395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.984 [2024-10-13 01:46:07.478844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.984 [2024-10-13 01:46:07.478989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.984 [2024-10-13 01:46:07.479021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.984 [2024-10-13 01:46:07.484534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.984 [2024-10-13 01:46:07.484638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.984 [2024-10-13 01:46:07.484666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.984 [2024-10-13 01:46:07.490217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.984 [2024-10-13 01:46:07.490339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.984 [2024-10-13 01:46:07.490370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.984 [2024-10-13 01:46:07.495638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.984 [2024-10-13 01:46:07.495777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.984 [2024-10-13 01:46:07.495808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.984 [2024-10-13 01:46:07.500433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.984 [2024-10-13 01:46:07.500577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.984 [2024-10-13 01:46:07.500605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.984 [2024-10-13 01:46:07.505323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.984 [2024-10-13 01:46:07.505534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.984 [2024-10-13 01:46:07.505562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.984 [2024-10-13 01:46:07.510876] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.984 [2024-10-13 01:46:07.511023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.984 [2024-10-13 01:46:07.511054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.984 [2024-10-13 01:46:07.516588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.984 [2024-10-13 01:46:07.516781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.984 [2024-10-13 01:46:07.516811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.984 [2024-10-13 01:46:07.521993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.984 [2024-10-13 01:46:07.522230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.984 [2024-10-13 01:46:07.522261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:21.984 [2024-10-13 01:46:07.528263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.984 [2024-10-13 01:46:07.528508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.984 [2024-10-13 01:46:07.528553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.984 [2024-10-13 01:46:07.533775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.984 [2024-10-13 01:46:07.533955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.984 [2024-10-13 01:46:07.533985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:21.984 [2024-10-13 01:46:07.539483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.984 [2024-10-13 01:46:07.539658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.984 [2024-10-13 01:46:07.539686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:21.984 [2024-10-13 01:46:07.544601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.984 [2024-10-13 01:46:07.544816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.984 [2024-10-13 01:46:07.544846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:35:21.984 [2024-10-13 01:46:07.550081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.984 [2024-10-13 01:46:07.550263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.984 [2024-10-13 01:46:07.550295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:21.984 [2024-10-13 01:46:07.555667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:21.984 [2024-10-13 01:46:07.555861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.984 [2024-10-13 01:46:07.555891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.244 [2024-10-13 01:46:07.561114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:22.244 [2024-10-13 01:46:07.561365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.244 [2024-10-13 01:46:07.561398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.244 [2024-10-13 01:46:07.566738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:22.244 [2024-10-13 01:46:07.566930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.244 [2024-10-13 01:46:07.566961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.244 [2024-10-13 01:46:07.572241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:22.244 [2024-10-13 01:46:07.572432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.244 [2024-10-13 01:46:07.572462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.244 [2024-10-13 01:46:07.577705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:22.244 [2024-10-13 01:46:07.577890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.244 [2024-10-13 01:46:07.577923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.244 [2024-10-13 01:46:07.583170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:22.244 [2024-10-13 01:46:07.583432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.244 [2024-10-13 01:46:07.583463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.244 [2024-10-13 01:46:07.588979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:22.244 [2024-10-13 01:46:07.589213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.244 [2024-10-13 01:46:07.589245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.244 [2024-10-13 01:46:07.594516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:22.244 [2024-10-13 01:46:07.594665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.244 [2024-10-13 01:46:07.594693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.244 [2024-10-13 01:46:07.600235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:22.244 [2024-10-13 01:46:07.600381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.244 [2024-10-13 01:46:07.600411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.244 [2024-10-13 01:46:07.605855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:22.244 [2024-10-13 01:46:07.606012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.244 [2024-10-13 01:46:07.606043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.244 [2024-10-13 01:46:07.611031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:22.244 [2024-10-13 01:46:07.611239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.244 [2024-10-13 01:46:07.611270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.244 [2024-10-13 01:46:07.616374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:22.244 [2024-10-13 01:46:07.616537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.244 [2024-10-13 01:46:07.616572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.244 [2024-10-13 01:46:07.621336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:22.245 [2024-10-13 01:46:07.621562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.245 [2024-10-13 01:46:07.621591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.245 [2024-10-13 01:46:07.626477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:22.245 [2024-10-13 01:46:07.626688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.245 [2024-10-13 01:46:07.626716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.245 [2024-10-13 01:46:07.631557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:22.245 [2024-10-13 01:46:07.631765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.245 [2024-10-13 01:46:07.631811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.245 [2024-10-13 01:46:07.636364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:22.245 [2024-10-13 01:46:07.636605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.245 [2024-10-13 01:46:07.636633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.245 [2024-10-13 01:46:07.641616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:22.245 [2024-10-13 01:46:07.641836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.245 [2024-10-13 01:46:07.641867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.245 [2024-10-13 01:46:07.646957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:22.245 [2024-10-13 01:46:07.647210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.245 [2024-10-13 01:46:07.647240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.245 [2024-10-13 01:46:07.652267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:22.245 [2024-10-13 01:46:07.652420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.245 [2024-10-13 01:46:07.652451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.245 [2024-10-13 01:46:07.657598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:22.245 [2024-10-13 01:46:07.657762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.245 [2024-10-13 01:46:07.657809] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.245 [2024-10-13 01:46:07.662793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:22.245 [2024-10-13 01:46:07.663001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.245 [2024-10-13 01:46:07.663032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.245 [2024-10-13 01:46:07.668086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:22.245 [2024-10-13 01:46:07.668331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.245 [2024-10-13 01:46:07.668363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.245 [2024-10-13 01:46:07.673324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:22.245 [2024-10-13 01:46:07.673540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.245 [2024-10-13 01:46:07.673568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.245 [2024-10-13 01:46:07.678658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:22.245 [2024-10-13 01:46:07.678884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.245 [2024-10-13 01:46:07.678915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.245 [2024-10-13 01:46:07.683832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:22.245 [2024-10-13 01:46:07.684063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.245 [2024-10-13 01:46:07.684091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.245 [2024-10-13 01:46:07.689020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:22.245 [2024-10-13 01:46:07.689265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.245 [2024-10-13 01:46:07.689293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.245 [2024-10-13 01:46:07.693802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:22.245 [2024-10-13 01:46:07.693954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.245 
[2024-10-13 01:46:07.693982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.245 [2024-10-13 01:46:07.698324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:22.245 [2024-10-13 01:46:07.698536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.245 [2024-10-13 01:46:07.698565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.245 [2024-10-13 01:46:07.703410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:22.245 [2024-10-13 01:46:07.703592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.245 [2024-10-13 01:46:07.703621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.245 [2024-10-13 01:46:07.708591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:22.245 [2024-10-13 01:46:07.708737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.245 [2024-10-13 01:46:07.708766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.245 [2024-10-13 01:46:07.714059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:22.245 [2024-10-13 01:46:07.714187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.245 [2024-10-13 01:46:07.714218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.245 [2024-10-13 01:46:07.720239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:22.245 [2024-10-13 01:46:07.720422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.245 [2024-10-13 01:46:07.720452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.245 [2024-10-13 01:46:07.726098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:22.245 [2024-10-13 01:46:07.726249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.245 [2024-10-13 01:46:07.726280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.245 [2024-10-13 01:46:07.731885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:22.245 [2024-10-13 01:46:07.732119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.245 [2024-10-13 01:46:07.732150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.245 [2024-10-13 01:46:07.737569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:22.245 [2024-10-13 01:46:07.737809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.245 [2024-10-13 01:46:07.737839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.245 [2024-10-13 01:46:07.743255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:22.245 [2024-10-13 01:46:07.743447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.245 [2024-10-13 01:46:07.743487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.245 [2024-10-13 01:46:07.748855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:22.245 [2024-10-13 01:46:07.749019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.245 [2024-10-13 01:46:07.749050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.245 [2024-10-13 01:46:07.754581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:22.245 [2024-10-13 01:46:07.754776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.245 [2024-10-13 01:46:07.754814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.245 [2024-10-13 01:46:07.760124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:22.245 [2024-10-13 01:46:07.760246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.245 [2024-10-13 01:46:07.760277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.245 [2024-10-13 01:46:07.765306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:22.245 [2024-10-13 01:46:07.765445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.245 [2024-10-13 01:46:07.765483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.245 [2024-10-13 01:46:07.770532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:22.245 [2024-10-13 01:46:07.770726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.246 [2024-10-13 01:46:07.770754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.246 [2024-10-13 01:46:07.776388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:22.246 [2024-10-13 01:46:07.776629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.246 [2024-10-13 01:46:07.776658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.246 [2024-10-13 01:46:07.782520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:22.246 [2024-10-13 01:46:07.782678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.246 [2024-10-13 01:46:07.782705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.246 [2024-10-13 01:46:07.788877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:22.246 [2024-10-13 01:46:07.789045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.246 [2024-10-13 01:46:07.789077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.246 [2024-10-13 01:46:07.795016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:22.246 [2024-10-13 01:46:07.795189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.246 [2024-10-13 01:46:07.795220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.246 [2024-10-13 01:46:07.800335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:22.246 [2024-10-13 01:46:07.800463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.246 [2024-10-13 01:46:07.800518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:22.246 [2024-10-13 01:46:07.805455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:22.246 [2024-10-13 01:46:07.805604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.246 [2024-10-13 01:46:07.805631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:22.246 [2024-10-13 01:46:07.810221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:22.246 [2024-10-13 01:46:07.810336] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.246 [2024-10-13 01:46:07.810366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:22.246 5341.50 IOPS, 667.69 MiB/s [2024-10-12T23:46:07.824Z] [2024-10-13 01:46:07.816111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b12880) with pdu=0x2000166fef90 00:35:22.246 [2024-10-13 01:46:07.816186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:22.246 [2024-10-13 01:46:07.816216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:22.246 00:35:22.246 Latency(us) 00:35:22.246 [2024-10-12T23:46:07.824Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:22.246 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:22.246 nvme0n1 : 2.00 5339.80 667.47 0.00 0.00 2988.61 2075.31 12233.39 00:35:22.246 [2024-10-12T23:46:07.824Z] =================================================================================================================== 00:35:22.246 [2024-10-12T23:46:07.824Z] Total : 5339.80 667.47 0.00 0.00 2988.61 2075.31 12233.39 00:35:22.246 { 00:35:22.246 "results": [ 00:35:22.246 { 00:35:22.246 "job": "nvme0n1", 00:35:22.246 "core_mask": "0x2", 00:35:22.246 "workload": "randwrite", 00:35:22.246 "status": "finished", 00:35:22.246 "queue_depth": 16, 00:35:22.246 "io_size": 131072, 00:35:22.246 "runtime": 2.003634, 00:35:22.246 "iops": 5339.797587782999, 00:35:22.246 "mibps": 667.4746984728748, 00:35:22.246 "io_failed": 0, 00:35:22.246 "io_timeout": 0, 00:35:22.246 "avg_latency_us": 2988.6086201202606, 00:35:22.246 "min_latency_us": 2075.306666666667, 00:35:22.246 "max_latency_us": 12233.386666666667 00:35:22.246 } 00:35:22.246 ], 00:35:22.246 "core_count": 1 00:35:22.246 } 00:35:22.515 01:46:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:22.515 01:46:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:22.515 01:46:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:22.515 01:46:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:22.515 | .driver_specific 00:35:22.515 | .nvme_error 00:35:22.515 | .status_code 00:35:22.515 | .command_transient_transport_error' 00:35:22.773 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 345 > 0 )) 00:35:22.773 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1756834 00:35:22.773 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1756834 ']' 00:35:22.773 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1756834 00:35:22.773 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:35:22.773 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = 
Linux ']' 00:35:22.773 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1756834 00:35:22.773 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:22.773 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:22.773 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1756834' 00:35:22.773 killing process with pid 1756834 00:35:22.773 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1756834 00:35:22.773 Received shutdown signal, test time was about 2.000000 seconds 00:35:22.773 00:35:22.773 Latency(us) 00:35:22.773 [2024-10-12T23:46:08.351Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:22.773 [2024-10-12T23:46:08.351Z] =================================================================================================================== 00:35:22.773 [2024-10-12T23:46:08.351Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:22.773 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1756834 00:35:23.031 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1755472 00:35:23.031 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1755472 ']' 00:35:23.031 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1755472 00:35:23.031 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:35:23.031 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:23.031 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1755472 00:35:23.031 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:23.031 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:23.031 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1755472' 00:35:23.031 killing process with pid 1755472 00:35:23.031 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1755472 00:35:23.031 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1755472 00:35:23.289 00:35:23.289 real 0m15.065s 00:35:23.289 user 0m30.021s 00:35:23.289 sys 0m4.299s 00:35:23.289 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:23.289 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:23.289 ************************************ 00:35:23.289 END TEST nvmf_digest_error 00:35:23.289 ************************************ 00:35:23.289 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:35:23.289 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:35:23.289 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # nvmfcleanup 
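For reference, the pass/fail gate traced a little further up reduces to one RPC call plus a jq lookup: bdev_get_iostat is queried over the bperf RPC socket and the transient transport error counter must be non-zero. A minimal bash sketch of that check, assuming an SPDK checkout as the working directory; the socket path, bdev name, jq path and variable names are taken from (or added for) illustration, not replayed from this run:

    SOCK=/var/tmp/bperf.sock
    # Ask the bdevperf app for per-bdev I/O statistics, then pull out the count of
    # COMMAND TRANSIENT TRANSPORT ERROR completions recorded for nvme0n1.
    errcount=$(./scripts/rpc.py -s "$SOCK" bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    # The digest-error test expects the injected CRC corruption to surface here,
    # so a count greater than zero (345 in the run above) means the check passes.
    (( errcount > 0 ))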
00:35:23.289 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:35:23.289 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:23.289 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:35:23.289 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:23.289 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:23.289 rmmod nvme_tcp 00:35:23.289 rmmod nvme_fabrics 00:35:23.289 rmmod nvme_keyring 00:35:23.289 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:23.289 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:35:23.289 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:35:23.289 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@515 -- # '[' -n 1755472 ']' 00:35:23.289 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # killprocess 1755472 00:35:23.289 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 1755472 ']' 00:35:23.289 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 1755472 00:35:23.289 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1755472) - No such process 00:35:23.289 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 1755472 is not found' 00:35:23.289 Process with pid 1755472 is not found 00:35:23.289 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:23.289 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:23.289 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:23.289 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:35:23.289 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-save 00:35:23.289 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:23.289 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-restore 00:35:23.289 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:23.289 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:23.289 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:23.289 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:23.289 01:46:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:25.193 01:46:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:25.193 00:35:25.193 real 0m35.580s 00:35:25.193 user 1m2.590s 00:35:25.193 sys 0m10.293s 00:35:25.193 01:46:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:25.193 01:46:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:25.193 ************************************ 00:35:25.193 END TEST nvmf_digest 00:35:25.193 ************************************ 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 
00:35:25.452 01:46:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.452 ************************************ 00:35:25.452 START TEST nvmf_bdevperf 00:35:25.452 ************************************ 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:25.452 * Looking for test storage... 00:35:25.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:25.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:25.452 --rc genhtml_branch_coverage=1 00:35:25.452 --rc genhtml_function_coverage=1 00:35:25.452 --rc genhtml_legend=1 00:35:25.452 --rc geninfo_all_blocks=1 00:35:25.452 --rc geninfo_unexecuted_blocks=1 00:35:25.452 00:35:25.452 ' 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:25.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:25.452 --rc genhtml_branch_coverage=1 00:35:25.452 --rc genhtml_function_coverage=1 00:35:25.452 --rc genhtml_legend=1 00:35:25.452 --rc geninfo_all_blocks=1 00:35:25.452 --rc geninfo_unexecuted_blocks=1 00:35:25.452 00:35:25.452 ' 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:25.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:25.452 --rc genhtml_branch_coverage=1 00:35:25.452 --rc genhtml_function_coverage=1 00:35:25.452 --rc genhtml_legend=1 00:35:25.452 --rc geninfo_all_blocks=1 00:35:25.452 --rc geninfo_unexecuted_blocks=1 00:35:25.452 00:35:25.452 ' 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:25.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:25.452 --rc genhtml_branch_coverage=1 00:35:25.452 --rc genhtml_function_coverage=1 00:35:25.452 --rc genhtml_legend=1 00:35:25.452 --rc geninfo_all_blocks=1 00:35:25.452 --rc geninfo_unexecuted_blocks=1 00:35:25.452 00:35:25.452 ' 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:25.452 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:25.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:25.453 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:25.453 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:25.453 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:25.453 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:25.453 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:25.453 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:35:25.453 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:25.453 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:25.453 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:25.453 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:25.453 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:25.453 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:25.453 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:25.453 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:25.453 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:25.453 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:25.453 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:35:25.453 01:46:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:27.985 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:27.985 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:27.985 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:27.985 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # is_hw=yes 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:27.985 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:27.985 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:27.985 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.365 ms 00:35:27.985 00:35:27.985 --- 10.0.0.2 ping statistics --- 00:35:27.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:27.985 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:35:27.986 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:27.986 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:27.986 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:35:27.986 00:35:27.986 --- 10.0.0.1 ping statistics --- 00:35:27.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:27.986 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:35:27.986 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:27.986 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # return 0 00:35:27.986 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:35:27.986 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:27.986 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:27.986 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:27.986 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:27.986 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:27.986 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:27.986 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:35:27.986 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:27.986 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:27.986 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:27.986 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:27.986 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=1759190 00:35:27.986 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:27.986 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 1759190 00:35:27.986 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1759190 ']' 00:35:27.986 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:27.986 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:27.986 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:27.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:27.986 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:27.986 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:27.986 [2024-10-13 01:46:13.262033] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
00:35:27.986 [2024-10-13 01:46:13.262120] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:27.986 [2024-10-13 01:46:13.327105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:27.986 [2024-10-13 01:46:13.375701] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:27.986 [2024-10-13 01:46:13.375760] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:27.986 [2024-10-13 01:46:13.375773] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:27.986 [2024-10-13 01:46:13.375785] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:27.986 [2024-10-13 01:46:13.375794] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:27.986 [2024-10-13 01:46:13.377355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:27.986 [2024-10-13 01:46:13.377419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:27.986 [2024-10-13 01:46:13.377422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:27.986 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:27.986 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:35:27.986 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:27.986 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:27.986 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:27.986 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:27.986 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:27.986 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.986 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:27.986 [2024-10-13 01:46:13.532967] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:27.986 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.986 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:27.986 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.986 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:28.244 Malloc0 00:35:28.244 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.244 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:28.244 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.244 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:28.244 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
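Note: for readers reconstructing this environment by hand, the interface-to-namespace plumbing that the nvmf/common.sh trace above performs reduces to the short sketch below. It assumes the cvl_0_0/cvl_0_1 interface pair and the 10.0.0.1/10.0.0.2 addresses shown in the trace, and it leaves out the script's interface discovery, comments on the iptables rule, and error handling; nvmf_tgt is then launched inside the new namespace, as the "ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xE" line above shows.

# Sketch of the target-side network setup traced above. Assumes cvl_0_0 and
# cvl_0_1 are an already-connected interface pair on this host.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                       # namespace that will host nvmf_tgt
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # accept inbound TCP 4420 (NVMe/TCP) on cvl_0_1
ping -c 1 10.0.0.2                                 # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target namespace -> root namespace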
00:35:28.244 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:28.244 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.244 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:28.244 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.244 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:28.244 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.244 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:28.244 [2024-10-13 01:46:13.595873] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:28.244 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.244 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:35:28.244 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:35:28.244 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:35:28.244 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:35:28.244 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:28.244 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:28.244 { 00:35:28.244 "params": { 00:35:28.244 "name": "Nvme$subsystem", 00:35:28.244 "trtype": "$TEST_TRANSPORT", 00:35:28.244 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:28.245 "adrfam": "ipv4", 00:35:28.245 "trsvcid": "$NVMF_PORT", 00:35:28.245 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:28.245 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:28.245 "hdgst": ${hdgst:-false}, 00:35:28.245 "ddgst": ${ddgst:-false} 00:35:28.245 }, 00:35:28.245 "method": "bdev_nvme_attach_controller" 00:35:28.245 } 00:35:28.245 EOF 00:35:28.245 )") 00:35:28.245 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:35:28.245 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 00:35:28.245 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:35:28.245 01:46:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:35:28.245 "params": { 00:35:28.245 "name": "Nvme1", 00:35:28.245 "trtype": "tcp", 00:35:28.245 "traddr": "10.0.0.2", 00:35:28.245 "adrfam": "ipv4", 00:35:28.245 "trsvcid": "4420", 00:35:28.245 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:28.245 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:28.245 "hdgst": false, 00:35:28.245 "ddgst": false 00:35:28.245 }, 00:35:28.245 "method": "bdev_nvme_attach_controller" 00:35:28.245 }' 00:35:28.245 [2024-10-13 01:46:13.646990] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
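Note: the rpc_cmd calls above are effectively scripts/rpc.py invocations, and the JSON fragment printed by gen_nvmf_target_json is what bdevperf consumes through --json. The sketch below, run from the SPDK repository root, replays the same configuration outside the harness; it assumes rpc.py's default /var/tmp/spdk.sock socket (the one waitforlisten polls above) and the standard "subsystems"/"bdev" envelope around the attach_controller entry, which the trace itself does not show. Method names, flags and parameter values are copied from the trace.

# Target configuration, equivalent to the rpc_cmd sequence above (the RPC
# socket is a UNIX-domain socket, so this works from the root namespace).
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator configuration for bdevperf: the attach_controller entry printed
# above, wrapped in the assumed bdev-subsystem envelope.
cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf --json /tmp/bdevperf.json -q 128 -o 4096 -w verify -t 1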
00:35:28.245 [2024-10-13 01:46:13.647071] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1759219 ] 00:35:28.245 [2024-10-13 01:46:13.706198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:28.245 [2024-10-13 01:46:13.755278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:28.503 Running I/O for 1 seconds... 00:35:29.436 8553.00 IOPS, 33.41 MiB/s 00:35:29.436 Latency(us) 00:35:29.436 [2024-10-12T23:46:15.014Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:29.436 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:29.436 Verification LBA range: start 0x0 length 0x4000 00:35:29.436 Nvme1n1 : 1.01 8595.06 33.57 0.00 0.00 14824.48 2694.26 14369.37 00:35:29.436 [2024-10-12T23:46:15.014Z] =================================================================================================================== 00:35:29.436 [2024-10-12T23:46:15.014Z] Total : 8595.06 33.57 0.00 0.00 14824.48 2694.26 14369.37 00:35:29.694 01:46:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1759475 00:35:29.694 01:46:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:35:29.694 01:46:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:35:29.694 01:46:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:35:29.694 01:46:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:35:29.694 01:46:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:35:29.694 01:46:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:35:29.694 01:46:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:35:29.694 { 00:35:29.694 "params": { 00:35:29.694 "name": "Nvme$subsystem", 00:35:29.694 "trtype": "$TEST_TRANSPORT", 00:35:29.694 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:29.694 "adrfam": "ipv4", 00:35:29.694 "trsvcid": "$NVMF_PORT", 00:35:29.694 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:29.694 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:29.694 "hdgst": ${hdgst:-false}, 00:35:29.694 "ddgst": ${ddgst:-false} 00:35:29.694 }, 00:35:29.694 "method": "bdev_nvme_attach_controller" 00:35:29.694 } 00:35:29.694 EOF 00:35:29.694 )") 00:35:29.694 01:46:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:35:29.694 01:46:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 
00:35:29.694 01:46:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:35:29.694 01:46:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:35:29.694 "params": { 00:35:29.694 "name": "Nvme1", 00:35:29.694 "trtype": "tcp", 00:35:29.694 "traddr": "10.0.0.2", 00:35:29.694 "adrfam": "ipv4", 00:35:29.694 "trsvcid": "4420", 00:35:29.694 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:29.694 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:29.694 "hdgst": false, 00:35:29.694 "ddgst": false 00:35:29.694 }, 00:35:29.694 "method": "bdev_nvme_attach_controller" 00:35:29.694 }' 00:35:29.694 [2024-10-13 01:46:15.234818] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:35:29.694 [2024-10-13 01:46:15.234915] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1759475 ] 00:35:29.952 [2024-10-13 01:46:15.296433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:29.952 [2024-10-13 01:46:15.342954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:30.210 Running I/O for 15 seconds... 00:35:32.076 8501.00 IOPS, 33.21 MiB/s [2024-10-12T23:46:18.222Z] 8626.00 IOPS, 33.70 MiB/s [2024-10-12T23:46:18.222Z] 01:46:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1759190 00:35:32.644 01:46:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:35:32.644 [2024-10-13 01:46:18.201195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:49720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.644 [2024-10-13 01:46:18.201250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.644 [2024-10-13 01:46:18.201282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:49728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.644 [2024-10-13 01:46:18.201302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.644 [2024-10-13 01:46:18.201321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:49736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.644 [2024-10-13 01:46:18.201338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.644 [2024-10-13 01:46:18.201356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:49744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.644 [2024-10-13 01:46:18.201373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.644 [2024-10-13 01:46:18.201393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:49752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.644 [2024-10-13 01:46:18.201411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.644 [2024-10-13 01:46:18.201431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:49760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.644 [2024-10-13 
01:46:18.201447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.644 [2024-10-13 01:46:18.201465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:49768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.644 [2024-10-13 01:46:18.201491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.644 [2024-10-13 01:46:18.201510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:49776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.644 [2024-10-13 01:46:18.201542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.644 [2024-10-13 01:46:18.201558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:49784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.644 [2024-10-13 01:46:18.201573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.644 [2024-10-13 01:46:18.201589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:49792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.644 [2024-10-13 01:46:18.201605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.644 [2024-10-13 01:46:18.201622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:49800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.644 [2024-10-13 01:46:18.201637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.644 [2024-10-13 01:46:18.201665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:49808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.644 [2024-10-13 01:46:18.201680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.644 [2024-10-13 01:46:18.201698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:49816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.644 [2024-10-13 01:46:18.201711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.644 [2024-10-13 01:46:18.201727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:49824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.644 [2024-10-13 01:46:18.201740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.644 [2024-10-13 01:46:18.201771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:49832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.644 [2024-10-13 01:46:18.201787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.644 [2024-10-13 01:46:18.201803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:49840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.644 [2024-10-13 01:46:18.201817] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.644 [2024-10-13 01:46:18.201833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:49848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.644 [2024-10-13 01:46:18.201848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.644 [2024-10-13 01:46:18.201865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:49856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.644 [2024-10-13 01:46:18.201880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.644 [2024-10-13 01:46:18.201897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:49864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.644 [2024-10-13 01:46:18.201911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.644 [2024-10-13 01:46:18.201928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:49872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.644 [2024-10-13 01:46:18.201942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.644 [2024-10-13 01:46:18.201959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:49880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.644 [2024-10-13 01:46:18.201974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.644 [2024-10-13 01:46:18.201990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:49888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.644 [2024-10-13 01:46:18.202005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.644 [2024-10-13 01:46:18.202022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:49896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.644 [2024-10-13 01:46:18.202036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.644 [2024-10-13 01:46:18.202053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:49904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.644 [2024-10-13 01:46:18.202071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.644 [2024-10-13 01:46:18.202089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:49912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.644 [2024-10-13 01:46:18.202104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.644 [2024-10-13 01:46:18.202121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:49920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.644 [2024-10-13 01:46:18.202135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.644 [2024-10-13 01:46:18.202152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:49928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.644 [2024-10-13 01:46:18.202167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.644 [2024-10-13 01:46:18.202183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:49936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.644 [2024-10-13 01:46:18.202197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.644 [2024-10-13 01:46:18.202214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:49944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.644 [2024-10-13 01:46:18.202228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.644 [2024-10-13 01:46:18.202245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:49952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.644 [2024-10-13 01:46:18.202260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.644 [2024-10-13 01:46:18.202276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:49960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.644 [2024-10-13 01:46:18.202291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.644 [2024-10-13 01:46:18.202307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:49968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.644 [2024-10-13 01:46:18.202322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.644 [2024-10-13 01:46:18.202339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:49976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.644 [2024-10-13 01:46:18.202354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.644 [2024-10-13 01:46:18.202370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:49984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.644 [2024-10-13 01:46:18.202385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.644 [2024-10-13 01:46:18.202401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:49992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.644 [2024-10-13 01:46:18.202416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.644 [2024-10-13 01:46:18.202433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:50000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.644 [2024-10-13 01:46:18.202447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:35:32.644 [2024-10-13 01:46:18.202464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:49088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.644 [2024-10-13 01:46:18.202491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.644 [2024-10-13 01:46:18.202526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:49096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.644 [2024-10-13 01:46:18.202541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.644 [2024-10-13 01:46:18.202556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:49104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.644 [2024-10-13 01:46:18.202569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.644 [2024-10-13 01:46:18.202584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:49112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.644 [2024-10-13 01:46:18.202598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.644 [2024-10-13 01:46:18.202615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:49120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.644 [2024-10-13 01:46:18.202630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.644 [2024-10-13 01:46:18.202645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:49128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.644 [2024-10-13 01:46:18.202659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.644 [2024-10-13 01:46:18.202674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:49136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.644 [2024-10-13 01:46:18.202688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.644 [2024-10-13 01:46:18.202703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:50008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.644 [2024-10-13 01:46:18.202717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.644 [2024-10-13 01:46:18.202732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:50016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.644 [2024-10-13 01:46:18.202761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.644 [2024-10-13 01:46:18.202779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:50024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.644 [2024-10-13 01:46:18.202795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.644 [2024-10-13 
01:46:18.202813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:50032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.644 [2024-10-13 01:46:18.202828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.644 [2024-10-13 01:46:18.202845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:50040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.645 [2024-10-13 01:46:18.202861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.202878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:50048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.645 [2024-10-13 01:46:18.202893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.202913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:50056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.645 [2024-10-13 01:46:18.202930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.202947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:50064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.645 [2024-10-13 01:46:18.202963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.202980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:50072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.645 [2024-10-13 01:46:18.202995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.203012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:50080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.645 [2024-10-13 01:46:18.203028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.203045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:50088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.645 [2024-10-13 01:46:18.203060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.203077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:50096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.645 [2024-10-13 01:46:18.203091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.203108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:49144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.203124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.203141] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:49152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.203157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.203174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:49160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.203189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.203206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:49168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.203221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.203238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:49176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.203252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.203270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:49184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.203285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.203302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:49192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.203321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.203339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:49200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.203354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.203371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:49208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.203387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.203405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:49216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.203420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.203437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:49224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.203459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.203485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:58 nsid:1 lba:49232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.203503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.203535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:49240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.203549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.203565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:49248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.203580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.203596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:49256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.203609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.203625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:49264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.203640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.203655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:49272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.203668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.203683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:49280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.203696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.203711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:49288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.203724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.203744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:49296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.203776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.203793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:49304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.203807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.203824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:49312 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.203839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.203855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:49320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.203870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.203887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:49328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.203902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.203918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:50104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:32.645 [2024-10-13 01:46:18.203932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.203948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:49336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.203963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.203980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:49344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.204000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.204018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:49352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.204033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.204049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:49360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.204064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.204081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.204096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.204112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:49376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.204127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.204143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:49384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:32.645 [2024-10-13 01:46:18.204162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.204181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:49392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.204197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.204214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:49400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.204229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.204246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:49408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.204261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.204278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:49416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.204294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.204311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:49424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.204326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.204343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:49432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.204358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.204375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:49440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.204390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.204407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:49448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.204422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.204439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:49456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.204454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.204478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:49464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.204495] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.204513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:49472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.204548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.204565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:49480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.204579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.204595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:49488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.204613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.204629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:49496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.204643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.204658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:49504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.204672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.204688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:49512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.204702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.204717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:49520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.204730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.204745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:49528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.204776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.204791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:49536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.204803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.204817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:49544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.204843] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.204858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:49552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.204874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.204888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:49560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.204900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.204913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:49568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.204941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.204959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:49576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.204974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.204990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:49584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.205005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.645 [2024-10-13 01:46:18.205025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:49592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.645 [2024-10-13 01:46:18.205041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.646 [2024-10-13 01:46:18.205058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:49600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.646 [2024-10-13 01:46:18.205080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.646 [2024-10-13 01:46:18.205097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:49608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.646 [2024-10-13 01:46:18.205112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.646 [2024-10-13 01:46:18.205129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:49616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.646 [2024-10-13 01:46:18.205144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.646 [2024-10-13 01:46:18.205160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:49624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.646 [2024-10-13 01:46:18.205175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.646 [2024-10-13 01:46:18.205191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:49632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.646 [2024-10-13 01:46:18.205205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.646 [2024-10-13 01:46:18.205222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:49640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.646 [2024-10-13 01:46:18.205236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.646 [2024-10-13 01:46:18.205253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:49648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.646 [2024-10-13 01:46:18.205267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.646 [2024-10-13 01:46:18.205284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:49656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.646 [2024-10-13 01:46:18.205299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.646 [2024-10-13 01:46:18.205315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:49664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.646 [2024-10-13 01:46:18.205330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.646 [2024-10-13 01:46:18.205346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:49672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.646 [2024-10-13 01:46:18.205360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.646 [2024-10-13 01:46:18.205377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:49680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.646 [2024-10-13 01:46:18.205397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.646 [2024-10-13 01:46:18.205414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:49688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.646 [2024-10-13 01:46:18.205434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.646 [2024-10-13 01:46:18.205451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:49696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.646 [2024-10-13 01:46:18.205466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.646 [2024-10-13 01:46:18.205491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:49704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:32.646 [2024-10-13 01:46:18.205506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:35:32.646 [2024-10-13 01:46:18.205537] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b4fa0 is same with the state(6) to be set 00:35:32.646 [2024-10-13 01:46:18.205555] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:32.646 [2024-10-13 01:46:18.205567] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:32.646 [2024-10-13 01:46:18.205578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49712 len:8 PRP1 0x0 PRP2 0x0 00:35:32.646 [2024-10-13 01:46:18.205590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:32.646 [2024-10-13 01:46:18.205654] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x5b4fa0 was disconnected and freed. reset controller. 00:35:32.646 [2024-10-13 01:46:18.209450] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:32.646 [2024-10-13 01:46:18.209548] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:32.646 [2024-10-13 01:46:18.210256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.646 [2024-10-13 01:46:18.210288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:32.646 [2024-10-13 01:46:18.210307] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:32.646 [2024-10-13 01:46:18.210570] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:32.646 [2024-10-13 01:46:18.210806] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:32.646 [2024-10-13 01:46:18.210843] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:32.646 [2024-10-13 01:46:18.210863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:32.646 [2024-10-13 01:46:18.214457] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
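Note: the failure filling the rest of this log is induced deliberately. host/bdevperf.sh starts the 15-second verify job (note the extra -f flag in the command line above), sleeps, then kill -9s the nvmf_tgt pid, so in-flight commands complete as ABORTED - SQ DELETION, qpair 0x5b4fa0 is disconnected and freed, and each reset attempt that follows fails with connect() errno 111 (ECONNREFUSED) because nothing is listening on 10.0.0.2:4420 any more. Condensed, the injection looks like the sketch below; the pid variables are the harness's own, and the JSON path stands in for the /dev/fd/63 process substitution used in the trace.

# Fault injection behind the reset/reconnect loop above and below.
./build/examples/bdevperf --json /tmp/bdevperf.json \
    -q 128 -o 4096 -w verify -t 15 -f &   # long verify job; -f is passed so the run survives the induced failure
bdevperfpid=$!
sleep 3                                   # let I/O reach steady state
kill -9 "$nvmfpid"                        # hard-kill the nvmf_tgt started earlier; no clean shutdown
sleep 3                                   # host now logs aborted I/O and ECONNREFUSED reset retries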
00:35:32.904 [2024-10-13 01:46:18.223717] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:32.904 [2024-10-13 01:46:18.224198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.904 [2024-10-13 01:46:18.224250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:32.904 [2024-10-13 01:46:18.224269] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:32.904 [2024-10-13 01:46:18.224521] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:32.904 [2024-10-13 01:46:18.224764] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:32.904 [2024-10-13 01:46:18.224788] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:32.904 [2024-10-13 01:46:18.224803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:32.904 [2024-10-13 01:46:18.228368] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:32.904 [2024-10-13 01:46:18.237624] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:32.904 [2024-10-13 01:46:18.238026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.904 [2024-10-13 01:46:18.238059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:32.904 [2024-10-13 01:46:18.238077] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:32.904 [2024-10-13 01:46:18.238315] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:32.904 [2024-10-13 01:46:18.238572] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:32.904 [2024-10-13 01:46:18.238597] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:32.904 [2024-10-13 01:46:18.238612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:32.905 [2024-10-13 01:46:18.242169] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:32.905 [2024-10-13 01:46:18.251639] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:32.905 [2024-10-13 01:46:18.252038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.905 [2024-10-13 01:46:18.252071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:32.905 [2024-10-13 01:46:18.252090] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:32.905 [2024-10-13 01:46:18.252327] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:32.905 [2024-10-13 01:46:18.252584] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:32.905 [2024-10-13 01:46:18.252609] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:32.905 [2024-10-13 01:46:18.252624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:32.905 [2024-10-13 01:46:18.256179] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:32.905 [2024-10-13 01:46:18.265629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:32.905 [2024-10-13 01:46:18.266046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.905 [2024-10-13 01:46:18.266077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:32.905 [2024-10-13 01:46:18.266095] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:32.905 [2024-10-13 01:46:18.266333] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:32.905 [2024-10-13 01:46:18.266590] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:32.905 [2024-10-13 01:46:18.266616] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:32.905 [2024-10-13 01:46:18.266631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:32.905 [2024-10-13 01:46:18.270188] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:32.905 [2024-10-13 01:46:18.279645] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:32.905 [2024-10-13 01:46:18.280020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.905 [2024-10-13 01:46:18.280052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:32.905 [2024-10-13 01:46:18.280076] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:32.905 [2024-10-13 01:46:18.280314] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:32.905 [2024-10-13 01:46:18.280568] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:32.905 [2024-10-13 01:46:18.280593] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:32.905 [2024-10-13 01:46:18.280608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:32.905 [2024-10-13 01:46:18.284180] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:32.905 [2024-10-13 01:46:18.293641] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:32.905 [2024-10-13 01:46:18.294057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.905 [2024-10-13 01:46:18.294090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:32.905 [2024-10-13 01:46:18.294109] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:32.905 [2024-10-13 01:46:18.294346] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:32.905 [2024-10-13 01:46:18.294599] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:32.905 [2024-10-13 01:46:18.294623] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:32.905 [2024-10-13 01:46:18.294639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:32.905 [2024-10-13 01:46:18.298196] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:32.905 [2024-10-13 01:46:18.307666] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:32.905 [2024-10-13 01:46:18.308072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.905 [2024-10-13 01:46:18.308104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:32.905 [2024-10-13 01:46:18.308122] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:32.905 [2024-10-13 01:46:18.308359] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:32.905 [2024-10-13 01:46:18.308612] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:32.905 [2024-10-13 01:46:18.308636] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:32.905 [2024-10-13 01:46:18.308652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:32.905 [2024-10-13 01:46:18.312208] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:32.905 [2024-10-13 01:46:18.321666] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:32.905 [2024-10-13 01:46:18.322055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.905 [2024-10-13 01:46:18.322087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:32.905 [2024-10-13 01:46:18.322105] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:32.905 [2024-10-13 01:46:18.322343] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:32.905 [2024-10-13 01:46:18.322597] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:32.905 [2024-10-13 01:46:18.322628] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:32.905 [2024-10-13 01:46:18.322645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:32.905 [2024-10-13 01:46:18.326200] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:32.905 [2024-10-13 01:46:18.335621] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:32.905 [2024-10-13 01:46:18.336018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.905 [2024-10-13 01:46:18.336050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:32.905 [2024-10-13 01:46:18.336068] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:32.905 [2024-10-13 01:46:18.336304] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:32.905 [2024-10-13 01:46:18.336570] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:32.905 [2024-10-13 01:46:18.336593] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:32.905 [2024-10-13 01:46:18.336607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:32.905 [2024-10-13 01:46:18.340204] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:32.905 [2024-10-13 01:46:18.349664] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:32.905 [2024-10-13 01:46:18.350020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.905 [2024-10-13 01:46:18.350053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:32.905 [2024-10-13 01:46:18.350071] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:32.905 [2024-10-13 01:46:18.350309] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:32.905 [2024-10-13 01:46:18.350580] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:32.905 [2024-10-13 01:46:18.350604] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:32.905 [2024-10-13 01:46:18.350618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:32.905 [2024-10-13 01:46:18.354155] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:32.905 [2024-10-13 01:46:18.363624] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:32.905 [2024-10-13 01:46:18.363987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.905 [2024-10-13 01:46:18.364015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:32.905 [2024-10-13 01:46:18.364031] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:32.905 [2024-10-13 01:46:18.364272] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:32.905 [2024-10-13 01:46:18.364540] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:32.905 [2024-10-13 01:46:18.364562] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:32.905 [2024-10-13 01:46:18.364576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:32.905 [2024-10-13 01:46:18.368131] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:32.905 [2024-10-13 01:46:18.377634] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:32.905 [2024-10-13 01:46:18.378035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.905 [2024-10-13 01:46:18.378067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:32.905 [2024-10-13 01:46:18.378085] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:32.905 [2024-10-13 01:46:18.378322] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:32.905 [2024-10-13 01:46:18.378587] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:32.905 [2024-10-13 01:46:18.378610] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:32.905 [2024-10-13 01:46:18.378624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:32.905 [2024-10-13 01:46:18.382237] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:32.905 [2024-10-13 01:46:18.391540] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:32.905 [2024-10-13 01:46:18.391927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.905 [2024-10-13 01:46:18.391955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:32.905 [2024-10-13 01:46:18.391987] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:32.905 [2024-10-13 01:46:18.392221] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:32.905 [2024-10-13 01:46:18.392463] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:32.906 [2024-10-13 01:46:18.392527] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:32.906 [2024-10-13 01:46:18.392544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:32.906 [2024-10-13 01:46:18.396146] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:32.906 [2024-10-13 01:46:18.405519] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:32.906 [2024-10-13 01:46:18.405914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.906 [2024-10-13 01:46:18.405947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:32.906 [2024-10-13 01:46:18.405965] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:32.906 [2024-10-13 01:46:18.406201] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:32.906 [2024-10-13 01:46:18.406443] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:32.906 [2024-10-13 01:46:18.406468] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:32.906 [2024-10-13 01:46:18.406495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:32.906 [2024-10-13 01:46:18.410053] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:32.906 [2024-10-13 01:46:18.419510] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:32.906 [2024-10-13 01:46:18.419874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.906 [2024-10-13 01:46:18.419907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:32.906 [2024-10-13 01:46:18.419925] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:32.906 [2024-10-13 01:46:18.420169] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:32.906 [2024-10-13 01:46:18.420411] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:32.906 [2024-10-13 01:46:18.420435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:32.906 [2024-10-13 01:46:18.420450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:32.906 [2024-10-13 01:46:18.424015] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:32.906 [2024-10-13 01:46:18.433495] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:32.906 [2024-10-13 01:46:18.433890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.906 [2024-10-13 01:46:18.433922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:32.906 [2024-10-13 01:46:18.433940] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:32.906 [2024-10-13 01:46:18.434177] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:32.906 [2024-10-13 01:46:18.434419] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:32.906 [2024-10-13 01:46:18.434443] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:32.906 [2024-10-13 01:46:18.434459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:32.906 [2024-10-13 01:46:18.438023] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:32.906 [2024-10-13 01:46:18.447466] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:32.906 [2024-10-13 01:46:18.447864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.906 [2024-10-13 01:46:18.447896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:32.906 [2024-10-13 01:46:18.447914] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:32.906 [2024-10-13 01:46:18.448152] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:32.906 [2024-10-13 01:46:18.448394] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:32.906 [2024-10-13 01:46:18.448418] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:32.906 [2024-10-13 01:46:18.448434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:32.906 [2024-10-13 01:46:18.452005] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:32.906 [2024-10-13 01:46:18.461401] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:32.906 [2024-10-13 01:46:18.461810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.906 [2024-10-13 01:46:18.461843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:32.906 [2024-10-13 01:46:18.461862] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:32.906 [2024-10-13 01:46:18.462100] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:32.906 [2024-10-13 01:46:18.462346] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:32.906 [2024-10-13 01:46:18.462372] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:32.906 [2024-10-13 01:46:18.462396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:32.906 [2024-10-13 01:46:18.466019] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:32.906 [2024-10-13 01:46:18.475319] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:32.906 [2024-10-13 01:46:18.475716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.906 [2024-10-13 01:46:18.475749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:32.906 [2024-10-13 01:46:18.475767] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:32.906 [2024-10-13 01:46:18.476004] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:32.906 [2024-10-13 01:46:18.476246] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:32.906 [2024-10-13 01:46:18.476270] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:32.906 [2024-10-13 01:46:18.476285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:32.906 [2024-10-13 01:46:18.479848] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:33.165 [2024-10-13 01:46:18.489302] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.165 [2024-10-13 01:46:18.489696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-10-13 01:46:18.489728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.165 [2024-10-13 01:46:18.489747] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.165 [2024-10-13 01:46:18.489984] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.165 [2024-10-13 01:46:18.490226] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.165 [2024-10-13 01:46:18.490250] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.165 [2024-10-13 01:46:18.490266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.165 [2024-10-13 01:46:18.493828] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.165 [2024-10-13 01:46:18.503288] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.165 [2024-10-13 01:46:18.503743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-10-13 01:46:18.503775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.165 [2024-10-13 01:46:18.503793] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.165 [2024-10-13 01:46:18.504031] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.165 [2024-10-13 01:46:18.504273] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.165 [2024-10-13 01:46:18.504297] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.165 [2024-10-13 01:46:18.504313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.165 [2024-10-13 01:46:18.507874] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:33.165 [2024-10-13 01:46:18.517118] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.165 [2024-10-13 01:46:18.517566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-10-13 01:46:18.517599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.165 [2024-10-13 01:46:18.517617] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.165 [2024-10-13 01:46:18.517855] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.165 [2024-10-13 01:46:18.518098] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.165 [2024-10-13 01:46:18.518122] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.165 [2024-10-13 01:46:18.518138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.165 [2024-10-13 01:46:18.521701] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.165 [2024-10-13 01:46:18.531147] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.165 [2024-10-13 01:46:18.531546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-10-13 01:46:18.531579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.165 [2024-10-13 01:46:18.531597] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.165 [2024-10-13 01:46:18.531834] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.165 [2024-10-13 01:46:18.532076] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.165 [2024-10-13 01:46:18.532100] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.165 [2024-10-13 01:46:18.532116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.165 [2024-10-13 01:46:18.535680] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:33.165 [2024-10-13 01:46:18.545133] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.165 [2024-10-13 01:46:18.545544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.165 [2024-10-13 01:46:18.545577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.165 [2024-10-13 01:46:18.545595] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.165 [2024-10-13 01:46:18.545832] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.165 [2024-10-13 01:46:18.546074] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.165 [2024-10-13 01:46:18.546098] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.165 [2024-10-13 01:46:18.546113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.165 [2024-10-13 01:46:18.549692] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.165 [2024-10-13 01:46:18.559132] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.165 [2024-10-13 01:46:18.559526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-10-13 01:46:18.559558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.166 [2024-10-13 01:46:18.559576] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.166 [2024-10-13 01:46:18.559813] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.166 [2024-10-13 01:46:18.560062] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.166 [2024-10-13 01:46:18.560086] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.166 [2024-10-13 01:46:18.560101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.166 [2024-10-13 01:46:18.563663] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:33.166 [2024-10-13 01:46:18.573107] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.166 [2024-10-13 01:46:18.573503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-10-13 01:46:18.573535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.166 [2024-10-13 01:46:18.573553] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.166 [2024-10-13 01:46:18.573790] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.166 [2024-10-13 01:46:18.574033] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.166 [2024-10-13 01:46:18.574057] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.166 [2024-10-13 01:46:18.574073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.166 [2024-10-13 01:46:18.577634] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.166 [2024-10-13 01:46:18.587095] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.166 [2024-10-13 01:46:18.587488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-10-13 01:46:18.587522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.166 [2024-10-13 01:46:18.587540] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.166 [2024-10-13 01:46:18.587778] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.166 [2024-10-13 01:46:18.588020] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.166 [2024-10-13 01:46:18.588044] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.166 [2024-10-13 01:46:18.588060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.166 7506.67 IOPS, 29.32 MiB/s [2024-10-12T23:46:18.744Z] [2024-10-13 01:46:18.593343] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:33.166 [2024-10-13 01:46:18.601126] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.166 [2024-10-13 01:46:18.601538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-10-13 01:46:18.601571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.166 [2024-10-13 01:46:18.601590] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.166 [2024-10-13 01:46:18.601827] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.166 [2024-10-13 01:46:18.602070] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.166 [2024-10-13 01:46:18.602094] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.166 [2024-10-13 01:46:18.602110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.166 [2024-10-13 01:46:18.605680] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.166 [2024-10-13 01:46:18.615117] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.166 [2024-10-13 01:46:18.615529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-10-13 01:46:18.615561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.166 [2024-10-13 01:46:18.615579] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.166 [2024-10-13 01:46:18.615818] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.166 [2024-10-13 01:46:18.616061] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.166 [2024-10-13 01:46:18.616085] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.166 [2024-10-13 01:46:18.616100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.166 [2024-10-13 01:46:18.619663] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:33.166 [2024-10-13 01:46:18.629111] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.166 [2024-10-13 01:46:18.629551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-10-13 01:46:18.629584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.166 [2024-10-13 01:46:18.629602] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.166 [2024-10-13 01:46:18.629839] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.166 [2024-10-13 01:46:18.630081] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.166 [2024-10-13 01:46:18.630105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.166 [2024-10-13 01:46:18.630120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.166 [2024-10-13 01:46:18.633706] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.166 [2024-10-13 01:46:18.642948] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.166 [2024-10-13 01:46:18.643339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-10-13 01:46:18.643378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.166 [2024-10-13 01:46:18.643396] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.166 [2024-10-13 01:46:18.643660] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.166 [2024-10-13 01:46:18.643904] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.166 [2024-10-13 01:46:18.643928] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.166 [2024-10-13 01:46:18.643943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.166 [2024-10-13 01:46:18.647506] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:33.166 [2024-10-13 01:46:18.656957] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.166 [2024-10-13 01:46:18.657322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-10-13 01:46:18.657369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.166 [2024-10-13 01:46:18.657388] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.166 [2024-10-13 01:46:18.657644] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.166 [2024-10-13 01:46:18.657887] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.166 [2024-10-13 01:46:18.657911] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.166 [2024-10-13 01:46:18.657927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.166 [2024-10-13 01:46:18.661487] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.166 [2024-10-13 01:46:18.670928] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.166 [2024-10-13 01:46:18.671324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-10-13 01:46:18.671361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.166 [2024-10-13 01:46:18.671378] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.166 [2024-10-13 01:46:18.671633] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.166 [2024-10-13 01:46:18.671876] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.166 [2024-10-13 01:46:18.671900] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.166 [2024-10-13 01:46:18.671916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.166 [2024-10-13 01:46:18.675468] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:33.166 [2024-10-13 01:46:18.684934] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.166 [2024-10-13 01:46:18.685300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-10-13 01:46:18.685332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.166 [2024-10-13 01:46:18.685350] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.166 [2024-10-13 01:46:18.685600] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.166 [2024-10-13 01:46:18.685842] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.166 [2024-10-13 01:46:18.685866] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.166 [2024-10-13 01:46:18.685881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.166 [2024-10-13 01:46:18.689435] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.166 [2024-10-13 01:46:18.698913] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.166 [2024-10-13 01:46:18.699290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.166 [2024-10-13 01:46:18.699333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.166 [2024-10-13 01:46:18.699351] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.166 [2024-10-13 01:46:18.699598] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.166 [2024-10-13 01:46:18.699848] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.166 [2024-10-13 01:46:18.699873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.166 [2024-10-13 01:46:18.699888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.167 [2024-10-13 01:46:18.703454] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:33.167 [2024-10-13 01:46:18.712814] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.167 [2024-10-13 01:46:18.713211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.167 [2024-10-13 01:46:18.713251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.167 [2024-10-13 01:46:18.713269] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.167 [2024-10-13 01:46:18.713525] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.167 [2024-10-13 01:46:18.713768] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.167 [2024-10-13 01:46:18.713792] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.167 [2024-10-13 01:46:18.713808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.167 [2024-10-13 01:46:18.717363] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.167 [2024-10-13 01:46:18.726817] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.167 [2024-10-13 01:46:18.727233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.167 [2024-10-13 01:46:18.727264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.167 [2024-10-13 01:46:18.727288] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.167 [2024-10-13 01:46:18.727534] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.167 [2024-10-13 01:46:18.727778] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.167 [2024-10-13 01:46:18.727802] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.167 [2024-10-13 01:46:18.727818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.167 [2024-10-13 01:46:18.731377] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:33.167 [2024-10-13 01:46:18.740827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.167 [2024-10-13 01:46:18.741168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.167 [2024-10-13 01:46:18.741201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.167 [2024-10-13 01:46:18.741219] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.167 [2024-10-13 01:46:18.741457] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.167 [2024-10-13 01:46:18.741710] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.167 [2024-10-13 01:46:18.741734] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.167 [2024-10-13 01:46:18.741749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.425 [2024-10-13 01:46:18.745309] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.425 [2024-10-13 01:46:18.754803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.425 [2024-10-13 01:46:18.755180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.425 [2024-10-13 01:46:18.755222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.425 [2024-10-13 01:46:18.755240] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.426 [2024-10-13 01:46:18.755489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.426 [2024-10-13 01:46:18.755732] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.426 [2024-10-13 01:46:18.755756] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.426 [2024-10-13 01:46:18.755771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.426 [2024-10-13 01:46:18.759326] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:33.426 [2024-10-13 01:46:18.768772] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.426 [2024-10-13 01:46:18.769161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.426 [2024-10-13 01:46:18.769201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.426 [2024-10-13 01:46:18.769219] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.426 [2024-10-13 01:46:18.769456] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.426 [2024-10-13 01:46:18.769712] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.426 [2024-10-13 01:46:18.769737] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.426 [2024-10-13 01:46:18.769752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.426 [2024-10-13 01:46:18.773304] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.426 [2024-10-13 01:46:18.782753] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.426 [2024-10-13 01:46:18.783145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.426 [2024-10-13 01:46:18.783177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.426 [2024-10-13 01:46:18.783195] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.426 [2024-10-13 01:46:18.783432] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.426 [2024-10-13 01:46:18.783683] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.426 [2024-10-13 01:46:18.783709] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.426 [2024-10-13 01:46:18.783724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.426 [2024-10-13 01:46:18.787295] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:33.426 [2024-10-13 01:46:18.796746] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.426 [2024-10-13 01:46:18.797143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.426 [2024-10-13 01:46:18.797178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.426 [2024-10-13 01:46:18.797201] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.426 [2024-10-13 01:46:18.797439] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.426 [2024-10-13 01:46:18.797690] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.426 [2024-10-13 01:46:18.797715] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.426 [2024-10-13 01:46:18.797731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.426 [2024-10-13 01:46:18.801297] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.426 [2024-10-13 01:46:18.810750] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.426 [2024-10-13 01:46:18.811179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.426 [2024-10-13 01:46:18.811211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.426 [2024-10-13 01:46:18.811230] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.426 [2024-10-13 01:46:18.811467] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.426 [2024-10-13 01:46:18.811720] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.426 [2024-10-13 01:46:18.811744] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.426 [2024-10-13 01:46:18.811759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.426 [2024-10-13 01:46:18.815311] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:33.426 [2024-10-13 01:46:18.824766] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.426 [2024-10-13 01:46:18.825126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.426 [2024-10-13 01:46:18.825158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.426 [2024-10-13 01:46:18.825179] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.426 [2024-10-13 01:46:18.825417] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.426 [2024-10-13 01:46:18.825669] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.426 [2024-10-13 01:46:18.825694] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.426 [2024-10-13 01:46:18.825709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.426 [2024-10-13 01:46:18.829266] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.426 [2024-10-13 01:46:18.838713] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.426 [2024-10-13 01:46:18.839075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.426 [2024-10-13 01:46:18.839109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.426 [2024-10-13 01:46:18.839127] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.426 [2024-10-13 01:46:18.839364] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.426 [2024-10-13 01:46:18.839618] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.426 [2024-10-13 01:46:18.839643] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.426 [2024-10-13 01:46:18.839665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.426 [2024-10-13 01:46:18.843219] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:33.426 [2024-10-13 01:46:18.852680] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.426 [2024-10-13 01:46:18.853077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.426 [2024-10-13 01:46:18.853114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.426 [2024-10-13 01:46:18.853132] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.426 [2024-10-13 01:46:18.853374] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.426 [2024-10-13 01:46:18.853627] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.426 [2024-10-13 01:46:18.853653] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.426 [2024-10-13 01:46:18.853668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.426 [2024-10-13 01:46:18.857219] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.426 [2024-10-13 01:46:18.866663] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.426 [2024-10-13 01:46:18.867061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.426 [2024-10-13 01:46:18.867103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.426 [2024-10-13 01:46:18.867121] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.426 [2024-10-13 01:46:18.867362] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.426 [2024-10-13 01:46:18.867615] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.426 [2024-10-13 01:46:18.867640] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.426 [2024-10-13 01:46:18.867656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.426 [2024-10-13 01:46:18.871209] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:33.426 [2024-10-13 01:46:18.880597] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.426 [2024-10-13 01:46:18.880987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.426 [2024-10-13 01:46:18.881026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.426 [2024-10-13 01:46:18.881044] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.426 [2024-10-13 01:46:18.881286] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.426 [2024-10-13 01:46:18.881542] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.426 [2024-10-13 01:46:18.881567] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.426 [2024-10-13 01:46:18.881582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.426 [2024-10-13 01:46:18.885136] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.426 [2024-10-13 01:46:18.894617] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.426 [2024-10-13 01:46:18.895019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.426 [2024-10-13 01:46:18.895061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.426 [2024-10-13 01:46:18.895079] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.426 [2024-10-13 01:46:18.895321] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.426 [2024-10-13 01:46:18.895576] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.427 [2024-10-13 01:46:18.895601] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.427 [2024-10-13 01:46:18.895616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.427 [2024-10-13 01:46:18.899169] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:33.427 [2024-10-13 01:46:18.908626] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.427 [2024-10-13 01:46:18.909000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.427 [2024-10-13 01:46:18.909032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.427 [2024-10-13 01:46:18.909050] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.427 [2024-10-13 01:46:18.909287] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.427 [2024-10-13 01:46:18.909539] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.427 [2024-10-13 01:46:18.909565] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.427 [2024-10-13 01:46:18.909580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.427 [2024-10-13 01:46:18.913133] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.427 [2024-10-13 01:46:18.922590] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.427 [2024-10-13 01:46:18.922982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.427 [2024-10-13 01:46:18.923022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.427 [2024-10-13 01:46:18.923040] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.427 [2024-10-13 01:46:18.923282] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.427 [2024-10-13 01:46:18.923535] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.427 [2024-10-13 01:46:18.923560] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.427 [2024-10-13 01:46:18.923576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.427 [2024-10-13 01:46:18.927128] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:33.427 [2024-10-13 01:46:18.936574] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.427 [2024-10-13 01:46:18.936960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.427 [2024-10-13 01:46:18.936992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.427 [2024-10-13 01:46:18.937010] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.427 [2024-10-13 01:46:18.937252] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.427 [2024-10-13 01:46:18.937506] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.427 [2024-10-13 01:46:18.937531] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.427 [2024-10-13 01:46:18.937547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.427 [2024-10-13 01:46:18.941100] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.427 [2024-10-13 01:46:18.950564] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.427 [2024-10-13 01:46:18.950962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.427 [2024-10-13 01:46:18.950993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.427 [2024-10-13 01:46:18.951011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.427 [2024-10-13 01:46:18.951248] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.427 [2024-10-13 01:46:18.951501] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.427 [2024-10-13 01:46:18.951525] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.427 [2024-10-13 01:46:18.951540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.427 [2024-10-13 01:46:18.955095] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:33.427 [2024-10-13 01:46:18.964436] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.427 [2024-10-13 01:46:18.964807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.427 [2024-10-13 01:46:18.964840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.427 [2024-10-13 01:46:18.964858] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.427 [2024-10-13 01:46:18.965094] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.427 [2024-10-13 01:46:18.965336] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.427 [2024-10-13 01:46:18.965360] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.427 [2024-10-13 01:46:18.965376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.427 [2024-10-13 01:46:18.968936] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.427 [2024-10-13 01:46:18.978377] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.427 [2024-10-13 01:46:18.978772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.427 [2024-10-13 01:46:18.978814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.427 [2024-10-13 01:46:18.978832] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.427 [2024-10-13 01:46:18.979068] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.427 [2024-10-13 01:46:18.979310] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.427 [2024-10-13 01:46:18.979334] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.427 [2024-10-13 01:46:18.979355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.427 [2024-10-13 01:46:18.982918] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:33.427 [2024-10-13 01:46:18.992374] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.427 [2024-10-13 01:46:18.992750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.427 [2024-10-13 01:46:18.992782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.427 [2024-10-13 01:46:18.992800] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.427 [2024-10-13 01:46:18.993037] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.427 [2024-10-13 01:46:18.993279] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.427 [2024-10-13 01:46:18.993303] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.427 [2024-10-13 01:46:18.993320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.427 [2024-10-13 01:46:18.996883] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.686 [2024-10-13 01:46:19.006328] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.686 [2024-10-13 01:46:19.006732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.686 [2024-10-13 01:46:19.006764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.686 [2024-10-13 01:46:19.006783] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.686 [2024-10-13 01:46:19.007024] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.686 [2024-10-13 01:46:19.007267] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.686 [2024-10-13 01:46:19.007290] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.686 [2024-10-13 01:46:19.007306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.686 [2024-10-13 01:46:19.010873] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:33.686 [2024-10-13 01:46:19.020308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.686 [2024-10-13 01:46:19.020797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.686 [2024-10-13 01:46:19.020853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.686 [2024-10-13 01:46:19.020871] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.686 [2024-10-13 01:46:19.021109] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.686 [2024-10-13 01:46:19.021351] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.686 [2024-10-13 01:46:19.021375] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.686 [2024-10-13 01:46:19.021391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.686 [2024-10-13 01:46:19.024951] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.687 [2024-10-13 01:46:19.034181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.687 [2024-10-13 01:46:19.034641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.687 [2024-10-13 01:46:19.034678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.687 [2024-10-13 01:46:19.034697] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.687 [2024-10-13 01:46:19.034933] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.687 [2024-10-13 01:46:19.035176] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.687 [2024-10-13 01:46:19.035200] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.687 [2024-10-13 01:46:19.035215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.687 [2024-10-13 01:46:19.038780] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:33.687 [2024-10-13 01:46:19.048012] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.687 [2024-10-13 01:46:19.048401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.687 [2024-10-13 01:46:19.048442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.687 [2024-10-13 01:46:19.048460] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.687 [2024-10-13 01:46:19.048714] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.687 [2024-10-13 01:46:19.048956] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.687 [2024-10-13 01:46:19.048981] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.687 [2024-10-13 01:46:19.048997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.687 [2024-10-13 01:46:19.052571] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.687 [2024-10-13 01:46:19.062014] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.687 [2024-10-13 01:46:19.062479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.687 [2024-10-13 01:46:19.062512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.687 [2024-10-13 01:46:19.062529] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.687 [2024-10-13 01:46:19.062766] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.687 [2024-10-13 01:46:19.063008] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.687 [2024-10-13 01:46:19.063032] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.687 [2024-10-13 01:46:19.063048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.687 [2024-10-13 01:46:19.066612] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:33.687 [2024-10-13 01:46:19.075841] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.687 [2024-10-13 01:46:19.076230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.687 [2024-10-13 01:46:19.076264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.687 [2024-10-13 01:46:19.076283] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.687 [2024-10-13 01:46:19.076532] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.687 [2024-10-13 01:46:19.076781] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.687 [2024-10-13 01:46:19.076806] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.687 [2024-10-13 01:46:19.076821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.687 [2024-10-13 01:46:19.080374] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.687 [2024-10-13 01:46:19.089836] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.687 [2024-10-13 01:46:19.090229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.687 [2024-10-13 01:46:19.090272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.687 [2024-10-13 01:46:19.090290] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.687 [2024-10-13 01:46:19.090539] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.687 [2024-10-13 01:46:19.090782] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.687 [2024-10-13 01:46:19.090806] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.687 [2024-10-13 01:46:19.090821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.687 [2024-10-13 01:46:19.094376] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:33.687 [2024-10-13 01:46:19.103836] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.687 [2024-10-13 01:46:19.104235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.687 [2024-10-13 01:46:19.104272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.687 [2024-10-13 01:46:19.104290] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.687 [2024-10-13 01:46:19.104543] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.687 [2024-10-13 01:46:19.104786] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.687 [2024-10-13 01:46:19.104810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.687 [2024-10-13 01:46:19.104825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.687 [2024-10-13 01:46:19.108375] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.687 [2024-10-13 01:46:19.117825] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.687 [2024-10-13 01:46:19.118216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.687 [2024-10-13 01:46:19.118258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.687 [2024-10-13 01:46:19.118276] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.687 [2024-10-13 01:46:19.118531] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.687 [2024-10-13 01:46:19.118774] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.687 [2024-10-13 01:46:19.118798] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.687 [2024-10-13 01:46:19.118814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.687 [2024-10-13 01:46:19.122373] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:33.687 [2024-10-13 01:46:19.131834] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.687 [2024-10-13 01:46:19.132195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.687 [2024-10-13 01:46:19.132226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.687 [2024-10-13 01:46:19.132246] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.687 [2024-10-13 01:46:19.132494] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.687 [2024-10-13 01:46:19.132736] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.687 [2024-10-13 01:46:19.132760] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.687 [2024-10-13 01:46:19.132775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.687 [2024-10-13 01:46:19.136327] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.687 [2024-10-13 01:46:19.145775] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.687 [2024-10-13 01:46:19.146138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.687 [2024-10-13 01:46:19.146170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.687 [2024-10-13 01:46:19.146188] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.687 [2024-10-13 01:46:19.146424] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.687 [2024-10-13 01:46:19.146676] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.687 [2024-10-13 01:46:19.146701] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.687 [2024-10-13 01:46:19.146716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.687 [2024-10-13 01:46:19.150283] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:33.687 [2024-10-13 01:46:19.159741] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.687 [2024-10-13 01:46:19.160138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.687 [2024-10-13 01:46:19.160172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.687 [2024-10-13 01:46:19.160189] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.687 [2024-10-13 01:46:19.160432] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.687 [2024-10-13 01:46:19.160693] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.687 [2024-10-13 01:46:19.160718] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.687 [2024-10-13 01:46:19.160734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.687 [2024-10-13 01:46:19.164288] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.687 [2024-10-13 01:46:19.173737] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.687 [2024-10-13 01:46:19.174100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.687 [2024-10-13 01:46:19.174132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.687 [2024-10-13 01:46:19.174156] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.687 [2024-10-13 01:46:19.174393] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.687 [2024-10-13 01:46:19.174645] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.687 [2024-10-13 01:46:19.174670] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.688 [2024-10-13 01:46:19.174685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.688 [2024-10-13 01:46:19.178251] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:33.688 [2024-10-13 01:46:19.187715] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.688 [2024-10-13 01:46:19.188104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.688 [2024-10-13 01:46:19.188145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.688 [2024-10-13 01:46:19.188163] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.688 [2024-10-13 01:46:19.188400] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.688 [2024-10-13 01:46:19.188652] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.688 [2024-10-13 01:46:19.188677] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.688 [2024-10-13 01:46:19.188693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.688 [2024-10-13 01:46:19.192245] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.688 [2024-10-13 01:46:19.201737] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.688 [2024-10-13 01:46:19.202135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.688 [2024-10-13 01:46:19.202168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.688 [2024-10-13 01:46:19.202186] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.688 [2024-10-13 01:46:19.202424] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.688 [2024-10-13 01:46:19.202675] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.688 [2024-10-13 01:46:19.202700] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.688 [2024-10-13 01:46:19.202715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.688 [2024-10-13 01:46:19.206271] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:33.688 [2024-10-13 01:46:19.215618] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.688 [2024-10-13 01:46:19.216015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.688 [2024-10-13 01:46:19.216047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.688 [2024-10-13 01:46:19.216066] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.688 [2024-10-13 01:46:19.216303] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.688 [2024-10-13 01:46:19.216555] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.688 [2024-10-13 01:46:19.216585] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.688 [2024-10-13 01:46:19.216601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.688 [2024-10-13 01:46:19.220155] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.688 [2024-10-13 01:46:19.229602] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.688 [2024-10-13 01:46:19.229995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.688 [2024-10-13 01:46:19.230034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.688 [2024-10-13 01:46:19.230053] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.688 [2024-10-13 01:46:19.230294] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.688 [2024-10-13 01:46:19.230548] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.688 [2024-10-13 01:46:19.230574] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.688 [2024-10-13 01:46:19.230589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.688 [2024-10-13 01:46:19.234141] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:33.688 [2024-10-13 01:46:19.243593] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.688 [2024-10-13 01:46:19.243982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.688 [2024-10-13 01:46:19.244015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.688 [2024-10-13 01:46:19.244034] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.688 [2024-10-13 01:46:19.244272] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.688 [2024-10-13 01:46:19.244523] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.688 [2024-10-13 01:46:19.244549] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.688 [2024-10-13 01:46:19.244565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.688 [2024-10-13 01:46:19.248120] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.688 [2024-10-13 01:46:19.257582] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.688 [2024-10-13 01:46:19.257981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.688 [2024-10-13 01:46:19.258013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.688 [2024-10-13 01:46:19.258031] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.688 [2024-10-13 01:46:19.258268] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.688 [2024-10-13 01:46:19.258519] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.688 [2024-10-13 01:46:19.258545] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.688 [2024-10-13 01:46:19.258560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.688 [2024-10-13 01:46:19.262118] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:33.947 [2024-10-13 01:46:19.271572] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.947 [2024-10-13 01:46:19.272040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.947 [2024-10-13 01:46:19.272073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.947 [2024-10-13 01:46:19.272091] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.947 [2024-10-13 01:46:19.272328] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.947 [2024-10-13 01:46:19.272583] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.947 [2024-10-13 01:46:19.272610] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.947 [2024-10-13 01:46:19.272625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.947 [2024-10-13 01:46:19.276182] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.947 [2024-10-13 01:46:19.285426] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.947 [2024-10-13 01:46:19.285900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.947 [2024-10-13 01:46:19.285932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.947 [2024-10-13 01:46:19.285950] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.947 [2024-10-13 01:46:19.286187] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.947 [2024-10-13 01:46:19.286428] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.948 [2024-10-13 01:46:19.286453] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.948 [2024-10-13 01:46:19.286469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.948 [2024-10-13 01:46:19.290058] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:33.948 [2024-10-13 01:46:19.299298] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.948 [2024-10-13 01:46:19.299664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.948 [2024-10-13 01:46:19.299697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.948 [2024-10-13 01:46:19.299714] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.948 [2024-10-13 01:46:19.299962] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.948 [2024-10-13 01:46:19.300204] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.948 [2024-10-13 01:46:19.300229] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.948 [2024-10-13 01:46:19.300246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.948 [2024-10-13 01:46:19.303815] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.948 [2024-10-13 01:46:19.313289] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.948 [2024-10-13 01:46:19.313708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.948 [2024-10-13 01:46:19.313749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.948 [2024-10-13 01:46:19.313767] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.948 [2024-10-13 01:46:19.314011] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.948 [2024-10-13 01:46:19.314262] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.948 [2024-10-13 01:46:19.314287] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.948 [2024-10-13 01:46:19.314303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.948 [2024-10-13 01:46:19.317883] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:33.948 [2024-10-13 01:46:19.327130] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.948 [2024-10-13 01:46:19.327533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.948 [2024-10-13 01:46:19.327565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.948 [2024-10-13 01:46:19.327584] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.948 [2024-10-13 01:46:19.327821] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.948 [2024-10-13 01:46:19.328064] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.948 [2024-10-13 01:46:19.328090] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.948 [2024-10-13 01:46:19.328105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.948 [2024-10-13 01:46:19.331671] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.948 [2024-10-13 01:46:19.341138] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.948 [2024-10-13 01:46:19.341533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.948 [2024-10-13 01:46:19.341565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.948 [2024-10-13 01:46:19.341583] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.948 [2024-10-13 01:46:19.341820] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.948 [2024-10-13 01:46:19.342064] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.948 [2024-10-13 01:46:19.342089] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.948 [2024-10-13 01:46:19.342105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.948 [2024-10-13 01:46:19.345677] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:33.948 [2024-10-13 01:46:19.355146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.948 [2024-10-13 01:46:19.355539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.948 [2024-10-13 01:46:19.355572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.948 [2024-10-13 01:46:19.355591] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.948 [2024-10-13 01:46:19.355829] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.948 [2024-10-13 01:46:19.356071] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.948 [2024-10-13 01:46:19.356097] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.948 [2024-10-13 01:46:19.356118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.948 [2024-10-13 01:46:19.359691] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.948 [2024-10-13 01:46:19.369136] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.948 [2024-10-13 01:46:19.369528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.948 [2024-10-13 01:46:19.369562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.948 [2024-10-13 01:46:19.369581] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.948 [2024-10-13 01:46:19.369819] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.948 [2024-10-13 01:46:19.370062] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.948 [2024-10-13 01:46:19.370090] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.948 [2024-10-13 01:46:19.370106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.948 [2024-10-13 01:46:19.373673] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:33.948 [2024-10-13 01:46:19.383116] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.948 [2024-10-13 01:46:19.383486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.948 [2024-10-13 01:46:19.383519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.948 [2024-10-13 01:46:19.383537] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.948 [2024-10-13 01:46:19.383775] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.948 [2024-10-13 01:46:19.384016] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.948 [2024-10-13 01:46:19.384041] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.948 [2024-10-13 01:46:19.384057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.948 [2024-10-13 01:46:19.387637] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.948 [2024-10-13 01:46:19.397084] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.948 [2024-10-13 01:46:19.397486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.948 [2024-10-13 01:46:19.397518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.948 [2024-10-13 01:46:19.397536] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.948 [2024-10-13 01:46:19.397773] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.948 [2024-10-13 01:46:19.398013] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.948 [2024-10-13 01:46:19.398039] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.948 [2024-10-13 01:46:19.398054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.948 [2024-10-13 01:46:19.401632] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:33.948 [2024-10-13 01:46:19.411074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.948 [2024-10-13 01:46:19.411440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.948 [2024-10-13 01:46:19.411489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.948 [2024-10-13 01:46:19.411513] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.949 [2024-10-13 01:46:19.411752] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.949 [2024-10-13 01:46:19.411993] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.949 [2024-10-13 01:46:19.412018] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.949 [2024-10-13 01:46:19.412034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.949 [2024-10-13 01:46:19.415599] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.949 [2024-10-13 01:46:19.425046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.949 [2024-10-13 01:46:19.425444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.949 [2024-10-13 01:46:19.425486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.949 [2024-10-13 01:46:19.425506] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.949 [2024-10-13 01:46:19.425743] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.949 [2024-10-13 01:46:19.425985] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.949 [2024-10-13 01:46:19.426011] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.949 [2024-10-13 01:46:19.426026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.949 [2024-10-13 01:46:19.429594] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:33.949 [2024-10-13 01:46:19.439068] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.949 [2024-10-13 01:46:19.439429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.949 [2024-10-13 01:46:19.439461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.949 [2024-10-13 01:46:19.439491] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.949 [2024-10-13 01:46:19.439740] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.949 [2024-10-13 01:46:19.439983] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.949 [2024-10-13 01:46:19.440018] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.949 [2024-10-13 01:46:19.440034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.949 [2024-10-13 01:46:19.443616] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.949 [2024-10-13 01:46:19.453095] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.949 [2024-10-13 01:46:19.453485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.949 [2024-10-13 01:46:19.453517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.949 [2024-10-13 01:46:19.453535] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.949 [2024-10-13 01:46:19.453772] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.949 [2024-10-13 01:46:19.454021] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.949 [2024-10-13 01:46:19.454045] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.949 [2024-10-13 01:46:19.454061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.949 [2024-10-13 01:46:19.457631] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:33.949 [2024-10-13 01:46:19.466999] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.949 [2024-10-13 01:46:19.467389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.949 [2024-10-13 01:46:19.467421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.949 [2024-10-13 01:46:19.467440] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.949 [2024-10-13 01:46:19.467686] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.949 [2024-10-13 01:46:19.467929] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.949 [2024-10-13 01:46:19.467954] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.949 [2024-10-13 01:46:19.467969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.949 [2024-10-13 01:46:19.471535] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.949 [2024-10-13 01:46:19.481009] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.949 [2024-10-13 01:46:19.481403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.949 [2024-10-13 01:46:19.481435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.949 [2024-10-13 01:46:19.481453] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.949 [2024-10-13 01:46:19.481700] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.949 [2024-10-13 01:46:19.481943] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.949 [2024-10-13 01:46:19.481967] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.949 [2024-10-13 01:46:19.481983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.949 [2024-10-13 01:46:19.485545] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:33.949 [2024-10-13 01:46:19.495013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.949 [2024-10-13 01:46:19.495405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.949 [2024-10-13 01:46:19.495437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.949 [2024-10-13 01:46:19.495455] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.949 [2024-10-13 01:46:19.495703] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.949 [2024-10-13 01:46:19.495946] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.949 [2024-10-13 01:46:19.495970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.949 [2024-10-13 01:46:19.495985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.949 [2024-10-13 01:46:19.499550] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:33.949 [2024-10-13 01:46:19.509026] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.949 [2024-10-13 01:46:19.509417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.949 [2024-10-13 01:46:19.509450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.949 [2024-10-13 01:46:19.509468] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.949 [2024-10-13 01:46:19.509716] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.949 [2024-10-13 01:46:19.509958] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.949 [2024-10-13 01:46:19.509983] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.949 [2024-10-13 01:46:19.509999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:33.949 [2024-10-13 01:46:19.513563] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:33.949 [2024-10-13 01:46:19.523009] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:33.949 [2024-10-13 01:46:19.523402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.949 [2024-10-13 01:46:19.523434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:33.949 [2024-10-13 01:46:19.523452] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:33.949 [2024-10-13 01:46:19.523696] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:33.949 [2024-10-13 01:46:19.523940] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:33.949 [2024-10-13 01:46:19.523964] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:33.949 [2024-10-13 01:46:19.523980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.209 [2024-10-13 01:46:19.527545] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.209 [2024-10-13 01:46:19.536989] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.209 [2024-10-13 01:46:19.537376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.209 [2024-10-13 01:46:19.537407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.209 [2024-10-13 01:46:19.537425] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.209 [2024-10-13 01:46:19.537672] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.209 [2024-10-13 01:46:19.537915] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.209 [2024-10-13 01:46:19.537940] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.209 [2024-10-13 01:46:19.537955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.209 [2024-10-13 01:46:19.541521] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:34.209 [2024-10-13 01:46:19.551000] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.209 [2024-10-13 01:46:19.551387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.209 [2024-10-13 01:46:19.551430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.209 [2024-10-13 01:46:19.551453] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.209 [2024-10-13 01:46:19.551700] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.209 [2024-10-13 01:46:19.551943] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.209 [2024-10-13 01:46:19.551967] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.209 [2024-10-13 01:46:19.551982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.209 [2024-10-13 01:46:19.555556] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.209 [2024-10-13 01:46:19.565014] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.209 [2024-10-13 01:46:19.565415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.209 [2024-10-13 01:46:19.565447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.209 [2024-10-13 01:46:19.565465] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.209 [2024-10-13 01:46:19.565712] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.209 [2024-10-13 01:46:19.565955] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.209 [2024-10-13 01:46:19.565979] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.209 [2024-10-13 01:46:19.565995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.209 [2024-10-13 01:46:19.569568] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:34.209 [2024-10-13 01:46:19.579040] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.209 [2024-10-13 01:46:19.579455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.209 [2024-10-13 01:46:19.579513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.209 [2024-10-13 01:46:19.579532] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.209 [2024-10-13 01:46:19.579769] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.209 [2024-10-13 01:46:19.580019] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.209 [2024-10-13 01:46:19.580044] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.209 [2024-10-13 01:46:19.580059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.209 [2024-10-13 01:46:19.583631] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.209 5630.00 IOPS, 21.99 MiB/s [2024-10-12T23:46:19.787Z] [2024-10-13 01:46:19.594638] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.209 [2024-10-13 01:46:19.594980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.209 [2024-10-13 01:46:19.595013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.209 [2024-10-13 01:46:19.595032] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.209 [2024-10-13 01:46:19.595271] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.209 [2024-10-13 01:46:19.595529] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.209 [2024-10-13 01:46:19.595563] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.209 [2024-10-13 01:46:19.595580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.209 [2024-10-13 01:46:19.599137] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:34.209 [2024-10-13 01:46:19.608599] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.209 [2024-10-13 01:46:19.609068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.209 [2024-10-13 01:46:19.609101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.209 [2024-10-13 01:46:19.609120] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.209 [2024-10-13 01:46:19.609357] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.209 [2024-10-13 01:46:19.609608] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.209 [2024-10-13 01:46:19.609634] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.209 [2024-10-13 01:46:19.609650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.209 [2024-10-13 01:46:19.613202] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.209 [2024-10-13 01:46:19.622443] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.209 [2024-10-13 01:46:19.622869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.209 [2024-10-13 01:46:19.622902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.209 [2024-10-13 01:46:19.622920] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.209 [2024-10-13 01:46:19.623157] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.209 [2024-10-13 01:46:19.623398] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.209 [2024-10-13 01:46:19.623423] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.209 [2024-10-13 01:46:19.623438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.209 [2024-10-13 01:46:19.626998] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:34.209 [2024-10-13 01:46:19.636447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.209 [2024-10-13 01:46:19.636924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.209 [2024-10-13 01:46:19.636980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.209 [2024-10-13 01:46:19.636997] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.209 [2024-10-13 01:46:19.637234] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.209 [2024-10-13 01:46:19.637487] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.209 [2024-10-13 01:46:19.637520] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.209 [2024-10-13 01:46:19.637536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.209 [2024-10-13 01:46:19.641120] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.209 [2024-10-13 01:46:19.650378] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.209 [2024-10-13 01:46:19.650794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.209 [2024-10-13 01:46:19.650826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.209 [2024-10-13 01:46:19.650844] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.209 [2024-10-13 01:46:19.651082] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.209 [2024-10-13 01:46:19.651323] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.209 [2024-10-13 01:46:19.651348] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.209 [2024-10-13 01:46:19.651364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.209 [2024-10-13 01:46:19.654934] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:34.209 [2024-10-13 01:46:19.664376] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.209 [2024-10-13 01:46:19.664751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.209 [2024-10-13 01:46:19.664785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.210 [2024-10-13 01:46:19.664803] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.210 [2024-10-13 01:46:19.665041] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.210 [2024-10-13 01:46:19.665282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.210 [2024-10-13 01:46:19.665308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.210 [2024-10-13 01:46:19.665323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.210 [2024-10-13 01:46:19.668892] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.210 [2024-10-13 01:46:19.678341] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.210 [2024-10-13 01:46:19.678739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.210 [2024-10-13 01:46:19.678772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.210 [2024-10-13 01:46:19.678790] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.210 [2024-10-13 01:46:19.679028] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.210 [2024-10-13 01:46:19.679269] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.210 [2024-10-13 01:46:19.679294] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.210 [2024-10-13 01:46:19.679311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.210 [2024-10-13 01:46:19.682881] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:34.210 [2024-10-13 01:46:19.692345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.210 [2024-10-13 01:46:19.692731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.210 [2024-10-13 01:46:19.692764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.210 [2024-10-13 01:46:19.692782] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.210 [2024-10-13 01:46:19.693025] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.210 [2024-10-13 01:46:19.693267] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.210 [2024-10-13 01:46:19.693292] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.210 [2024-10-13 01:46:19.693308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.210 [2024-10-13 01:46:19.696877] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.210 [2024-10-13 01:46:19.706339] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.210 [2024-10-13 01:46:19.706717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.210 [2024-10-13 01:46:19.706751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.210 [2024-10-13 01:46:19.706769] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.210 [2024-10-13 01:46:19.707006] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.210 [2024-10-13 01:46:19.707247] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.210 [2024-10-13 01:46:19.707273] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.210 [2024-10-13 01:46:19.707289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.210 [2024-10-13 01:46:19.710860] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:34.210 [2024-10-13 01:46:19.720218] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.210 [2024-10-13 01:46:19.720630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.210 [2024-10-13 01:46:19.720663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.210 [2024-10-13 01:46:19.720681] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.210 [2024-10-13 01:46:19.720919] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.210 [2024-10-13 01:46:19.721160] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.210 [2024-10-13 01:46:19.721185] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.210 [2024-10-13 01:46:19.721202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.210 [2024-10-13 01:46:19.724772] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.210 [2024-10-13 01:46:19.734221] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.210 [2024-10-13 01:46:19.734606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.210 [2024-10-13 01:46:19.734638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.210 [2024-10-13 01:46:19.734656] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.210 [2024-10-13 01:46:19.734893] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.210 [2024-10-13 01:46:19.735134] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.210 [2024-10-13 01:46:19.735160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.210 [2024-10-13 01:46:19.735181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.210 [2024-10-13 01:46:19.738752] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:34.210 [2024-10-13 01:46:19.748210] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.210 [2024-10-13 01:46:19.748584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.210 [2024-10-13 01:46:19.748617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.210 [2024-10-13 01:46:19.748635] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.210 [2024-10-13 01:46:19.748873] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.210 [2024-10-13 01:46:19.749115] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.210 [2024-10-13 01:46:19.749140] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.210 [2024-10-13 01:46:19.749156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.210 [2024-10-13 01:46:19.752737] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.210 [2024-10-13 01:46:19.762198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.210 [2024-10-13 01:46:19.762583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.210 [2024-10-13 01:46:19.762616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.210 [2024-10-13 01:46:19.762635] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.210 [2024-10-13 01:46:19.762873] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.210 [2024-10-13 01:46:19.763116] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.210 [2024-10-13 01:46:19.763142] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.210 [2024-10-13 01:46:19.763158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.210 [2024-10-13 01:46:19.766730] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:34.210 [2024-10-13 01:46:19.776221] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.210 [2024-10-13 01:46:19.776621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.210 [2024-10-13 01:46:19.776654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.210 [2024-10-13 01:46:19.776672] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.210 [2024-10-13 01:46:19.776908] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.210 [2024-10-13 01:46:19.777150] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.210 [2024-10-13 01:46:19.777175] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.210 [2024-10-13 01:46:19.777192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.210 [2024-10-13 01:46:19.780762] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.470 [2024-10-13 01:46:19.790230] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.470 [2024-10-13 01:46:19.790586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.470 [2024-10-13 01:46:19.790625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.470 [2024-10-13 01:46:19.790645] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.470 [2024-10-13 01:46:19.790883] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.470 [2024-10-13 01:46:19.791126] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.470 [2024-10-13 01:46:19.791151] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.470 [2024-10-13 01:46:19.791166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.470 [2024-10-13 01:46:19.794731] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:34.470 [2024-10-13 01:46:19.804193] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.470 [2024-10-13 01:46:19.804571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.470 [2024-10-13 01:46:19.804604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.470 [2024-10-13 01:46:19.804623] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.470 [2024-10-13 01:46:19.804861] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.470 [2024-10-13 01:46:19.805104] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.470 [2024-10-13 01:46:19.805129] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.470 [2024-10-13 01:46:19.805145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.470 [2024-10-13 01:46:19.808719] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.470 [2024-10-13 01:46:19.818170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.470 [2024-10-13 01:46:19.818563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.470 [2024-10-13 01:46:19.818596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.470 [2024-10-13 01:46:19.818614] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.470 [2024-10-13 01:46:19.818852] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.470 [2024-10-13 01:46:19.819094] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.470 [2024-10-13 01:46:19.819119] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.470 [2024-10-13 01:46:19.819134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.470 [2024-10-13 01:46:19.822707] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:34.470 [2024-10-13 01:46:19.832172] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.470 [2024-10-13 01:46:19.832566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.470 [2024-10-13 01:46:19.832599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.470 [2024-10-13 01:46:19.832618] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.470 [2024-10-13 01:46:19.832856] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.470 [2024-10-13 01:46:19.833104] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.470 [2024-10-13 01:46:19.833131] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.471 [2024-10-13 01:46:19.833147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.471 [2024-10-13 01:46:19.836732] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.471 [2024-10-13 01:46:19.846193] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.471 [2024-10-13 01:46:19.846577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.471 [2024-10-13 01:46:19.846610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.471 [2024-10-13 01:46:19.846629] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.471 [2024-10-13 01:46:19.846867] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.471 [2024-10-13 01:46:19.847109] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.471 [2024-10-13 01:46:19.847134] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.471 [2024-10-13 01:46:19.847150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.471 [2024-10-13 01:46:19.850732] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:34.471 [2024-10-13 01:46:19.860214] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.471 [2024-10-13 01:46:19.860648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.471 [2024-10-13 01:46:19.860681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.471 [2024-10-13 01:46:19.860699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.471 [2024-10-13 01:46:19.860937] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.471 [2024-10-13 01:46:19.861180] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.471 [2024-10-13 01:46:19.861205] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.471 [2024-10-13 01:46:19.861221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.471 [2024-10-13 01:46:19.864790] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.471 [2024-10-13 01:46:19.874245] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.471 [2024-10-13 01:46:19.874645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.471 [2024-10-13 01:46:19.874678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.471 [2024-10-13 01:46:19.874696] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.471 [2024-10-13 01:46:19.874934] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.471 [2024-10-13 01:46:19.875175] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.471 [2024-10-13 01:46:19.875200] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.471 [2024-10-13 01:46:19.875216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.471 [2024-10-13 01:46:19.878788] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:34.471 [2024-10-13 01:46:19.888252] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.471 [2024-10-13 01:46:19.888662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.471 [2024-10-13 01:46:19.888696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.471 [2024-10-13 01:46:19.888714] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.471 [2024-10-13 01:46:19.888951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.471 [2024-10-13 01:46:19.889206] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.471 [2024-10-13 01:46:19.889233] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.471 [2024-10-13 01:46:19.889249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.471 [2024-10-13 01:46:19.892818] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.471 [2024-10-13 01:46:19.902282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.471 [2024-10-13 01:46:19.902696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.471 [2024-10-13 01:46:19.902729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.471 [2024-10-13 01:46:19.902748] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.471 [2024-10-13 01:46:19.902986] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.471 [2024-10-13 01:46:19.903230] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.471 [2024-10-13 01:46:19.903255] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.471 [2024-10-13 01:46:19.903271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.471 [2024-10-13 01:46:19.906841] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:34.471 [2024-10-13 01:46:19.916288] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.471 [2024-10-13 01:46:19.916667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.471 [2024-10-13 01:46:19.916700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.471 [2024-10-13 01:46:19.916718] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.471 [2024-10-13 01:46:19.916954] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.471 [2024-10-13 01:46:19.917195] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.471 [2024-10-13 01:46:19.917220] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.471 [2024-10-13 01:46:19.917236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.471 [2024-10-13 01:46:19.920806] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.471 [2024-10-13 01:46:19.930252] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.471 [2024-10-13 01:46:19.930651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.471 [2024-10-13 01:46:19.930683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.471 [2024-10-13 01:46:19.930706] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.471 [2024-10-13 01:46:19.930944] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.471 [2024-10-13 01:46:19.931185] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.471 [2024-10-13 01:46:19.931210] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.471 [2024-10-13 01:46:19.931225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.471 [2024-10-13 01:46:19.934794] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:34.471 [2024-10-13 01:46:19.944245] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.471 [2024-10-13 01:46:19.944619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.471 [2024-10-13 01:46:19.944651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.471 [2024-10-13 01:46:19.944669] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.471 [2024-10-13 01:46:19.944906] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.471 [2024-10-13 01:46:19.945148] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.471 [2024-10-13 01:46:19.945173] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.471 [2024-10-13 01:46:19.945189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.471 [2024-10-13 01:46:19.948758] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.471 [2024-10-13 01:46:19.958216] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.471 [2024-10-13 01:46:19.958634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.471 [2024-10-13 01:46:19.958668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.471 [2024-10-13 01:46:19.958686] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.471 [2024-10-13 01:46:19.958924] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.471 [2024-10-13 01:46:19.959165] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.471 [2024-10-13 01:46:19.959190] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.471 [2024-10-13 01:46:19.959206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.471 [2024-10-13 01:46:19.962805] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:34.471 [2024-10-13 01:46:19.972159] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.471 [2024-10-13 01:46:19.972534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.471 [2024-10-13 01:46:19.972567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.471 [2024-10-13 01:46:19.972585] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.471 [2024-10-13 01:46:19.972822] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.471 [2024-10-13 01:46:19.973064] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.471 [2024-10-13 01:46:19.973095] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.471 [2024-10-13 01:46:19.973112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.471 [2024-10-13 01:46:19.976680] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.471 [2024-10-13 01:46:19.986129] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.471 [2024-10-13 01:46:19.986518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.471 [2024-10-13 01:46:19.986551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.471 [2024-10-13 01:46:19.986569] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.471 [2024-10-13 01:46:19.986807] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.472 [2024-10-13 01:46:19.987048] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.472 [2024-10-13 01:46:19.987073] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.472 [2024-10-13 01:46:19.987089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.472 [2024-10-13 01:46:19.990670] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:34.472 [2024-10-13 01:46:20.000118] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.472 [2024-10-13 01:46:20.000518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.472 [2024-10-13 01:46:20.000552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.472 [2024-10-13 01:46:20.000571] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.472 [2024-10-13 01:46:20.000809] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.472 [2024-10-13 01:46:20.001061] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.472 [2024-10-13 01:46:20.001089] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.472 [2024-10-13 01:46:20.001105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.472 [2024-10-13 01:46:20.004688] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.472 [2024-10-13 01:46:20.014107] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.472 [2024-10-13 01:46:20.014510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.472 [2024-10-13 01:46:20.014547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.472 [2024-10-13 01:46:20.014567] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.472 [2024-10-13 01:46:20.014807] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.472 [2024-10-13 01:46:20.015051] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.472 [2024-10-13 01:46:20.015077] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.472 [2024-10-13 01:46:20.015093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.472 [2024-10-13 01:46:20.018660] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:34.472 [2024-10-13 01:46:20.028113] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.472 [2024-10-13 01:46:20.028509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.472 [2024-10-13 01:46:20.028544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.472 [2024-10-13 01:46:20.028563] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.472 [2024-10-13 01:46:20.028801] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.472 [2024-10-13 01:46:20.029045] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.472 [2024-10-13 01:46:20.029071] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.472 [2024-10-13 01:46:20.029087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.472 [2024-10-13 01:46:20.032650] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.472 [2024-10-13 01:46:20.042294] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.472 [2024-10-13 01:46:20.042787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.472 [2024-10-13 01:46:20.042838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.472 [2024-10-13 01:46:20.042867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.472 [2024-10-13 01:46:20.043232] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.472 [2024-10-13 01:46:20.043503] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.472 [2024-10-13 01:46:20.043530] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.472 [2024-10-13 01:46:20.043547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.472 [2024-10-13 01:46:20.047113] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:34.731 [2024-10-13 01:46:20.056167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.731 [2024-10-13 01:46:20.056645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.731 [2024-10-13 01:46:20.056679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.731 [2024-10-13 01:46:20.056698] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.731 [2024-10-13 01:46:20.056936] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.731 [2024-10-13 01:46:20.057177] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.731 [2024-10-13 01:46:20.057201] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.731 [2024-10-13 01:46:20.057218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.731 [2024-10-13 01:46:20.060789] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.731 [2024-10-13 01:46:20.070031] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.731 [2024-10-13 01:46:20.070434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.731 [2024-10-13 01:46:20.070467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.731 [2024-10-13 01:46:20.070501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.731 [2024-10-13 01:46:20.070747] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.731 [2024-10-13 01:46:20.070991] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.731 [2024-10-13 01:46:20.071016] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.731 [2024-10-13 01:46:20.071032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.731 [2024-10-13 01:46:20.074598] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:34.731 [2024-10-13 01:46:20.084049] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.731 [2024-10-13 01:46:20.084448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.731 [2024-10-13 01:46:20.084489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.731 [2024-10-13 01:46:20.084510] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.731 [2024-10-13 01:46:20.084751] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.731 [2024-10-13 01:46:20.084995] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.731 [2024-10-13 01:46:20.085020] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.731 [2024-10-13 01:46:20.085036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.731 [2024-10-13 01:46:20.088606] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.731 [2024-10-13 01:46:20.097875] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.731 [2024-10-13 01:46:20.098282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.731 [2024-10-13 01:46:20.098314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.731 [2024-10-13 01:46:20.098333] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.732 [2024-10-13 01:46:20.098585] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.732 [2024-10-13 01:46:20.098826] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.732 [2024-10-13 01:46:20.098851] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.732 [2024-10-13 01:46:20.098867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.732 [2024-10-13 01:46:20.102436] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:34.732 [2024-10-13 01:46:20.111895] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.732 [2024-10-13 01:46:20.112337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.732 [2024-10-13 01:46:20.112370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.732 [2024-10-13 01:46:20.112389] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.732 [2024-10-13 01:46:20.112643] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.732 [2024-10-13 01:46:20.112885] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.732 [2024-10-13 01:46:20.112910] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.732 [2024-10-13 01:46:20.112932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.732 [2024-10-13 01:46:20.116504] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.732 [2024-10-13 01:46:20.125755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.732 [2024-10-13 01:46:20.126147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.732 [2024-10-13 01:46:20.126179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.732 [2024-10-13 01:46:20.126197] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.732 [2024-10-13 01:46:20.126434] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.732 [2024-10-13 01:46:20.126689] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.732 [2024-10-13 01:46:20.126715] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.732 [2024-10-13 01:46:20.126731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.732 [2024-10-13 01:46:20.130289] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:34.732 [2024-10-13 01:46:20.139779] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.732 [2024-10-13 01:46:20.140180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.732 [2024-10-13 01:46:20.140212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.732 [2024-10-13 01:46:20.140230] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.732 [2024-10-13 01:46:20.140468] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.732 [2024-10-13 01:46:20.140726] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.732 [2024-10-13 01:46:20.140752] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.732 [2024-10-13 01:46:20.140769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.732 [2024-10-13 01:46:20.144331] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.732 [2024-10-13 01:46:20.153807] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.732 [2024-10-13 01:46:20.154208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.732 [2024-10-13 01:46:20.154240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.732 [2024-10-13 01:46:20.154259] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.732 [2024-10-13 01:46:20.154509] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.732 [2024-10-13 01:46:20.154751] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.732 [2024-10-13 01:46:20.154777] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.732 [2024-10-13 01:46:20.154793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.732 [2024-10-13 01:46:20.158352] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:34.732 [2024-10-13 01:46:20.167812] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.732 [2024-10-13 01:46:20.168216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.732 [2024-10-13 01:46:20.168254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.732 [2024-10-13 01:46:20.168273] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.732 [2024-10-13 01:46:20.168524] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.732 [2024-10-13 01:46:20.168766] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.732 [2024-10-13 01:46:20.168792] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.732 [2024-10-13 01:46:20.168808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.732 [2024-10-13 01:46:20.172367] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.732 [2024-10-13 01:46:20.181826] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.732 [2024-10-13 01:46:20.182208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.732 [2024-10-13 01:46:20.182240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.732 [2024-10-13 01:46:20.182258] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.732 [2024-10-13 01:46:20.182508] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.732 [2024-10-13 01:46:20.182750] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.732 [2024-10-13 01:46:20.182775] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.732 [2024-10-13 01:46:20.182791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.732 [2024-10-13 01:46:20.186350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:34.732 [2024-10-13 01:46:20.195825] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.732 [2024-10-13 01:46:20.196219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.732 [2024-10-13 01:46:20.196252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.732 [2024-10-13 01:46:20.196270] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.732 [2024-10-13 01:46:20.196523] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.732 [2024-10-13 01:46:20.196765] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.732 [2024-10-13 01:46:20.196790] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.732 [2024-10-13 01:46:20.196805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.732 [2024-10-13 01:46:20.200364] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.732 [2024-10-13 01:46:20.209839] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.732 [2024-10-13 01:46:20.210213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.732 [2024-10-13 01:46:20.210246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.732 [2024-10-13 01:46:20.210265] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.732 [2024-10-13 01:46:20.210514] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.732 [2024-10-13 01:46:20.210762] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.732 [2024-10-13 01:46:20.210786] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.732 [2024-10-13 01:46:20.210801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.732 [2024-10-13 01:46:20.214357] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:34.732 [2024-10-13 01:46:20.223732] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.732 [2024-10-13 01:46:20.224130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.732 [2024-10-13 01:46:20.224162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.732 [2024-10-13 01:46:20.224180] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.732 [2024-10-13 01:46:20.224417] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.732 [2024-10-13 01:46:20.224672] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.732 [2024-10-13 01:46:20.224699] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.732 [2024-10-13 01:46:20.224714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.732 [2024-10-13 01:46:20.228273] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.732 [2024-10-13 01:46:20.237734] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.732 [2024-10-13 01:46:20.238098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.732 [2024-10-13 01:46:20.238129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.732 [2024-10-13 01:46:20.238147] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.732 [2024-10-13 01:46:20.238384] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.732 [2024-10-13 01:46:20.238639] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.732 [2024-10-13 01:46:20.238666] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.732 [2024-10-13 01:46:20.238682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.732 [2024-10-13 01:46:20.242485] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:34.733 [2024-10-13 01:46:20.251751] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.733 [2024-10-13 01:46:20.252147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.733 [2024-10-13 01:46:20.252179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.733 [2024-10-13 01:46:20.252197] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.733 [2024-10-13 01:46:20.252435] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.733 [2024-10-13 01:46:20.252688] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.733 [2024-10-13 01:46:20.252713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.733 [2024-10-13 01:46:20.252729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.733 [2024-10-13 01:46:20.256313] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.733 [2024-10-13 01:46:20.265582] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.733 [2024-10-13 01:46:20.265997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.733 [2024-10-13 01:46:20.266029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.733 [2024-10-13 01:46:20.266048] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.733 [2024-10-13 01:46:20.266285] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.733 [2024-10-13 01:46:20.266537] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.733 [2024-10-13 01:46:20.266562] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.733 [2024-10-13 01:46:20.266577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.733 [2024-10-13 01:46:20.270136] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:34.733 [2024-10-13 01:46:20.279604] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.733 [2024-10-13 01:46:20.280051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.733 [2024-10-13 01:46:20.280084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.733 [2024-10-13 01:46:20.280102] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.733 [2024-10-13 01:46:20.280341] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.733 [2024-10-13 01:46:20.280595] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.733 [2024-10-13 01:46:20.280621] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.733 [2024-10-13 01:46:20.280636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.733 [2024-10-13 01:46:20.284191] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.733 [2024-10-13 01:46:20.293466] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.733 [2024-10-13 01:46:20.293914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.733 [2024-10-13 01:46:20.293948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.733 [2024-10-13 01:46:20.293968] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.733 [2024-10-13 01:46:20.294205] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.733 [2024-10-13 01:46:20.294448] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.733 [2024-10-13 01:46:20.294484] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.733 [2024-10-13 01:46:20.294503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.733 [2024-10-13 01:46:20.298062] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:34.733 [2024-10-13 01:46:20.307319] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.733 [2024-10-13 01:46:20.307677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.733 [2024-10-13 01:46:20.307711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.733 [2024-10-13 01:46:20.307738] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.733 [2024-10-13 01:46:20.307978] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.733 [2024-10-13 01:46:20.308220] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.733 [2024-10-13 01:46:20.308245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.733 [2024-10-13 01:46:20.308261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.992 [2024-10-13 01:46:20.311835] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.992 [2024-10-13 01:46:20.321303] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.992 [2024-10-13 01:46:20.321710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.992 [2024-10-13 01:46:20.321747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.992 [2024-10-13 01:46:20.321765] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.992 [2024-10-13 01:46:20.322002] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.992 [2024-10-13 01:46:20.322257] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.992 [2024-10-13 01:46:20.322282] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.992 [2024-10-13 01:46:20.322298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.992 [2024-10-13 01:46:20.325865] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:34.992 [2024-10-13 01:46:20.335305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.992 [2024-10-13 01:46:20.335693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.992 [2024-10-13 01:46:20.335725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.992 [2024-10-13 01:46:20.335744] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.992 [2024-10-13 01:46:20.335981] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.992 [2024-10-13 01:46:20.336223] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.992 [2024-10-13 01:46:20.336248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.992 [2024-10-13 01:46:20.336264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.992 [2024-10-13 01:46:20.339829] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.992 [2024-10-13 01:46:20.349291] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.992 [2024-10-13 01:46:20.349683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.992 [2024-10-13 01:46:20.349716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.992 [2024-10-13 01:46:20.349733] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.992 [2024-10-13 01:46:20.349971] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.992 [2024-10-13 01:46:20.350212] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.992 [2024-10-13 01:46:20.350244] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.992 [2024-10-13 01:46:20.350260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.992 [2024-10-13 01:46:20.353838] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:34.992 [2024-10-13 01:46:20.363275] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.992 [2024-10-13 01:46:20.363657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.992 [2024-10-13 01:46:20.363690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.992 [2024-10-13 01:46:20.363708] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.992 [2024-10-13 01:46:20.363945] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.992 [2024-10-13 01:46:20.364186] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.992 [2024-10-13 01:46:20.364211] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.992 [2024-10-13 01:46:20.364227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.992 [2024-10-13 01:46:20.367787] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.992 [2024-10-13 01:46:20.377232] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.992 [2024-10-13 01:46:20.377645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.992 [2024-10-13 01:46:20.377677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.992 [2024-10-13 01:46:20.377696] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.993 [2024-10-13 01:46:20.377933] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.993 [2024-10-13 01:46:20.378173] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.993 [2024-10-13 01:46:20.378198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.993 [2024-10-13 01:46:20.378214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.993 [2024-10-13 01:46:20.381779] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:34.993 [2024-10-13 01:46:20.391238] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.993 [2024-10-13 01:46:20.391645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.993 [2024-10-13 01:46:20.391679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.993 [2024-10-13 01:46:20.391697] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.993 [2024-10-13 01:46:20.391934] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.993 [2024-10-13 01:46:20.392176] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.993 [2024-10-13 01:46:20.392201] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.993 [2024-10-13 01:46:20.392218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.993 [2024-10-13 01:46:20.395780] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.993 [2024-10-13 01:46:20.405239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.993 [2024-10-13 01:46:20.405640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.993 [2024-10-13 01:46:20.405673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.993 [2024-10-13 01:46:20.405692] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.993 [2024-10-13 01:46:20.405929] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.993 [2024-10-13 01:46:20.406174] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.993 [2024-10-13 01:46:20.406199] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.993 [2024-10-13 01:46:20.406214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.993 [2024-10-13 01:46:20.409817] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:34.993 [2024-10-13 01:46:20.419254] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.993 [2024-10-13 01:46:20.419631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.993 [2024-10-13 01:46:20.419664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.993 [2024-10-13 01:46:20.419683] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.993 [2024-10-13 01:46:20.419919] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.993 [2024-10-13 01:46:20.420160] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.993 [2024-10-13 01:46:20.420187] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.993 [2024-10-13 01:46:20.420203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.993 [2024-10-13 01:46:20.423768] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.993 [2024-10-13 01:46:20.433205] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.993 [2024-10-13 01:46:20.433575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.993 [2024-10-13 01:46:20.433607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.993 [2024-10-13 01:46:20.433625] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.993 [2024-10-13 01:46:20.433862] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.993 [2024-10-13 01:46:20.434103] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.993 [2024-10-13 01:46:20.434128] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.993 [2024-10-13 01:46:20.434145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.993 [2024-10-13 01:46:20.437709] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:34.993 [2024-10-13 01:46:20.447158] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.993 [2024-10-13 01:46:20.447532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.993 [2024-10-13 01:46:20.447565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.993 [2024-10-13 01:46:20.447583] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.993 [2024-10-13 01:46:20.447827] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.993 [2024-10-13 01:46:20.448068] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.993 [2024-10-13 01:46:20.448094] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.993 [2024-10-13 01:46:20.448110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.993 [2024-10-13 01:46:20.451684] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.993 [2024-10-13 01:46:20.461127] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.993 [2024-10-13 01:46:20.461519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.993 [2024-10-13 01:46:20.461552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.993 [2024-10-13 01:46:20.461570] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.993 [2024-10-13 01:46:20.461808] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.993 [2024-10-13 01:46:20.462049] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.993 [2024-10-13 01:46:20.462074] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.993 [2024-10-13 01:46:20.462090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.993 [2024-10-13 01:46:20.465656] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:34.993 [2024-10-13 01:46:20.475009] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.993 [2024-10-13 01:46:20.475425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.993 [2024-10-13 01:46:20.475459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.993 [2024-10-13 01:46:20.475488] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.993 [2024-10-13 01:46:20.475727] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.993 [2024-10-13 01:46:20.475969] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.993 [2024-10-13 01:46:20.475994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.993 [2024-10-13 01:46:20.476009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.993 [2024-10-13 01:46:20.479570] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.993 [2024-10-13 01:46:20.489015] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.993 [2024-10-13 01:46:20.489394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.993 [2024-10-13 01:46:20.489427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.993 [2024-10-13 01:46:20.489445] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.993 [2024-10-13 01:46:20.489694] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.993 [2024-10-13 01:46:20.489938] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.993 [2024-10-13 01:46:20.489964] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.993 [2024-10-13 01:46:20.489986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.993 [2024-10-13 01:46:20.493568] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:34.993 [2024-10-13 01:46:20.503019] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.993 [2024-10-13 01:46:20.503409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.993 [2024-10-13 01:46:20.503442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.993 [2024-10-13 01:46:20.503461] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.993 [2024-10-13 01:46:20.503709] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.993 [2024-10-13 01:46:20.503953] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.993 [2024-10-13 01:46:20.503978] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.993 [2024-10-13 01:46:20.503994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.994 [2024-10-13 01:46:20.507559] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.994 [2024-10-13 01:46:20.516993] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.994 [2024-10-13 01:46:20.517377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.994 [2024-10-13 01:46:20.517410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.994 [2024-10-13 01:46:20.517429] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.994 [2024-10-13 01:46:20.517677] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.994 [2024-10-13 01:46:20.517921] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.994 [2024-10-13 01:46:20.517946] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.994 [2024-10-13 01:46:20.517962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.994 [2024-10-13 01:46:20.521519] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:34.994 [2024-10-13 01:46:20.530962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.994 [2024-10-13 01:46:20.531358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.994 [2024-10-13 01:46:20.531391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.994 [2024-10-13 01:46:20.531409] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.994 [2024-10-13 01:46:20.531658] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.994 [2024-10-13 01:46:20.531900] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.994 [2024-10-13 01:46:20.531926] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.994 [2024-10-13 01:46:20.531941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.994 [2024-10-13 01:46:20.535501] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:34.994 [2024-10-13 01:46:20.544944] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.994 [2024-10-13 01:46:20.545333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.994 [2024-10-13 01:46:20.545371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.994 [2024-10-13 01:46:20.545390] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.994 [2024-10-13 01:46:20.545640] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.994 [2024-10-13 01:46:20.545892] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.994 [2024-10-13 01:46:20.545917] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.994 [2024-10-13 01:46:20.545933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.994 [2024-10-13 01:46:20.549492] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:34.994 [2024-10-13 01:46:20.558949] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.994 [2024-10-13 01:46:20.559337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.994 [2024-10-13 01:46:20.559370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:34.994 [2024-10-13 01:46:20.559388] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:34.994 [2024-10-13 01:46:20.559637] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:34.994 [2024-10-13 01:46:20.559881] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.994 [2024-10-13 01:46:20.559907] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.994 [2024-10-13 01:46:20.559923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.994 [2024-10-13 01:46:20.563486] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.253 [2024-10-13 01:46:20.572927] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.253 [2024-10-13 01:46:20.573315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.253 [2024-10-13 01:46:20.573347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.253 [2024-10-13 01:46:20.573365] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.253 [2024-10-13 01:46:20.573613] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.253 [2024-10-13 01:46:20.573855] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.254 [2024-10-13 01:46:20.573880] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.254 [2024-10-13 01:46:20.573895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.254 [2024-10-13 01:46:20.577219] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:35.254 [2024-10-13 01:46:20.586397] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.254 [2024-10-13 01:46:20.586853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.254 [2024-10-13 01:46:20.586897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.254 [2024-10-13 01:46:20.586914] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.254 [2024-10-13 01:46:20.587148] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.254 [2024-10-13 01:46:20.587380] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.254 [2024-10-13 01:46:20.587402] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.254 [2024-10-13 01:46:20.587416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.254 [2024-10-13 01:46:20.590547] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.254 4504.00 IOPS, 17.59 MiB/s [2024-10-12T23:46:20.832Z] [2024-10-13 01:46:20.599759] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.254 [2024-10-13 01:46:20.600160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.254 [2024-10-13 01:46:20.600189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.254 [2024-10-13 01:46:20.600205] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.254 [2024-10-13 01:46:20.600460] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.254 [2024-10-13 01:46:20.600675] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.254 [2024-10-13 01:46:20.600710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.254 [2024-10-13 01:46:20.600723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.254 [2024-10-13 01:46:20.603802] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:35.254 [2024-10-13 01:46:20.613078] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.254 [2024-10-13 01:46:20.613491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.254 [2024-10-13 01:46:20.613521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.254 [2024-10-13 01:46:20.613538] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.254 [2024-10-13 01:46:20.613767] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.254 [2024-10-13 01:46:20.613997] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.254 [2024-10-13 01:46:20.614017] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.254 [2024-10-13 01:46:20.614029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.254 [2024-10-13 01:46:20.617131] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.254 [2024-10-13 01:46:20.626537] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.254 [2024-10-13 01:46:20.626878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.254 [2024-10-13 01:46:20.626907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.254 [2024-10-13 01:46:20.626923] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.254 [2024-10-13 01:46:20.627145] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.254 [2024-10-13 01:46:20.627353] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.254 [2024-10-13 01:46:20.627373] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.254 [2024-10-13 01:46:20.627386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.254 [2024-10-13 01:46:20.630590] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:35.254 [2024-10-13 01:46:20.640019] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.254 [2024-10-13 01:46:20.640393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.254 [2024-10-13 01:46:20.640421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.254 [2024-10-13 01:46:20.640438] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.254 [2024-10-13 01:46:20.640675] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.254 [2024-10-13 01:46:20.640904] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.254 [2024-10-13 01:46:20.640924] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.254 [2024-10-13 01:46:20.640937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.254 [2024-10-13 01:46:20.643994] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.254 [2024-10-13 01:46:20.653272] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.254 [2024-10-13 01:46:20.653652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.254 [2024-10-13 01:46:20.653681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.254 [2024-10-13 01:46:20.653698] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.254 [2024-10-13 01:46:20.653948] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.254 [2024-10-13 01:46:20.654141] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.254 [2024-10-13 01:46:20.654171] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.254 [2024-10-13 01:46:20.654184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.254 [2024-10-13 01:46:20.657176] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:35.254 [2024-10-13 01:46:20.666556] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.254 [2024-10-13 01:46:20.666974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.254 [2024-10-13 01:46:20.667003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.254 [2024-10-13 01:46:20.667019] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.254 [2024-10-13 01:46:20.667259] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.254 [2024-10-13 01:46:20.667482] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.254 [2024-10-13 01:46:20.667505] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.254 [2024-10-13 01:46:20.667534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.254 [2024-10-13 01:46:20.670545] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.254 [2024-10-13 01:46:20.679870] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.254 [2024-10-13 01:46:20.680231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.254 [2024-10-13 01:46:20.680258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.254 [2024-10-13 01:46:20.680279] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.254 [2024-10-13 01:46:20.680525] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.254 [2024-10-13 01:46:20.680762] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.254 [2024-10-13 01:46:20.680785] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.254 [2024-10-13 01:46:20.680798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.254 [2024-10-13 01:46:20.683767] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:35.254 [2024-10-13 01:46:20.693160] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.254 [2024-10-13 01:46:20.693557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.254 [2024-10-13 01:46:20.693586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.254 [2024-10-13 01:46:20.693602] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.254 [2024-10-13 01:46:20.693843] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.254 [2024-10-13 01:46:20.694043] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.254 [2024-10-13 01:46:20.694063] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.254 [2024-10-13 01:46:20.694076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.254 [2024-10-13 01:46:20.697020] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.254 [2024-10-13 01:46:20.706444] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.254 [2024-10-13 01:46:20.706828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.254 [2024-10-13 01:46:20.706857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.254 [2024-10-13 01:46:20.706873] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.254 [2024-10-13 01:46:20.707115] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.254 [2024-10-13 01:46:20.707322] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.254 [2024-10-13 01:46:20.707342] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.254 [2024-10-13 01:46:20.707355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.254 [2024-10-13 01:46:20.710344] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:35.254 [2024-10-13 01:46:20.719766] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.254 [2024-10-13 01:46:20.720184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.255 [2024-10-13 01:46:20.720214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.255 [2024-10-13 01:46:20.720230] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.255 [2024-10-13 01:46:20.720461] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.255 [2024-10-13 01:46:20.720680] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.255 [2024-10-13 01:46:20.720707] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.255 [2024-10-13 01:46:20.720721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.255 [2024-10-13 01:46:20.724197] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.255 [2024-10-13 01:46:20.733057] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.255 [2024-10-13 01:46:20.733413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.255 [2024-10-13 01:46:20.733441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.255 [2024-10-13 01:46:20.733481] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.255 [2024-10-13 01:46:20.733712] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.255 [2024-10-13 01:46:20.733940] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.255 [2024-10-13 01:46:20.733960] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.255 [2024-10-13 01:46:20.733972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.255 [2024-10-13 01:46:20.736952] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:35.255 [2024-10-13 01:46:20.746305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.255 [2024-10-13 01:46:20.746684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.255 [2024-10-13 01:46:20.746714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.255 [2024-10-13 01:46:20.746730] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.255 [2024-10-13 01:46:20.746967] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.255 [2024-10-13 01:46:20.747175] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.255 [2024-10-13 01:46:20.747195] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.255 [2024-10-13 01:46:20.747207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.255 [2024-10-13 01:46:20.750190] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.255 [2024-10-13 01:46:20.759649] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.255 [2024-10-13 01:46:20.759978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.255 [2024-10-13 01:46:20.760007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.255 [2024-10-13 01:46:20.760023] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.255 [2024-10-13 01:46:20.760242] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.255 [2024-10-13 01:46:20.760451] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.255 [2024-10-13 01:46:20.760492] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.255 [2024-10-13 01:46:20.760508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.255 [2024-10-13 01:46:20.763465] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:35.255 [2024-10-13 01:46:20.772913] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.255 [2024-10-13 01:46:20.773326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.255 [2024-10-13 01:46:20.773355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.255 [2024-10-13 01:46:20.773370] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.255 [2024-10-13 01:46:20.773632] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.255 [2024-10-13 01:46:20.773865] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.255 [2024-10-13 01:46:20.773885] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.255 [2024-10-13 01:46:20.773898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.255 [2024-10-13 01:46:20.776837] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.255 [2024-10-13 01:46:20.786182] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.255 [2024-10-13 01:46:20.786508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.255 [2024-10-13 01:46:20.786544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.255 [2024-10-13 01:46:20.786560] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.255 [2024-10-13 01:46:20.786781] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.255 [2024-10-13 01:46:20.786989] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.255 [2024-10-13 01:46:20.787009] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.255 [2024-10-13 01:46:20.787022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.255 [2024-10-13 01:46:20.789998] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:35.255 [2024-10-13 01:46:20.799475] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.255 [2024-10-13 01:46:20.799878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.255 [2024-10-13 01:46:20.799906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.255 [2024-10-13 01:46:20.799921] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.255 [2024-10-13 01:46:20.800134] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.255 [2024-10-13 01:46:20.800342] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.255 [2024-10-13 01:46:20.800362] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.255 [2024-10-13 01:46:20.800374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.255 [2024-10-13 01:46:20.803371] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.255 [2024-10-13 01:46:20.812778] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.255 [2024-10-13 01:46:20.813192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.255 [2024-10-13 01:46:20.813222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.255 [2024-10-13 01:46:20.813238] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.255 [2024-10-13 01:46:20.813495] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.255 [2024-10-13 01:46:20.813715] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.255 [2024-10-13 01:46:20.813737] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.255 [2024-10-13 01:46:20.813752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.255 [2024-10-13 01:46:20.816721] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:35.255 [2024-10-13 01:46:20.826000] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.255 [2024-10-13 01:46:20.826324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.255 [2024-10-13 01:46:20.826353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.255 [2024-10-13 01:46:20.826371] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.255 [2024-10-13 01:46:20.826595] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.255 [2024-10-13 01:46:20.826827] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.255 [2024-10-13 01:46:20.826849] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.255 [2024-10-13 01:46:20.826877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.255 [2024-10-13 01:46:20.830146] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.515 [2024-10-13 01:46:20.839377] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.515 [2024-10-13 01:46:20.839783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.515 [2024-10-13 01:46:20.839811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.515 [2024-10-13 01:46:20.839827] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.515 [2024-10-13 01:46:20.840050] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.515 [2024-10-13 01:46:20.840258] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.515 [2024-10-13 01:46:20.840277] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.515 [2024-10-13 01:46:20.840289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.515 [2024-10-13 01:46:20.843279] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:35.515 [2024-10-13 01:46:20.852798] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.515 [2024-10-13 01:46:20.853134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.515 [2024-10-13 01:46:20.853163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.515 [2024-10-13 01:46:20.853179] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.515 [2024-10-13 01:46:20.853400] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.515 [2024-10-13 01:46:20.853643] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.515 [2024-10-13 01:46:20.853665] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.515 [2024-10-13 01:46:20.853683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.515 [2024-10-13 01:46:20.856765] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.515 [2024-10-13 01:46:20.866091] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.515 [2024-10-13 01:46:20.866446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.515 [2024-10-13 01:46:20.866483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.515 [2024-10-13 01:46:20.866501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.515 [2024-10-13 01:46:20.866730] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.515 [2024-10-13 01:46:20.866941] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.515 [2024-10-13 01:46:20.866960] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.515 [2024-10-13 01:46:20.866973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.515 [2024-10-13 01:46:20.870053] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:35.515 [2024-10-13 01:46:20.879423] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.515 [2024-10-13 01:46:20.879864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.515 [2024-10-13 01:46:20.879892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.515 [2024-10-13 01:46:20.879907] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.515 [2024-10-13 01:46:20.880123] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.515 [2024-10-13 01:46:20.880332] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.515 [2024-10-13 01:46:20.880351] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.515 [2024-10-13 01:46:20.880364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.515 [2024-10-13 01:46:20.883372] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.515 [2024-10-13 01:46:20.892672] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.515 [2024-10-13 01:46:20.893044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.515 [2024-10-13 01:46:20.893072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.515 [2024-10-13 01:46:20.893087] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.515 [2024-10-13 01:46:20.893302] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.515 [2024-10-13 01:46:20.893537] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.515 [2024-10-13 01:46:20.893574] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.515 [2024-10-13 01:46:20.893587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.515 [2024-10-13 01:46:20.896538] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:35.515 [2024-10-13 01:46:20.905952] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.515 [2024-10-13 01:46:20.906311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.515 [2024-10-13 01:46:20.906340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.515 [2024-10-13 01:46:20.906356] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.515 [2024-10-13 01:46:20.906589] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.515 [2024-10-13 01:46:20.906823] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.515 [2024-10-13 01:46:20.906843] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.515 [2024-10-13 01:46:20.906856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.515 [2024-10-13 01:46:20.909796] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.515 [2024-10-13 01:46:20.919254] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.515 [2024-10-13 01:46:20.919634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.515 [2024-10-13 01:46:20.919664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.515 [2024-10-13 01:46:20.919680] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.515 [2024-10-13 01:46:20.919893] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.515 [2024-10-13 01:46:20.920102] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.515 [2024-10-13 01:46:20.920121] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.515 [2024-10-13 01:46:20.920133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.516 [2024-10-13 01:46:20.923133] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:35.516 [2024-10-13 01:46:20.932565] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.516 [2024-10-13 01:46:20.933010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.516 [2024-10-13 01:46:20.933039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.516 [2024-10-13 01:46:20.933055] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.516 [2024-10-13 01:46:20.933295] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.516 [2024-10-13 01:46:20.933530] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.516 [2024-10-13 01:46:20.933551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.516 [2024-10-13 01:46:20.933565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.516 [2024-10-13 01:46:20.936508] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.516 [2024-10-13 01:46:20.945809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.516 [2024-10-13 01:46:20.946163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.516 [2024-10-13 01:46:20.946192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.516 [2024-10-13 01:46:20.946209] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.516 [2024-10-13 01:46:20.946448] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.516 [2024-10-13 01:46:20.946660] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.516 [2024-10-13 01:46:20.946681] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.516 [2024-10-13 01:46:20.946694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.516 [2024-10-13 01:46:20.949659] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:35.516 [2024-10-13 01:46:20.959010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.516 [2024-10-13 01:46:20.959364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.516 [2024-10-13 01:46:20.959393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.516 [2024-10-13 01:46:20.959410] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.516 [2024-10-13 01:46:20.959649] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.516 [2024-10-13 01:46:20.959899] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.516 [2024-10-13 01:46:20.959919] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.516 [2024-10-13 01:46:20.959932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.516 [2024-10-13 01:46:20.962872] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.516 [2024-10-13 01:46:20.972295] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.516 [2024-10-13 01:46:20.972688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.516 [2024-10-13 01:46:20.972718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.516 [2024-10-13 01:46:20.972734] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.516 [2024-10-13 01:46:20.972962] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.516 [2024-10-13 01:46:20.973212] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.516 [2024-10-13 01:46:20.973239] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.516 [2024-10-13 01:46:20.973254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.516 [2024-10-13 01:46:20.976744] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:35.516 [2024-10-13 01:46:20.985659] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.516 [2024-10-13 01:46:20.986051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.516 [2024-10-13 01:46:20.986079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.516 [2024-10-13 01:46:20.986095] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.516 [2024-10-13 01:46:20.986330] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.516 [2024-10-13 01:46:20.986565] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.516 [2024-10-13 01:46:20.986587] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.516 [2024-10-13 01:46:20.986601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.516 [2024-10-13 01:46:20.989679] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.516 [2024-10-13 01:46:20.998964] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.516 [2024-10-13 01:46:20.999380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.516 [2024-10-13 01:46:20.999408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.516 [2024-10-13 01:46:20.999424] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.516 [2024-10-13 01:46:20.999688] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.516 [2024-10-13 01:46:20.999916] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.516 [2024-10-13 01:46:20.999937] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.516 [2024-10-13 01:46:20.999951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.516 [2024-10-13 01:46:21.002977] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:35.516 [2024-10-13 01:46:21.012239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.516 [2024-10-13 01:46:21.012635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.516 [2024-10-13 01:46:21.012665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.516 [2024-10-13 01:46:21.012682] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.516 [2024-10-13 01:46:21.012921] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.516 [2024-10-13 01:46:21.013132] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.516 [2024-10-13 01:46:21.013153] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.516 [2024-10-13 01:46:21.013166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.516 [2024-10-13 01:46:21.016155] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.516 [2024-10-13 01:46:21.025556] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.516 [2024-10-13 01:46:21.025952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.516 [2024-10-13 01:46:21.025981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.516 [2024-10-13 01:46:21.025998] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.516 [2024-10-13 01:46:21.026243] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.516 [2024-10-13 01:46:21.026465] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.516 [2024-10-13 01:46:21.026495] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.516 [2024-10-13 01:46:21.026508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.516 [2024-10-13 01:46:21.029475] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:35.516 [2024-10-13 01:46:21.038751] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.516 [2024-10-13 01:46:21.039117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.516 [2024-10-13 01:46:21.039145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.516 [2024-10-13 01:46:21.039166] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.516 [2024-10-13 01:46:21.039401] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.516 [2024-10-13 01:46:21.039644] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.516 [2024-10-13 01:46:21.039665] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.516 [2024-10-13 01:46:21.039679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.516 [2024-10-13 01:46:21.042630] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.516 [2024-10-13 01:46:21.052041] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.516 [2024-10-13 01:46:21.052398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.516 [2024-10-13 01:46:21.052430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.516 [2024-10-13 01:46:21.052446] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.516 [2024-10-13 01:46:21.052687] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.516 [2024-10-13 01:46:21.052915] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.516 [2024-10-13 01:46:21.052936] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.516 [2024-10-13 01:46:21.052950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.516 [2024-10-13 01:46:21.055891] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:35.516 [2024-10-13 01:46:21.065270] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.517 [2024-10-13 01:46:21.065659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.517 [2024-10-13 01:46:21.065689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.517 [2024-10-13 01:46:21.065706] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.517 [2024-10-13 01:46:21.065947] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.517 [2024-10-13 01:46:21.066140] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.517 [2024-10-13 01:46:21.066160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.517 [2024-10-13 01:46:21.066173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.517 [2024-10-13 01:46:21.069220] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.517 [2024-10-13 01:46:21.078589] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.517 [2024-10-13 01:46:21.078927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.517 [2024-10-13 01:46:21.078956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.517 [2024-10-13 01:46:21.078973] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.517 [2024-10-13 01:46:21.079194] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.517 [2024-10-13 01:46:21.079403] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.517 [2024-10-13 01:46:21.079428] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.517 [2024-10-13 01:46:21.079441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.517 [2024-10-13 01:46:21.082420] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:35.517 [2024-10-13 01:46:21.092236] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.776 [2024-10-13 01:46:21.092626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.776 [2024-10-13 01:46:21.092655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.776 [2024-10-13 01:46:21.092671] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.776 [2024-10-13 01:46:21.092900] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.776 [2024-10-13 01:46:21.093133] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.776 [2024-10-13 01:46:21.093155] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.776 [2024-10-13 01:46:21.093169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.776 [2024-10-13 01:46:21.096212] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.776 [2024-10-13 01:46:21.105409] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.776 [2024-10-13 01:46:21.105792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.776 [2024-10-13 01:46:21.105823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.776 [2024-10-13 01:46:21.105840] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.776 [2024-10-13 01:46:21.106083] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.776 [2024-10-13 01:46:21.106293] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.776 [2024-10-13 01:46:21.106314] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.776 [2024-10-13 01:46:21.106326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.776 [2024-10-13 01:46:21.109293] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:35.776 [2024-10-13 01:46:21.118710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.776 [2024-10-13 01:46:21.119099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.776 [2024-10-13 01:46:21.119127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.776 [2024-10-13 01:46:21.119142] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.776 [2024-10-13 01:46:21.119358] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.776 [2024-10-13 01:46:21.119610] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.776 [2024-10-13 01:46:21.119633] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.776 [2024-10-13 01:46:21.119646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.776 [2024-10-13 01:46:21.122565] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.776 [2024-10-13 01:46:21.131965] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.776 [2024-10-13 01:46:21.132319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.776 [2024-10-13 01:46:21.132347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.776 [2024-10-13 01:46:21.132363] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.776 [2024-10-13 01:46:21.132601] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.776 [2024-10-13 01:46:21.132850] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.776 [2024-10-13 01:46:21.132870] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.776 [2024-10-13 01:46:21.132884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.776 [2024-10-13 01:46:21.135822] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:35.776 [2024-10-13 01:46:21.145250] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.776 [2024-10-13 01:46:21.145633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.776 [2024-10-13 01:46:21.145663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.776 [2024-10-13 01:46:21.145681] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.776 [2024-10-13 01:46:21.145920] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.776 [2024-10-13 01:46:21.146128] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.776 [2024-10-13 01:46:21.146149] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.776 [2024-10-13 01:46:21.146161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.776 [2024-10-13 01:46:21.149147] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.776 [2024-10-13 01:46:21.158527] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.776 [2024-10-13 01:46:21.158935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.776 [2024-10-13 01:46:21.158965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.776 [2024-10-13 01:46:21.158982] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.776 [2024-10-13 01:46:21.159223] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.776 [2024-10-13 01:46:21.159432] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.776 [2024-10-13 01:46:21.159467] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.776 [2024-10-13 01:46:21.159490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.776 [2024-10-13 01:46:21.162447] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:35.776 [2024-10-13 01:46:21.171835] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.776 [2024-10-13 01:46:21.172187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.776 [2024-10-13 01:46:21.172217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.776 [2024-10-13 01:46:21.172234] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.776 [2024-10-13 01:46:21.172486] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.776 [2024-10-13 01:46:21.172708] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.776 [2024-10-13 01:46:21.172730] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.776 [2024-10-13 01:46:21.172743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.776 [2024-10-13 01:46:21.175701] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.776 [2024-10-13 01:46:21.185128] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.776 [2024-10-13 01:46:21.185544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.776 [2024-10-13 01:46:21.185573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.776 [2024-10-13 01:46:21.185589] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.776 [2024-10-13 01:46:21.185831] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.776 [2024-10-13 01:46:21.186023] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.776 [2024-10-13 01:46:21.186044] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.776 [2024-10-13 01:46:21.186056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.776 [2024-10-13 01:46:21.189052] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:35.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1759190 Killed "${NVMF_APP[@]}" "$@" 00:35:35.776 01:46:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:35:35.776 01:46:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:35.776 01:46:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:35.776 01:46:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:35.776 01:46:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:35.777 [2024-10-13 01:46:21.198412] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.777 [2024-10-13 01:46:21.198778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.777 [2024-10-13 01:46:21.198807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.777 [2024-10-13 01:46:21.198823] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.777 [2024-10-13 01:46:21.199044] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.777 [2024-10-13 01:46:21.199252] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.777 [2024-10-13 01:46:21.199273] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.777 [2024-10-13 01:46:21.199286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:35:35.777 01:46:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=1760142 00:35:35.777 01:46:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:35.777 01:46:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 1760142 00:35:35.777 01:46:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1760142 ']' 00:35:35.777 01:46:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:35.777 01:46:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:35.777 [2024-10-13 01:46:21.202346] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:35.777 01:46:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:35.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:35.777 01:46:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:35.777 01:46:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:35.777 [2024-10-13 01:46:21.211743] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.777 [2024-10-13 01:46:21.212192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.777 [2024-10-13 01:46:21.212222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.777 [2024-10-13 01:46:21.212240] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.777 [2024-10-13 01:46:21.212488] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.777 [2024-10-13 01:46:21.212709] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.777 [2024-10-13 01:46:21.212730] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.777 [2024-10-13 01:46:21.212759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.777 [2024-10-13 01:46:21.215849] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.777 [2024-10-13 01:46:21.225087] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.777 [2024-10-13 01:46:21.225413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.777 [2024-10-13 01:46:21.225456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.777 [2024-10-13 01:46:21.225481] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.777 [2024-10-13 01:46:21.225696] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.777 [2024-10-13 01:46:21.225937] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.777 [2024-10-13 01:46:21.225957] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.777 [2024-10-13 01:46:21.225970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.777 [2024-10-13 01:46:21.229513] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:35.777 [2024-10-13 01:46:21.238363] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.777 [2024-10-13 01:46:21.238790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.777 [2024-10-13 01:46:21.238819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.777 [2024-10-13 01:46:21.238835] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.777 [2024-10-13 01:46:21.239057] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.777 [2024-10-13 01:46:21.239266] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.777 [2024-10-13 01:46:21.239290] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.777 [2024-10-13 01:46:21.239303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.777 [2024-10-13 01:46:21.242365] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:35.777 [2024-10-13 01:46:21.251477] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
00:35:35.777 [2024-10-13 01:46:21.251580] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:35.777 [2024-10-13 01:46:21.251727] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.777 [2024-10-13 01:46:21.252110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.777 [2024-10-13 01:46:21.252137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.777 [2024-10-13 01:46:21.252154] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.777 [2024-10-13 01:46:21.252367] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.777 [2024-10-13 01:46:21.252625] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.777 [2024-10-13 01:46:21.252648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.777 [2024-10-13 01:46:21.252670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.777 [2024-10-13 01:46:21.255677] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:35.777 [2024-10-13 01:46:21.264982] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.777 [2024-10-13 01:46:21.265284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.777 [2024-10-13 01:46:21.265327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.777 [2024-10-13 01:46:21.265342] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.777 [2024-10-13 01:46:21.265584] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.777 [2024-10-13 01:46:21.265813] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.777 [2024-10-13 01:46:21.265832] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.777 [2024-10-13 01:46:21.265845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.777 [2024-10-13 01:46:21.268843] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.777 [2024-10-13 01:46:21.278191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.777 [2024-10-13 01:46:21.278617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.777 [2024-10-13 01:46:21.278646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.777 [2024-10-13 01:46:21.278672] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.777 [2024-10-13 01:46:21.278915] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.777 [2024-10-13 01:46:21.279124] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.777 [2024-10-13 01:46:21.279143] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.777 [2024-10-13 01:46:21.279160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.777 [2024-10-13 01:46:21.282171] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:35.777 [2024-10-13 01:46:21.291572] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.777 [2024-10-13 01:46:21.291956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.777 [2024-10-13 01:46:21.291996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.777 [2024-10-13 01:46:21.292012] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.777 [2024-10-13 01:46:21.292251] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.777 [2024-10-13 01:46:21.292450] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.777 [2024-10-13 01:46:21.292477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.777 [2024-10-13 01:46:21.292508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.777 [2024-10-13 01:46:21.295630] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:35.777 [2024-10-13 01:46:21.304962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.777 [2024-10-13 01:46:21.305290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.777 [2024-10-13 01:46:21.305319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.777 [2024-10-13 01:46:21.305335] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.777 [2024-10-13 01:46:21.305564] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.777 [2024-10-13 01:46:21.305784] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.777 [2024-10-13 01:46:21.305815] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.777 [2024-10-13 01:46:21.305827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.777 [2024-10-13 01:46:21.308922] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:35.777 [2024-10-13 01:46:21.317444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:35.777 [2024-10-13 01:46:21.318270] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.777 [2024-10-13 01:46:21.318689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.777 [2024-10-13 01:46:21.318718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.778 [2024-10-13 01:46:21.318744] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.778 [2024-10-13 01:46:21.318983] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.778 [2024-10-13 01:46:21.319181] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.778 [2024-10-13 01:46:21.319202] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.778 [2024-10-13 01:46:21.319215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.778 [2024-10-13 01:46:21.322409] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
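A side note on the core mask from the nvmf_tgt command line and the "Total cores available: 3" notice above: -m 0xE expands to binary 1110, i.e. reactor cores 1, 2 and 3 with core 0 left out, which is why exactly three reactors come up on cores 1, 2 and 3 a few lines further down.

    0xE = 1110b  ->  bits 1, 2, 3 set  ->  3 reactor cores (1, 2, 3), core 0 excluded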
00:35:35.778 [2024-10-13 01:46:21.331697] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.778 [2024-10-13 01:46:21.332258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.778 [2024-10-13 01:46:21.332309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.778 [2024-10-13 01:46:21.332329] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.778 [2024-10-13 01:46:21.332606] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.778 [2024-10-13 01:46:21.332824] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.778 [2024-10-13 01:46:21.332846] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.778 [2024-10-13 01:46:21.332878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.778 [2024-10-13 01:46:21.335934] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:35.778 [2024-10-13 01:46:21.345109] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:35.778 [2024-10-13 01:46:21.345498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:35.778 [2024-10-13 01:46:21.345528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:35.778 [2024-10-13 01:46:21.345545] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:35.778 [2024-10-13 01:46:21.345776] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:35.778 [2024-10-13 01:46:21.345992] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:35.778 [2024-10-13 01:46:21.346012] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:35.778 [2024-10-13 01:46:21.346026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:35.778 [2024-10-13 01:46:21.349063] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.037 [2024-10-13 01:46:21.358678] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.037 [2024-10-13 01:46:21.359104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.037 [2024-10-13 01:46:21.359134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:36.037 [2024-10-13 01:46:21.359151] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:36.037 [2024-10-13 01:46:21.359390] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:36.037 [2024-10-13 01:46:21.359641] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.037 [2024-10-13 01:46:21.359664] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.037 [2024-10-13 01:46:21.359678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.037 [2024-10-13 01:46:21.362873] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:36.037 [2024-10-13 01:46:21.365490] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:36.037 [2024-10-13 01:46:21.365540] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:36.037 [2024-10-13 01:46:21.365563] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:36.037 [2024-10-13 01:46:21.365575] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:36.037 [2024-10-13 01:46:21.365593] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:36.037 [2024-10-13 01:46:21.366995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:36.037 [2024-10-13 01:46:21.367064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:36.037 [2024-10-13 01:46:21.367067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:36.037 [2024-10-13 01:46:21.372243] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.037 [2024-10-13 01:46:21.372740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.037 [2024-10-13 01:46:21.372785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:36.037 [2024-10-13 01:46:21.372804] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:36.037 [2024-10-13 01:46:21.373048] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:36.037 [2024-10-13 01:46:21.373263] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.037 [2024-10-13 01:46:21.373285] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.037 [2024-10-13 01:46:21.373302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
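The app_setup_trace notices above spell out how to inspect the tracepoint buffer while the target is running (tracepoint group mask 0xFFFF was enabled via -e). Assuming the spdk_trace tool built next to the target binaries, the two options the log mentions look like:

    # live snapshot of the trace buffer of the app named "nvmf" with shm-id 0
    ./build/bin/spdk_trace -s nvmf -i 0
    # or keep the shared-memory trace file for offline analysis, as suggested in the log
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0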
00:35:36.037 [2024-10-13 01:46:21.376563] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:36.037 [2024-10-13 01:46:21.385967] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.037 [2024-10-13 01:46:21.386508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.037 [2024-10-13 01:46:21.386557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:36.037 [2024-10-13 01:46:21.386577] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:36.037 [2024-10-13 01:46:21.386815] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:36.037 [2024-10-13 01:46:21.387032] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.037 [2024-10-13 01:46:21.387054] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.037 [2024-10-13 01:46:21.387072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.037 [2024-10-13 01:46:21.390261] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:36.037 [2024-10-13 01:46:21.399688] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.037 [2024-10-13 01:46:21.400233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.037 [2024-10-13 01:46:21.400283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:36.037 [2024-10-13 01:46:21.400303] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:36.037 [2024-10-13 01:46:21.400537] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:36.037 [2024-10-13 01:46:21.400761] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.038 [2024-10-13 01:46:21.400799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.038 [2024-10-13 01:46:21.400816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.038 [2024-10-13 01:46:21.404053] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.038 [2024-10-13 01:46:21.413247] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.038 [2024-10-13 01:46:21.413824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.038 [2024-10-13 01:46:21.413876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:36.038 [2024-10-13 01:46:21.413896] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:36.038 [2024-10-13 01:46:21.414133] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:36.038 [2024-10-13 01:46:21.414349] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.038 [2024-10-13 01:46:21.414371] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.038 [2024-10-13 01:46:21.414387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.038 [2024-10-13 01:46:21.417653] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:36.038 [2024-10-13 01:46:21.426838] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.038 [2024-10-13 01:46:21.427335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.038 [2024-10-13 01:46:21.427384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:36.038 [2024-10-13 01:46:21.427404] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:36.038 [2024-10-13 01:46:21.427636] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:36.038 [2024-10-13 01:46:21.427875] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.038 [2024-10-13 01:46:21.427897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.038 [2024-10-13 01:46:21.427914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.038 [2024-10-13 01:46:21.431140] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.038 [2024-10-13 01:46:21.440450] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.038 [2024-10-13 01:46:21.441043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.038 [2024-10-13 01:46:21.441093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:36.038 [2024-10-13 01:46:21.441113] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:36.038 [2024-10-13 01:46:21.441351] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:36.038 [2024-10-13 01:46:21.441597] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.038 [2024-10-13 01:46:21.441620] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.038 [2024-10-13 01:46:21.441637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.038 [2024-10-13 01:46:21.444893] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:36.038 [2024-10-13 01:46:21.454054] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.038 [2024-10-13 01:46:21.454463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.038 [2024-10-13 01:46:21.454500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:36.038 [2024-10-13 01:46:21.454526] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:36.038 [2024-10-13 01:46:21.454751] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:36.038 [2024-10-13 01:46:21.454979] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.038 [2024-10-13 01:46:21.455000] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.038 [2024-10-13 01:46:21.455014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.038 [2024-10-13 01:46:21.458232] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.038 [2024-10-13 01:46:21.467532] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.038 [2024-10-13 01:46:21.467880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.038 [2024-10-13 01:46:21.467910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:36.038 [2024-10-13 01:46:21.467927] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:36.038 [2024-10-13 01:46:21.468142] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:36.038 [2024-10-13 01:46:21.468370] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.038 [2024-10-13 01:46:21.468394] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.038 [2024-10-13 01:46:21.468408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.038 [2024-10-13 01:46:21.471611] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:36.038 01:46:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:36.038 01:46:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:35:36.038 01:46:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:36.038 01:46:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:36.038 [2024-10-13 01:46:21.481083] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.038 01:46:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:36.038 [2024-10-13 01:46:21.481432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.038 [2024-10-13 01:46:21.481463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:36.038 [2024-10-13 01:46:21.481489] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:36.038 [2024-10-13 01:46:21.481704] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:36.038 [2024-10-13 01:46:21.481921] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.038 [2024-10-13 01:46:21.481944] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.038 [2024-10-13 01:46:21.481958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.038 [2024-10-13 01:46:21.485212] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.038 [2024-10-13 01:46:21.494607] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.038 [2024-10-13 01:46:21.494940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.038 [2024-10-13 01:46:21.494970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:36.038 [2024-10-13 01:46:21.494987] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:36.038 [2024-10-13 01:46:21.495222] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:36.038 [2024-10-13 01:46:21.495435] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.038 [2024-10-13 01:46:21.495480] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.038 [2024-10-13 01:46:21.495497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.038 [2024-10-13 01:46:21.498730] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:36.038 01:46:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:36.038 01:46:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:36.038 01:46:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.038 01:46:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:36.038 [2024-10-13 01:46:21.508080] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.038 [2024-10-13 01:46:21.508445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.038 [2024-10-13 01:46:21.508482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:36.038 [2024-10-13 01:46:21.508501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:36.038 [2024-10-13 01:46:21.508716] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:36.038 [2024-10-13 01:46:21.508944] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.038 [2024-10-13 01:46:21.508966] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.038 [2024-10-13 01:46:21.508981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.038 [2024-10-13 01:46:21.509351] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:36.038 [2024-10-13 01:46:21.512201] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
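The rpc_cmd wrapper used in bdevperf.sh@17 above forwards its arguments to scripts/rpc.py on the target's RPC socket, so the transport-creation step is roughly equivalent to the call below (the "*** TCP Transport Init ***" notice confirms it succeeded); the socket path is an assumption based on the earlier waitforlisten line:

    # create the NVMe-oF TCP transport with the same options the test passes
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192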
00:35:36.038 01:46:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.038 01:46:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:36.038 01:46:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.038 01:46:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:36.038 [2024-10-13 01:46:21.521721] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.038 [2024-10-13 01:46:21.522110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.038 [2024-10-13 01:46:21.522141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:36.038 [2024-10-13 01:46:21.522159] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:36.038 [2024-10-13 01:46:21.522391] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:36.038 [2024-10-13 01:46:21.522645] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.038 [2024-10-13 01:46:21.522668] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.038 [2024-10-13 01:46:21.522684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.039 [2024-10-13 01:46:21.525934] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:36.039 [2024-10-13 01:46:21.535183] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.039 [2024-10-13 01:46:21.535549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.039 [2024-10-13 01:46:21.535580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:36.039 [2024-10-13 01:46:21.535598] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:36.039 [2024-10-13 01:46:21.535831] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:36.039 [2024-10-13 01:46:21.536053] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.039 [2024-10-13 01:46:21.536076] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.039 [2024-10-13 01:46:21.536090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.039 [2024-10-13 01:46:21.539255] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:36.039 [2024-10-13 01:46:21.548736] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.039 [2024-10-13 01:46:21.549128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.039 [2024-10-13 01:46:21.549158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:36.039 [2024-10-13 01:46:21.549175] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:36.039 [2024-10-13 01:46:21.549390] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:36.039 [2024-10-13 01:46:21.549647] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.039 [2024-10-13 01:46:21.549671] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.039 [2024-10-13 01:46:21.549686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.039 [2024-10-13 01:46:21.552949] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:36.039 Malloc0 00:35:36.039 01:46:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.039 01:46:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:36.039 01:46:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.039 01:46:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:36.039 [2024-10-13 01:46:21.562361] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.039 [2024-10-13 01:46:21.562746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.039 [2024-10-13 01:46:21.562776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:36.039 [2024-10-13 01:46:21.562794] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:36.039 [2024-10-13 01:46:21.563025] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:36.039 [2024-10-13 01:46:21.563254] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.039 [2024-10-13 01:46:21.563277] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.039 [2024-10-13 01:46:21.563292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:35:36.039 01:46:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.039 01:46:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:36.039 01:46:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.039 01:46:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:36.039 [2024-10-13 01:46:21.566563] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:36.039 01:46:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.039 01:46:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:36.039 01:46:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:36.039 01:46:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:36.039 [2024-10-13 01:46:21.575933] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.039 [2024-10-13 01:46:21.576331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.039 [2024-10-13 01:46:21.576359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b8db0 with addr=10.0.0.2, port=4420 00:35:36.039 [2024-10-13 01:46:21.576375] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b8db0 is same with the state(6) to be set 00:35:36.039 [2024-10-13 01:46:21.576598] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5b8db0 (9): Bad file descriptor 00:35:36.039 [2024-10-13 01:46:21.576847] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.039 [2024-10-13 01:46:21.576868] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.039 [2024-10-13 01:46:21.576882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.039 [2024-10-13 01:46:21.576950] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:36.039 [2024-10-13 01:46:21.580082] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:36.039 01:46:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:36.039 01:46:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1759475 00:35:36.039 [2024-10-13 01:46:21.589412] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:36.297 3753.33 IOPS, 14.66 MiB/s [2024-10-12T23:46:21.875Z] [2024-10-13 01:46:21.755115] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
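Steps bdevperf.sh@18 through @21 above assemble the actual target configuration: a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1, the bdev attached as a namespace, and a TCP listener on 10.0.0.2:4420 (the "NVMe/TCP Target Listening" notice). Spelled out as plain rpc.py calls, again assuming the default socket, the sequence is approximately:

    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB backing bdev, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001                                               # allow any host, fixed serial
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0  # attach the bdev (NSID auto-assigned)
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420                                             # listen on the test address

Once the listener is up, the initiator's retry loop finally connects ("Resetting controller successful") and bdevperf starts reporting per-second IOPS for the rest of the 15-second run.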
00:35:38.164 4267.14 IOPS, 16.67 MiB/s [2024-10-12T23:46:24.676Z] 4833.38 IOPS, 18.88 MiB/s [2024-10-12T23:46:26.050Z] 5255.67 IOPS, 20.53 MiB/s [2024-10-12T23:46:26.674Z] 5594.10 IOPS, 21.85 MiB/s [2024-10-12T23:46:27.632Z] 5881.82 IOPS, 22.98 MiB/s [2024-10-12T23:46:29.004Z] 6121.58 IOPS, 23.91 MiB/s [2024-10-12T23:46:29.936Z] 6323.15 IOPS, 24.70 MiB/s [2024-10-12T23:46:30.870Z] 6491.93 IOPS, 25.36 MiB/s [2024-10-12T23:46:30.870Z] 6632.27 IOPS, 25.91 MiB/s 00:35:45.292 Latency(us) 00:35:45.292 [2024-10-12T23:46:30.870Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:45.292 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:45.292 Verification LBA range: start 0x0 length 0x4000 00:35:45.292 Nvme1n1 : 15.01 6633.66 25.91 9137.90 0.00 8091.06 843.47 16893.72 00:35:45.292 [2024-10-12T23:46:30.870Z] =================================================================================================================== 00:35:45.292 [2024-10-12T23:46:30.870Z] Total : 6633.66 25.91 9137.90 0.00 8091.06 843.47 16893.72 00:35:45.292 01:46:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:35:45.292 01:46:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:45.292 01:46:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.292 01:46:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:45.292 01:46:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.292 01:46:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:35:45.292 01:46:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:35:45.292 01:46:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:45.292 01:46:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:35:45.292 01:46:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:45.292 01:46:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:35:45.292 01:46:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:45.292 01:46:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:45.292 rmmod nvme_tcp 00:35:45.292 rmmod nvme_fabrics 00:35:45.550 rmmod nvme_keyring 00:35:45.550 01:46:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:45.550 01:46:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:35:45.550 01:46:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:35:45.550 01:46:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@515 -- # '[' -n 1760142 ']' 00:35:45.550 01:46:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # killprocess 1760142 00:35:45.550 01:46:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 1760142 ']' 00:35:45.550 01:46:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 1760142 00:35:45.550 01:46:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:35:45.550 01:46:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:45.550 01:46:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1760142 
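The closing bdevperf line is internally consistent: 6633.66 IOPS of 4096-byte verify I/O over the 15.01 s runtime works out to 6633.66 x 4096 B, roughly 27.2 MB/s or 25.91 MiB/s, matching the MiB/s column. After the run the trap handler tears the environment down again; a condensed sketch of what the killprocess/nvmftestfini trace around this point performs, with the pid and interface name taken from this log:

    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # remove the subsystem first
    kill 1760142 && wait 1760142                                        # stop the nvmf_tgt started earlier (pid from the log)
    modprobe -v -r nvme-tcp nvme-fabrics                                # unload the initiator-side kernel modules
    iptables-save | grep -v SPDK_NVMF | iptables-restore                # drop the SPDK test firewall rules
    ip -4 addr flush cvl_0_1                                            # clear the test interface address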
00:35:45.550 01:46:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:45.550 01:46:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:45.550 01:46:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1760142' 00:35:45.550 killing process with pid 1760142 00:35:45.550 01:46:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 1760142 00:35:45.550 01:46:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 1760142 00:35:45.809 01:46:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:35:45.809 01:46:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:45.809 01:46:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:45.809 01:46:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:35:45.809 01:46:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-save 00:35:45.809 01:46:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:45.809 01:46:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-restore 00:35:45.809 01:46:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:45.809 01:46:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:45.809 01:46:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:45.809 01:46:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:45.809 01:46:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:47.711 01:46:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:47.711 00:35:47.711 real 0m22.407s 00:35:47.711 user 0m59.962s 00:35:47.711 sys 0m4.159s 00:35:47.711 01:46:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:47.711 01:46:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:47.711 ************************************ 00:35:47.711 END TEST nvmf_bdevperf 00:35:47.711 ************************************ 00:35:47.711 01:46:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:47.711 01:46:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:47.711 01:46:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:47.711 01:46:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:47.711 ************************************ 00:35:47.711 START TEST nvmf_target_disconnect 00:35:47.711 ************************************ 00:35:47.711 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:47.970 * Looking for test storage... 
00:35:47.970 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:47.970 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:47.970 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:35:47.970 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:47.970 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:47.970 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:47.970 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:47.970 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:47.970 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:35:47.970 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:35:47.970 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:35:47.970 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:35:47.970 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:35:47.970 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:35:47.970 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:35:47.970 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:47.970 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:35:47.970 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:35:47.970 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:47.970 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:47.970 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:35:47.970 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:35:47.970 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:47.970 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:35:47.970 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:35:47.970 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:35:47.970 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:35:47.970 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:47.970 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:35:47.970 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:35:47.970 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:47.970 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:47.970 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:35:47.970 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:47.970 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:47.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:47.970 --rc genhtml_branch_coverage=1 00:35:47.970 --rc genhtml_function_coverage=1 00:35:47.970 --rc genhtml_legend=1 00:35:47.970 --rc geninfo_all_blocks=1 00:35:47.970 --rc geninfo_unexecuted_blocks=1 00:35:47.970 00:35:47.970 ' 00:35:47.970 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:47.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:47.970 --rc genhtml_branch_coverage=1 00:35:47.970 --rc genhtml_function_coverage=1 00:35:47.970 --rc genhtml_legend=1 00:35:47.970 --rc geninfo_all_blocks=1 00:35:47.970 --rc geninfo_unexecuted_blocks=1 00:35:47.970 00:35:47.970 ' 00:35:47.970 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:47.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:47.971 --rc genhtml_branch_coverage=1 00:35:47.971 --rc genhtml_function_coverage=1 00:35:47.971 --rc genhtml_legend=1 00:35:47.971 --rc geninfo_all_blocks=1 00:35:47.971 --rc geninfo_unexecuted_blocks=1 00:35:47.971 00:35:47.971 ' 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:47.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:47.971 --rc genhtml_branch_coverage=1 00:35:47.971 --rc genhtml_function_coverage=1 00:35:47.971 --rc genhtml_legend=1 00:35:47.971 --rc geninfo_all_blocks=1 00:35:47.971 --rc geninfo_unexecuted_blocks=1 00:35:47.971 00:35:47.971 ' 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:47.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:35:47.971 01:46:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:49.874 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:49.874 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:49.874 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:49.874 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
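For reference, the nvmf_tcp_init steps that the trace walks through next build the physical-NIC ("phy") TCP topology used for the rest of this run: the first detected E810 port is moved into its own network namespace and becomes the target side, while the second port stays in the root namespace as the initiator side. Condensed from the trace below (interface names, namespace name, addresses and the 4420 port are exactly the values this run uses; this is only a recap of the commands shown, not the full helper):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                                        # target side gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address (inside the namespace)
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in on the initiator port
  ping -c 1 10.0.0.2                                                  # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator sanity check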
00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:49.874 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:49.875 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:49.875 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:49.875 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:49.875 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:49.875 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:50.135 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:50.135 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:50.135 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:50.135 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:50.135 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:50.135 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:50.135 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:50.135 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:50.135 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:50.135 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:35:50.135 00:35:50.135 --- 10.0.0.2 ping statistics --- 00:35:50.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:50.135 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:35:50.135 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:50.135 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:50.135 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:35:50.135 00:35:50.135 --- 10.0.0.1 ping statistics --- 00:35:50.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:50.135 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:35:50.135 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:50.135 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # return 0 00:35:50.135 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:35:50.135 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:50.135 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:50.135 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:50.135 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:50.135 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:50.135 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:50.135 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:35:50.135 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:50.135 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:50.135 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:50.135 ************************************ 00:35:50.135 START TEST nvmf_target_disconnect_tc1 00:35:50.135 ************************************ 00:35:50.135 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:35:50.135 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:50.135 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:35:50.135 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:50.135 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:50.135 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:50.135 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:50.135 01:46:35 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:50.135 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:50.135 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:50.135 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:50.135 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:35:50.135 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:50.135 [2024-10-13 01:46:35.671877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.135 [2024-10-13 01:46:35.671961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10a5ac0 with addr=10.0.0.2, port=4420 00:35:50.135 [2024-10-13 01:46:35.672006] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:35:50.135 [2024-10-13 01:46:35.672036] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:50.135 [2024-10-13 01:46:35.672052] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:35:50.135 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:35:50.135 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:35:50.135 Initializing NVMe Controllers 00:35:50.135 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:35:50.135 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:50.135 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:50.135 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:50.135 00:35:50.135 real 0m0.100s 00:35:50.135 user 0m0.047s 00:35:50.135 sys 0m0.053s 00:35:50.135 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:50.135 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:35:50.135 ************************************ 00:35:50.135 END TEST nvmf_target_disconnect_tc1 00:35:50.135 ************************************ 00:35:50.135 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:35:50.135 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:50.135 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:35:50.135 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:50.393 ************************************ 00:35:50.393 START TEST nvmf_target_disconnect_tc2 00:35:50.393 ************************************ 00:35:50.393 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:35:50.393 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:35:50.393 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:35:50.393 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:50.393 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:50.393 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:50.393 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=1763301 00:35:50.393 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:35:50.393 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 1763301 00:35:50.393 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1763301 ']' 00:35:50.393 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:50.393 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:50.393 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:50.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:50.393 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:50.393 01:46:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:50.394 [2024-10-13 01:46:35.786210] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:35:50.394 [2024-10-13 01:46:35.786299] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:50.394 [2024-10-13 01:46:35.849563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:50.394 [2024-10-13 01:46:35.894996] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:50.394 [2024-10-13 01:46:35.895056] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:35:50.394 [2024-10-13 01:46:35.895085] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:50.394 [2024-10-13 01:46:35.895097] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:50.394 [2024-10-13 01:46:35.895107] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:50.394 [2024-10-13 01:46:35.896593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:50.394 [2024-10-13 01:46:35.896655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:50.394 [2024-10-13 01:46:35.896721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:35:50.394 [2024-10-13 01:46:35.896725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:50.652 01:46:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:50.652 01:46:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:35:50.652 01:46:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:50.652 01:46:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:50.652 01:46:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:50.652 01:46:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:50.652 01:46:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:50.652 01:46:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.652 01:46:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:50.652 Malloc0 00:35:50.652 01:46:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.652 01:46:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:35:50.652 01:46:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.652 01:46:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:50.652 [2024-10-13 01:46:36.084070] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:50.652 01:46:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.652 01:46:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:50.652 01:46:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.652 01:46:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:50.652 01:46:36 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.652 01:46:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:50.652 01:46:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.652 01:46:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:50.652 01:46:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.652 01:46:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:50.652 01:46:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.652 01:46:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:50.652 [2024-10-13 01:46:36.112348] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:50.652 01:46:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.652 01:46:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:50.652 01:46:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:50.652 01:46:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:50.652 01:46:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:50.652 01:46:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1763330 00:35:50.652 01:46:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:50.652 01:46:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:35:52.550 01:46:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1763301 00:35:52.550 01:46:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error 
(sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Write completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Write completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Write completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Write completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Write completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Write completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 [2024-10-13 01:46:38.136546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 
00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Write completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Write completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Write completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Write completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 [2024-10-13 01:46:38.136831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.817 starting I/O failed 00:35:52.817 Read completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Read completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Read completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Read completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Read completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Read completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Write completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Write completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Read completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Write completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Read completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Read completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Write completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Read completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Read completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Read completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Read completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Write completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Read completed with error (sct=0, sc=8) 00:35:52.818 starting 
I/O failed 00:35:52.818 Write completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Read completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Read completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Write completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Write completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Write completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Read completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Read completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Read completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Write completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Read completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 [2024-10-13 01:46:38.137149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:35:52.818 Read completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Read completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Read completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Read completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Read completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Read completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Read completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Read completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Read completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Write completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Write completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Write completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Read completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Write completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Write completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Write completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Write completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Read completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Read completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Read completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Write completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Write completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Read completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Write completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Read completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Write completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Write completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Read completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 
00:35:52.818 Read completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Write completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Read completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 Write completed with error (sct=0, sc=8) 00:35:52.818 starting I/O failed 00:35:52.818 [2024-10-13 01:46:38.137479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:52.818 [2024-10-13 01:46:38.137649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.818 [2024-10-13 01:46:38.137698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.818 qpair failed and we were unable to recover it. 00:35:52.818 [2024-10-13 01:46:38.137826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.818 [2024-10-13 01:46:38.137854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.818 qpair failed and we were unable to recover it. 00:35:52.818 [2024-10-13 01:46:38.137977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.818 [2024-10-13 01:46:38.138004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.818 qpair failed and we were unable to recover it. 00:35:52.818 [2024-10-13 01:46:38.138129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.818 [2024-10-13 01:46:38.138156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.818 qpair failed and we were unable to recover it. 00:35:52.818 [2024-10-13 01:46:38.138233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.818 [2024-10-13 01:46:38.138259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.818 qpair failed and we were unable to recover it. 00:35:52.818 [2024-10-13 01:46:38.138357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.818 [2024-10-13 01:46:38.138402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.818 qpair failed and we were unable to recover it. 00:35:52.818 [2024-10-13 01:46:38.138538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.818 [2024-10-13 01:46:38.138565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.818 qpair failed and we were unable to recover it. 00:35:52.818 [2024-10-13 01:46:38.138686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.818 [2024-10-13 01:46:38.138712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.818 qpair failed and we were unable to recover it. 
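A short decoding note for the burst above, hedged because it is inferred from the messages rather than stated in the log: completions reported with sct=0, sc=8 correspond to the NVMe generic status "Command Aborted due to SQ Deletion", which is what the host driver uses to fail I/O still queued on a qpair it is tearing down, and the repeated "connect() failed, errno = 111" lines are ECONNREFUSED from the reconnect attempts, expected here because tc2 killed the target process (pid 1763301) with kill -9 two seconds after starting the workload. To quantify the burst from a saved copy of this console output, a throwaway tally such as the following works (the log file name is an assumption):

  LOG=nvmf_target_disconnect_console.log                              # hypothetical saved copy of this output
  grep -c 'completed with error (sct=0, sc=8)' "$LOG"                 # aborted read/write completions
  grep -c 'connect() failed, errno = 111' "$LOG"                      # refused reconnect attempts
  grep -o 'CQ transport error -6 (No such device or address) on qpair id [0-9]*' "$LOG" | sort | uniq -c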
00:35:52.818 [2024-10-13 01:46:38.138829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.818 [2024-10-13 01:46:38.138855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.818 qpair failed and we were unable to recover it. 00:35:52.818 [2024-10-13 01:46:38.138949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.818 [2024-10-13 01:46:38.138975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.818 qpair failed and we were unable to recover it. 00:35:52.818 [2024-10-13 01:46:38.139118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.818 [2024-10-13 01:46:38.139144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.818 qpair failed and we were unable to recover it. 00:35:52.818 [2024-10-13 01:46:38.139258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.818 [2024-10-13 01:46:38.139305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.818 qpair failed and we were unable to recover it. 00:35:52.818 [2024-10-13 01:46:38.139429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.818 [2024-10-13 01:46:38.139461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.818 qpair failed and we were unable to recover it. 00:35:52.818 [2024-10-13 01:46:38.139566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.818 [2024-10-13 01:46:38.139592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.818 qpair failed and we were unable to recover it. 00:35:52.818 [2024-10-13 01:46:38.139682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.818 [2024-10-13 01:46:38.139708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.818 qpair failed and we were unable to recover it. 00:35:52.818 [2024-10-13 01:46:38.139858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.818 [2024-10-13 01:46:38.139884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.818 qpair failed and we were unable to recover it. 00:35:52.818 [2024-10-13 01:46:38.140027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.819 [2024-10-13 01:46:38.140054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.819 qpair failed and we were unable to recover it. 00:35:52.819 [2024-10-13 01:46:38.140154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.819 [2024-10-13 01:46:38.140181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.819 qpair failed and we were unable to recover it. 
00:35:52.819 [2024-10-13 01:46:38.140303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.819 [2024-10-13 01:46:38.140329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.819 qpair failed and we were unable to recover it. 00:35:52.819 [2024-10-13 01:46:38.140447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.819 [2024-10-13 01:46:38.140478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.819 qpair failed and we were unable to recover it. 00:35:52.819 [2024-10-13 01:46:38.140579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.819 [2024-10-13 01:46:38.140606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.819 qpair failed and we were unable to recover it. 00:35:52.819 [2024-10-13 01:46:38.140732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.819 [2024-10-13 01:46:38.140792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.819 qpair failed and we were unable to recover it. 00:35:52.819 [2024-10-13 01:46:38.140923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.819 [2024-10-13 01:46:38.140951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.819 qpair failed and we were unable to recover it. 00:35:52.819 [2024-10-13 01:46:38.141093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.819 [2024-10-13 01:46:38.141123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.819 qpair failed and we were unable to recover it. 00:35:52.819 [2024-10-13 01:46:38.141241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.819 [2024-10-13 01:46:38.141267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.819 qpair failed and we were unable to recover it. 00:35:52.819 [2024-10-13 01:46:38.141348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.819 [2024-10-13 01:46:38.141372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.819 qpair failed and we were unable to recover it. 00:35:52.819 [2024-10-13 01:46:38.141452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.819 [2024-10-13 01:46:38.141484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.819 qpair failed and we were unable to recover it. 00:35:52.819 [2024-10-13 01:46:38.141570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.819 [2024-10-13 01:46:38.141595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.819 qpair failed and we were unable to recover it. 
00:35:52.819 [2024-10-13 01:46:38.141712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.819 [2024-10-13 01:46:38.141736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.819 qpair failed and we were unable to recover it. 00:35:52.819 [2024-10-13 01:46:38.141819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.819 [2024-10-13 01:46:38.141852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.819 qpair failed and we were unable to recover it. 00:35:52.819 [2024-10-13 01:46:38.141965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.819 [2024-10-13 01:46:38.141991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.819 qpair failed and we were unable to recover it. 00:35:52.819 [2024-10-13 01:46:38.142116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.819 [2024-10-13 01:46:38.142142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.819 qpair failed and we were unable to recover it. 00:35:52.819 [2024-10-13 01:46:38.142237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.819 [2024-10-13 01:46:38.142278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.819 qpair failed and we were unable to recover it. 00:35:52.819 [2024-10-13 01:46:38.142369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.819 [2024-10-13 01:46:38.142396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.819 qpair failed and we were unable to recover it. 00:35:52.819 [2024-10-13 01:46:38.142518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.819 [2024-10-13 01:46:38.142558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.819 qpair failed and we were unable to recover it. 00:35:52.819 [2024-10-13 01:46:38.142654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.819 [2024-10-13 01:46:38.142681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.819 qpair failed and we were unable to recover it. 00:35:52.819 [2024-10-13 01:46:38.142795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.819 [2024-10-13 01:46:38.142821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.819 qpair failed and we were unable to recover it. 00:35:52.819 [2024-10-13 01:46:38.142908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.819 [2024-10-13 01:46:38.142933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.819 qpair failed and we were unable to recover it. 
00:35:52.824 [2024-10-13 01:46:38.173124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.824 [2024-10-13 01:46:38.173151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.824 qpair failed and we were unable to recover it. 00:35:52.824 [2024-10-13 01:46:38.173241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.824 [2024-10-13 01:46:38.173268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.824 qpair failed and we were unable to recover it. 00:35:52.824 [2024-10-13 01:46:38.173373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.824 [2024-10-13 01:46:38.173414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.824 qpair failed and we were unable to recover it. 00:35:52.824 [2024-10-13 01:46:38.173547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.824 [2024-10-13 01:46:38.173577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.824 qpair failed and we were unable to recover it. 00:35:52.824 [2024-10-13 01:46:38.173747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.824 [2024-10-13 01:46:38.173797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.824 qpair failed and we were unable to recover it. 00:35:52.824 [2024-10-13 01:46:38.173949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.824 [2024-10-13 01:46:38.173997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.824 qpair failed and we were unable to recover it. 00:35:52.824 [2024-10-13 01:46:38.174162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.824 [2024-10-13 01:46:38.174222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.824 qpair failed and we were unable to recover it. 00:35:52.824 [2024-10-13 01:46:38.174308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.824 [2024-10-13 01:46:38.174337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.824 qpair failed and we were unable to recover it. 00:35:52.824 [2024-10-13 01:46:38.174486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.825 [2024-10-13 01:46:38.174513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.825 qpair failed and we were unable to recover it. 00:35:52.825 [2024-10-13 01:46:38.174612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.825 [2024-10-13 01:46:38.174639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.825 qpair failed and we were unable to recover it. 
00:35:52.825 [2024-10-13 01:46:38.174768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.825 [2024-10-13 01:46:38.174798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.825 qpair failed and we were unable to recover it. 00:35:52.825 [2024-10-13 01:46:38.174994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.825 [2024-10-13 01:46:38.175020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.825 qpair failed and we were unable to recover it. 00:35:52.825 [2024-10-13 01:46:38.175170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.825 [2024-10-13 01:46:38.175197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.825 qpair failed and we were unable to recover it. 00:35:52.825 [2024-10-13 01:46:38.175355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.825 [2024-10-13 01:46:38.175384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.825 qpair failed and we were unable to recover it. 00:35:52.825 [2024-10-13 01:46:38.175503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.825 [2024-10-13 01:46:38.175530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.825 qpair failed and we were unable to recover it. 00:35:52.825 [2024-10-13 01:46:38.175615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.825 [2024-10-13 01:46:38.175647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.825 qpair failed and we were unable to recover it. 00:35:52.825 [2024-10-13 01:46:38.175814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.825 [2024-10-13 01:46:38.175857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.825 qpair failed and we were unable to recover it. 00:35:52.825 [2024-10-13 01:46:38.175989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.825 [2024-10-13 01:46:38.176019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.825 qpair failed and we were unable to recover it. 00:35:52.825 [2024-10-13 01:46:38.176167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.825 [2024-10-13 01:46:38.176211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.825 qpair failed and we were unable to recover it. 00:35:52.825 [2024-10-13 01:46:38.176296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.825 [2024-10-13 01:46:38.176323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.825 qpair failed and we were unable to recover it. 
00:35:52.825 [2024-10-13 01:46:38.176434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.825 [2024-10-13 01:46:38.176477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.825 qpair failed and we were unable to recover it. 00:35:52.825 [2024-10-13 01:46:38.176569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.825 [2024-10-13 01:46:38.176597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.825 qpair failed and we were unable to recover it. 00:35:52.825 [2024-10-13 01:46:38.176748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.825 [2024-10-13 01:46:38.176781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.825 qpair failed and we were unable to recover it. 00:35:52.825 [2024-10-13 01:46:38.176910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.825 [2024-10-13 01:46:38.176954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.825 qpair failed and we were unable to recover it. 00:35:52.825 [2024-10-13 01:46:38.177069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.825 [2024-10-13 01:46:38.177096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.825 qpair failed and we were unable to recover it. 00:35:52.825 [2024-10-13 01:46:38.177213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.825 [2024-10-13 01:46:38.177240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.825 qpair failed and we were unable to recover it. 00:35:52.825 [2024-10-13 01:46:38.177328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.825 [2024-10-13 01:46:38.177355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.825 qpair failed and we were unable to recover it. 00:35:52.825 [2024-10-13 01:46:38.177476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.825 [2024-10-13 01:46:38.177506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.825 qpair failed and we were unable to recover it. 00:35:52.825 [2024-10-13 01:46:38.177619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.825 [2024-10-13 01:46:38.177646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.825 qpair failed and we were unable to recover it. 00:35:52.825 [2024-10-13 01:46:38.177785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.825 [2024-10-13 01:46:38.177812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.825 qpair failed and we were unable to recover it. 
00:35:52.825 [2024-10-13 01:46:38.177909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.825 [2024-10-13 01:46:38.177936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.825 qpair failed and we were unable to recover it. 00:35:52.825 [2024-10-13 01:46:38.178048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.825 [2024-10-13 01:46:38.178074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.825 qpair failed and we were unable to recover it. 00:35:52.825 [2024-10-13 01:46:38.178189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.825 [2024-10-13 01:46:38.178215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.825 qpair failed and we were unable to recover it. 00:35:52.825 [2024-10-13 01:46:38.178328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.825 [2024-10-13 01:46:38.178355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.825 qpair failed and we were unable to recover it. 00:35:52.825 [2024-10-13 01:46:38.178493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.825 [2024-10-13 01:46:38.178520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.825 qpair failed and we were unable to recover it. 00:35:52.825 [2024-10-13 01:46:38.178600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.825 [2024-10-13 01:46:38.178627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.825 qpair failed and we were unable to recover it. 00:35:52.825 [2024-10-13 01:46:38.178740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.825 [2024-10-13 01:46:38.178776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.825 qpair failed and we were unable to recover it. 00:35:52.825 [2024-10-13 01:46:38.178919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.825 [2024-10-13 01:46:38.178946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.825 qpair failed and we were unable to recover it. 00:35:52.825 [2024-10-13 01:46:38.179056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.825 [2024-10-13 01:46:38.179083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.825 qpair failed and we were unable to recover it. 00:35:52.825 [2024-10-13 01:46:38.179192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.825 [2024-10-13 01:46:38.179219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.825 qpair failed and we were unable to recover it. 
00:35:52.825 [2024-10-13 01:46:38.179347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.825 [2024-10-13 01:46:38.179387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.825 qpair failed and we were unable to recover it. 00:35:52.825 [2024-10-13 01:46:38.179491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.825 [2024-10-13 01:46:38.179521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.825 qpair failed and we were unable to recover it. 00:35:52.825 [2024-10-13 01:46:38.179641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.825 [2024-10-13 01:46:38.179669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.825 qpair failed and we were unable to recover it. 00:35:52.825 [2024-10-13 01:46:38.179793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.825 [2024-10-13 01:46:38.179820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.825 qpair failed and we were unable to recover it. 00:35:52.825 [2024-10-13 01:46:38.179943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.826 [2024-10-13 01:46:38.179982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.826 qpair failed and we were unable to recover it. 00:35:52.826 [2024-10-13 01:46:38.180076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.826 [2024-10-13 01:46:38.180104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.826 qpair failed and we were unable to recover it. 00:35:52.826 [2024-10-13 01:46:38.180255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.826 [2024-10-13 01:46:38.180282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.826 qpair failed and we were unable to recover it. 00:35:52.826 [2024-10-13 01:46:38.180395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.826 [2024-10-13 01:46:38.180421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.826 qpair failed and we were unable to recover it. 00:35:52.826 [2024-10-13 01:46:38.180515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.826 [2024-10-13 01:46:38.180543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.826 qpair failed and we were unable to recover it. 00:35:52.826 [2024-10-13 01:46:38.180682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.826 [2024-10-13 01:46:38.180708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.826 qpair failed and we were unable to recover it. 
00:35:52.826 [2024-10-13 01:46:38.180832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.826 [2024-10-13 01:46:38.180859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.826 qpair failed and we were unable to recover it. 00:35:52.826 [2024-10-13 01:46:38.180948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.826 [2024-10-13 01:46:38.180975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.826 qpair failed and we were unable to recover it. 00:35:52.826 [2024-10-13 01:46:38.181113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.826 [2024-10-13 01:46:38.181145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.826 qpair failed and we were unable to recover it. 00:35:52.826 [2024-10-13 01:46:38.181273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.826 [2024-10-13 01:46:38.181303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.826 qpair failed and we were unable to recover it. 00:35:52.826 [2024-10-13 01:46:38.181459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.826 [2024-10-13 01:46:38.181495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.826 qpair failed and we were unable to recover it. 00:35:52.826 [2024-10-13 01:46:38.181592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.826 [2024-10-13 01:46:38.181640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.826 qpair failed and we were unable to recover it. 00:35:52.826 [2024-10-13 01:46:38.181776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.826 [2024-10-13 01:46:38.181817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.826 qpair failed and we were unable to recover it. 00:35:52.826 [2024-10-13 01:46:38.181943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.826 [2024-10-13 01:46:38.181972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.826 qpair failed and we were unable to recover it. 00:35:52.826 [2024-10-13 01:46:38.182087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.826 [2024-10-13 01:46:38.182114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.826 qpair failed and we were unable to recover it. 00:35:52.826 [2024-10-13 01:46:38.182199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.826 [2024-10-13 01:46:38.182226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.826 qpair failed and we were unable to recover it. 
00:35:52.826 [2024-10-13 01:46:38.182310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.826 [2024-10-13 01:46:38.182337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.826 qpair failed and we were unable to recover it. 00:35:52.826 [2024-10-13 01:46:38.182420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.826 [2024-10-13 01:46:38.182447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.826 qpair failed and we were unable to recover it. 00:35:52.826 [2024-10-13 01:46:38.182544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.826 [2024-10-13 01:46:38.182572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.826 qpair failed and we were unable to recover it. 00:35:52.826 [2024-10-13 01:46:38.182716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.826 [2024-10-13 01:46:38.182742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.826 qpair failed and we were unable to recover it. 00:35:52.826 [2024-10-13 01:46:38.182846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.826 [2024-10-13 01:46:38.182874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.826 qpair failed and we were unable to recover it. 00:35:52.826 [2024-10-13 01:46:38.183027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.826 [2024-10-13 01:46:38.183054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.826 qpair failed and we were unable to recover it. 00:35:52.826 [2024-10-13 01:46:38.183178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.826 [2024-10-13 01:46:38.183205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.826 qpair failed and we were unable to recover it. 00:35:52.826 [2024-10-13 01:46:38.183299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.826 [2024-10-13 01:46:38.183341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.826 qpair failed and we were unable to recover it. 00:35:52.826 [2024-10-13 01:46:38.183462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.826 [2024-10-13 01:46:38.183495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.826 qpair failed and we were unable to recover it. 00:35:52.826 [2024-10-13 01:46:38.183612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.826 [2024-10-13 01:46:38.183640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.826 qpair failed and we were unable to recover it. 
00:35:52.826 [2024-10-13 01:46:38.183763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.826 [2024-10-13 01:46:38.183792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.826 qpair failed and we were unable to recover it. 00:35:52.826 [2024-10-13 01:46:38.183884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.826 [2024-10-13 01:46:38.183913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.826 qpair failed and we were unable to recover it. 00:35:52.826 [2024-10-13 01:46:38.183995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.826 [2024-10-13 01:46:38.184023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.826 qpair failed and we were unable to recover it. 00:35:52.826 [2024-10-13 01:46:38.184138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.826 [2024-10-13 01:46:38.184179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.826 qpair failed and we were unable to recover it. 00:35:52.826 [2024-10-13 01:46:38.184267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.826 [2024-10-13 01:46:38.184292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.826 qpair failed and we were unable to recover it. 00:35:52.827 [2024-10-13 01:46:38.184405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.827 [2024-10-13 01:46:38.184431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.827 qpair failed and we were unable to recover it. 00:35:52.827 [2024-10-13 01:46:38.184534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.827 [2024-10-13 01:46:38.184575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.827 qpair failed and we were unable to recover it. 00:35:52.827 [2024-10-13 01:46:38.184676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.827 [2024-10-13 01:46:38.184705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.827 qpair failed and we were unable to recover it. 00:35:52.827 [2024-10-13 01:46:38.184825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.827 [2024-10-13 01:46:38.184852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.827 qpair failed and we were unable to recover it. 00:35:52.827 [2024-10-13 01:46:38.184982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.827 [2024-10-13 01:46:38.185012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.827 qpair failed and we were unable to recover it. 
00:35:52.827 [2024-10-13 01:46:38.185199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.827 [2024-10-13 01:46:38.185244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.827 qpair failed and we were unable to recover it. 00:35:52.827 [2024-10-13 01:46:38.185355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.827 [2024-10-13 01:46:38.185382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.827 qpair failed and we were unable to recover it. 00:35:52.827 [2024-10-13 01:46:38.185486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.827 [2024-10-13 01:46:38.185519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.827 qpair failed and we were unable to recover it. 00:35:52.827 [2024-10-13 01:46:38.185628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.827 [2024-10-13 01:46:38.185658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.827 qpair failed and we were unable to recover it. 00:35:52.827 [2024-10-13 01:46:38.185809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.827 [2024-10-13 01:46:38.185835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.827 qpair failed and we were unable to recover it. 00:35:52.827 [2024-10-13 01:46:38.185976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.827 [2024-10-13 01:46:38.186021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.827 qpair failed and we were unable to recover it. 00:35:52.827 [2024-10-13 01:46:38.186184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.827 [2024-10-13 01:46:38.186212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.827 qpair failed and we were unable to recover it. 00:35:52.827 [2024-10-13 01:46:38.186323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.827 [2024-10-13 01:46:38.186349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.827 qpair failed and we were unable to recover it. 00:35:52.827 [2024-10-13 01:46:38.186514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.827 [2024-10-13 01:46:38.186543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.827 qpair failed and we were unable to recover it. 00:35:52.827 [2024-10-13 01:46:38.186721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.827 [2024-10-13 01:46:38.186747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.827 qpair failed and we were unable to recover it. 
00:35:52.827 [2024-10-13 01:46:38.186859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.827 [2024-10-13 01:46:38.186884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.827 qpair failed and we were unable to recover it. 00:35:52.827 [2024-10-13 01:46:38.187007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.827 [2024-10-13 01:46:38.187035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.827 qpair failed and we were unable to recover it. 00:35:52.827 [2024-10-13 01:46:38.187160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.827 [2024-10-13 01:46:38.187187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.827 qpair failed and we were unable to recover it. 00:35:52.827 [2024-10-13 01:46:38.187361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.827 [2024-10-13 01:46:38.187386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.827 qpair failed and we were unable to recover it. 00:35:52.827 [2024-10-13 01:46:38.187496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.827 [2024-10-13 01:46:38.187522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.827 qpair failed and we were unable to recover it. 00:35:52.827 [2024-10-13 01:46:38.187604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.827 [2024-10-13 01:46:38.187632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.827 qpair failed and we were unable to recover it. 00:35:52.827 [2024-10-13 01:46:38.187746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.827 [2024-10-13 01:46:38.187773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.827 qpair failed and we were unable to recover it. 00:35:52.827 [2024-10-13 01:46:38.187949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.827 [2024-10-13 01:46:38.187976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.827 qpair failed and we were unable to recover it. 00:35:52.827 [2024-10-13 01:46:38.188118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.827 [2024-10-13 01:46:38.188144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.827 qpair failed and we were unable to recover it. 00:35:52.827 [2024-10-13 01:46:38.188253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.827 [2024-10-13 01:46:38.188280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.827 qpair failed and we were unable to recover it. 
00:35:52.827 [2024-10-13 01:46:38.188394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.827 [2024-10-13 01:46:38.188421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.827 qpair failed and we were unable to recover it. 00:35:52.827 [2024-10-13 01:46:38.188517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.827 [2024-10-13 01:46:38.188545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.827 qpair failed and we were unable to recover it. 00:35:52.827 [2024-10-13 01:46:38.188658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.827 [2024-10-13 01:46:38.188684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.827 qpair failed and we were unable to recover it. 00:35:52.827 [2024-10-13 01:46:38.188810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.827 [2024-10-13 01:46:38.188837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.827 qpair failed and we were unable to recover it. 00:35:52.827 [2024-10-13 01:46:38.188989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.827 [2024-10-13 01:46:38.189018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.827 qpair failed and we were unable to recover it. 00:35:52.827 [2024-10-13 01:46:38.189134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.827 [2024-10-13 01:46:38.189161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.827 qpair failed and we were unable to recover it. 00:35:52.827 [2024-10-13 01:46:38.189251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.827 [2024-10-13 01:46:38.189295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.827 qpair failed and we were unable to recover it. 00:35:52.827 [2024-10-13 01:46:38.189381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.827 [2024-10-13 01:46:38.189407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.827 qpair failed and we were unable to recover it. 00:35:52.827 [2024-10-13 01:46:38.189499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.827 [2024-10-13 01:46:38.189526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.827 qpair failed and we were unable to recover it. 00:35:52.827 [2024-10-13 01:46:38.189617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.827 [2024-10-13 01:46:38.189642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.827 qpair failed and we were unable to recover it. 
00:35:52.827 [2024-10-13 01:46:38.189755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.827 [2024-10-13 01:46:38.189795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.827 qpair failed and we were unable to recover it. 00:35:52.827 [2024-10-13 01:46:38.189901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.827 [2024-10-13 01:46:38.189934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.827 qpair failed and we were unable to recover it. 00:35:52.827 [2024-10-13 01:46:38.190062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.827 [2024-10-13 01:46:38.190091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.827 qpair failed and we were unable to recover it. 00:35:52.827 [2024-10-13 01:46:38.190261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.828 [2024-10-13 01:46:38.190306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.828 qpair failed and we were unable to recover it. 00:35:52.828 [2024-10-13 01:46:38.190423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.828 [2024-10-13 01:46:38.190450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.828 qpair failed and we were unable to recover it. 00:35:52.828 [2024-10-13 01:46:38.190571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.828 [2024-10-13 01:46:38.190598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.828 qpair failed and we were unable to recover it. 00:35:52.828 [2024-10-13 01:46:38.190712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.828 [2024-10-13 01:46:38.190740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.828 qpair failed and we were unable to recover it. 00:35:52.828 [2024-10-13 01:46:38.190878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.828 [2024-10-13 01:46:38.190905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.828 qpair failed and we were unable to recover it. 00:35:52.828 [2024-10-13 01:46:38.190988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.828 [2024-10-13 01:46:38.191014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.828 qpair failed and we were unable to recover it. 00:35:52.828 [2024-10-13 01:46:38.191123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.828 [2024-10-13 01:46:38.191149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.828 qpair failed and we were unable to recover it. 
00:35:52.828 [2024-10-13 01:46:38.191237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.828 [2024-10-13 01:46:38.191264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.828 qpair failed and we were unable to recover it. 00:35:52.828 [2024-10-13 01:46:38.191365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.828 [2024-10-13 01:46:38.191406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.828 qpair failed and we were unable to recover it. 00:35:52.828 [2024-10-13 01:46:38.191527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.828 [2024-10-13 01:46:38.191559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.828 qpair failed and we were unable to recover it. 00:35:52.828 [2024-10-13 01:46:38.191678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.828 [2024-10-13 01:46:38.191704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.828 qpair failed and we were unable to recover it. 00:35:52.828 [2024-10-13 01:46:38.191795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.828 [2024-10-13 01:46:38.191820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.828 qpair failed and we were unable to recover it. 00:35:52.828 [2024-10-13 01:46:38.191902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.828 [2024-10-13 01:46:38.191927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.828 qpair failed and we were unable to recover it. 00:35:52.828 [2024-10-13 01:46:38.192037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.828 [2024-10-13 01:46:38.192063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.828 qpair failed and we were unable to recover it. 00:35:52.828 [2024-10-13 01:46:38.192145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.828 [2024-10-13 01:46:38.192173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.828 qpair failed and we were unable to recover it. 00:35:52.828 [2024-10-13 01:46:38.192321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.828 [2024-10-13 01:46:38.192351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.828 qpair failed and we were unable to recover it. 00:35:52.828 [2024-10-13 01:46:38.192486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.828 [2024-10-13 01:46:38.192533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.828 qpair failed and we were unable to recover it. 
00:35:52.828 [2024-10-13 01:46:38.192686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.828 [2024-10-13 01:46:38.192716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.828 qpair failed and we were unable to recover it. 00:35:52.828 [2024-10-13 01:46:38.192838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.828 [2024-10-13 01:46:38.192867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.828 qpair failed and we were unable to recover it. 00:35:52.828 [2024-10-13 01:46:38.192985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.828 [2024-10-13 01:46:38.193015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.828 qpair failed and we were unable to recover it. 00:35:52.828 [2024-10-13 01:46:38.193171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.828 [2024-10-13 01:46:38.193202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.828 qpair failed and we were unable to recover it. 00:35:52.828 [2024-10-13 01:46:38.193365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.828 [2024-10-13 01:46:38.193393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.828 qpair failed and we were unable to recover it. 00:35:52.828 [2024-10-13 01:46:38.193535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.828 [2024-10-13 01:46:38.193562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.828 qpair failed and we were unable to recover it. 00:35:52.828 [2024-10-13 01:46:38.193707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.828 [2024-10-13 01:46:38.193734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.828 qpair failed and we were unable to recover it. 00:35:52.828 [2024-10-13 01:46:38.193847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.828 [2024-10-13 01:46:38.193874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.828 qpair failed and we were unable to recover it. 00:35:52.828 [2024-10-13 01:46:38.193988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.828 [2024-10-13 01:46:38.194015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.828 qpair failed and we were unable to recover it. 00:35:52.828 [2024-10-13 01:46:38.194110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.828 [2024-10-13 01:46:38.194138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.828 qpair failed and we were unable to recover it. 
00:35:52.828 [2024-10-13 01:46:38.194228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.828 [2024-10-13 01:46:38.194255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.828 qpair failed and we were unable to recover it. 00:35:52.828 [2024-10-13 01:46:38.194372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.828 [2024-10-13 01:46:38.194400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.828 qpair failed and we were unable to recover it. 00:35:52.828 [2024-10-13 01:46:38.194486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.828 [2024-10-13 01:46:38.194514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.828 qpair failed and we were unable to recover it. 00:35:52.828 [2024-10-13 01:46:38.194628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.828 [2024-10-13 01:46:38.194655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.828 qpair failed and we were unable to recover it. 00:35:52.828 [2024-10-13 01:46:38.194766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.828 [2024-10-13 01:46:38.194793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.828 qpair failed and we were unable to recover it. 00:35:52.828 [2024-10-13 01:46:38.194907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.828 [2024-10-13 01:46:38.194935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.828 qpair failed and we were unable to recover it. 00:35:52.828 [2024-10-13 01:46:38.195073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.828 [2024-10-13 01:46:38.195100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.828 qpair failed and we were unable to recover it. 00:35:52.828 [2024-10-13 01:46:38.195241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.828 [2024-10-13 01:46:38.195268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.828 qpair failed and we were unable to recover it. 00:35:52.828 [2024-10-13 01:46:38.195383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.828 [2024-10-13 01:46:38.195411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.828 qpair failed and we were unable to recover it. 00:35:52.828 [2024-10-13 01:46:38.195517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.828 [2024-10-13 01:46:38.195546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.828 qpair failed and we were unable to recover it. 
00:35:52.828 [2024-10-13 01:46:38.195664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.828 [2024-10-13 01:46:38.195689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.828 qpair failed and we were unable to recover it. 00:35:52.828 [2024-10-13 01:46:38.195852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.828 [2024-10-13 01:46:38.195878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.829 qpair failed and we were unable to recover it. 00:35:52.829 [2024-10-13 01:46:38.196018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.829 [2024-10-13 01:46:38.196044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.829 qpair failed and we were unable to recover it. 00:35:52.829 [2024-10-13 01:46:38.196181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.829 [2024-10-13 01:46:38.196206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.829 qpair failed and we were unable to recover it. 00:35:52.829 [2024-10-13 01:46:38.196301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.829 [2024-10-13 01:46:38.196333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.829 qpair failed and we were unable to recover it. 00:35:52.829 [2024-10-13 01:46:38.196453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.829 [2024-10-13 01:46:38.196506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.829 qpair failed and we were unable to recover it. 00:35:52.829 [2024-10-13 01:46:38.196627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.829 [2024-10-13 01:46:38.196655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.829 qpair failed and we were unable to recover it. 00:35:52.829 [2024-10-13 01:46:38.196802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.829 [2024-10-13 01:46:38.196848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.829 qpair failed and we were unable to recover it. 00:35:52.829 [2024-10-13 01:46:38.197055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.829 [2024-10-13 01:46:38.197082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.829 qpair failed and we were unable to recover it. 00:35:52.829 [2024-10-13 01:46:38.197177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.829 [2024-10-13 01:46:38.197207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.829 qpair failed and we were unable to recover it. 
00:35:52.829 [2024-10-13 01:46:38.197372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.829 [2024-10-13 01:46:38.197400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.829 qpair failed and we were unable to recover it. 00:35:52.829 [2024-10-13 01:46:38.197531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.829 [2024-10-13 01:46:38.197572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.829 qpair failed and we were unable to recover it. 00:35:52.829 [2024-10-13 01:46:38.197694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.829 [2024-10-13 01:46:38.197746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.829 qpair failed and we were unable to recover it. 00:35:52.829 [2024-10-13 01:46:38.197878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.829 [2024-10-13 01:46:38.197921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.829 qpair failed and we were unable to recover it. 00:35:52.829 [2024-10-13 01:46:38.198033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.829 [2024-10-13 01:46:38.198060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.829 qpair failed and we were unable to recover it. 00:35:52.829 [2024-10-13 01:46:38.198172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.829 [2024-10-13 01:46:38.198198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.829 qpair failed and we were unable to recover it. 00:35:52.829 [2024-10-13 01:46:38.198290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.829 [2024-10-13 01:46:38.198317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.829 qpair failed and we were unable to recover it. 00:35:52.829 [2024-10-13 01:46:38.198488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.829 [2024-10-13 01:46:38.198527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.829 qpair failed and we were unable to recover it. 00:35:52.829 [2024-10-13 01:46:38.198619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.829 [2024-10-13 01:46:38.198646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.829 qpair failed and we were unable to recover it. 00:35:52.829 [2024-10-13 01:46:38.198780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.829 [2024-10-13 01:46:38.198811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.829 qpair failed and we were unable to recover it. 
00:35:52.829 [2024-10-13 01:46:38.198911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.829 [2024-10-13 01:46:38.198939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.829 qpair failed and we were unable to recover it. 00:35:52.829 [2024-10-13 01:46:38.199095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.829 [2024-10-13 01:46:38.199134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.829 qpair failed and we were unable to recover it. 00:35:52.829 [2024-10-13 01:46:38.199255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.829 [2024-10-13 01:46:38.199283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.829 qpair failed and we were unable to recover it. 00:35:52.829 [2024-10-13 01:46:38.199426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.829 [2024-10-13 01:46:38.199454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.829 qpair failed and we were unable to recover it. 00:35:52.829 [2024-10-13 01:46:38.199581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.829 [2024-10-13 01:46:38.199609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.829 qpair failed and we were unable to recover it. 00:35:52.829 [2024-10-13 01:46:38.199757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.829 [2024-10-13 01:46:38.199784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.829 qpair failed and we were unable to recover it. 00:35:52.829 [2024-10-13 01:46:38.199933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.829 [2024-10-13 01:46:38.199960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.829 qpair failed and we were unable to recover it. 00:35:52.829 [2024-10-13 01:46:38.200078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.829 [2024-10-13 01:46:38.200105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.829 qpair failed and we were unable to recover it. 00:35:52.829 [2024-10-13 01:46:38.200236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.829 [2024-10-13 01:46:38.200266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.829 qpair failed and we were unable to recover it. 00:35:52.829 [2024-10-13 01:46:38.200417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.829 [2024-10-13 01:46:38.200446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.829 qpair failed and we were unable to recover it. 
00:35:52.829 [2024-10-13 01:46:38.200597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.829 [2024-10-13 01:46:38.200625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.829 qpair failed and we were unable to recover it. 00:35:52.829 [2024-10-13 01:46:38.200775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.829 [2024-10-13 01:46:38.200804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.829 qpair failed and we were unable to recover it. 00:35:52.829 [2024-10-13 01:46:38.200887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.829 [2024-10-13 01:46:38.200915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.829 qpair failed and we were unable to recover it. 00:35:52.829 [2024-10-13 01:46:38.201078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.829 [2024-10-13 01:46:38.201122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.829 qpair failed and we were unable to recover it. 00:35:52.829 [2024-10-13 01:46:38.201215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.829 [2024-10-13 01:46:38.201242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.829 qpair failed and we were unable to recover it. 00:35:52.829 [2024-10-13 01:46:38.201386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.829 [2024-10-13 01:46:38.201413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.829 qpair failed and we were unable to recover it. 00:35:52.829 [2024-10-13 01:46:38.201496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.829 [2024-10-13 01:46:38.201531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.829 qpair failed and we were unable to recover it. 00:35:52.829 [2024-10-13 01:46:38.201641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.829 [2024-10-13 01:46:38.201667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.829 qpair failed and we were unable to recover it. 00:35:52.829 [2024-10-13 01:46:38.201785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.829 [2024-10-13 01:46:38.201810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.829 qpair failed and we were unable to recover it. 00:35:52.829 [2024-10-13 01:46:38.201953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.829 [2024-10-13 01:46:38.201995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.829 qpair failed and we were unable to recover it. 
00:35:52.829 [2024-10-13 01:46:38.202084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.830 [2024-10-13 01:46:38.202110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.830 qpair failed and we were unable to recover it. 00:35:52.830 [2024-10-13 01:46:38.202224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.830 [2024-10-13 01:46:38.202250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.830 qpair failed and we were unable to recover it. 00:35:52.830 [2024-10-13 01:46:38.202368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.830 [2024-10-13 01:46:38.202393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.830 qpair failed and we were unable to recover it. 00:35:52.830 [2024-10-13 01:46:38.202482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.830 [2024-10-13 01:46:38.202507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.830 qpair failed and we were unable to recover it. 00:35:52.830 [2024-10-13 01:46:38.202617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.830 [2024-10-13 01:46:38.202642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.830 qpair failed and we were unable to recover it. 00:35:52.830 [2024-10-13 01:46:38.202721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.830 [2024-10-13 01:46:38.202745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.830 qpair failed and we were unable to recover it. 00:35:52.830 [2024-10-13 01:46:38.202861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.830 [2024-10-13 01:46:38.202887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.830 qpair failed and we were unable to recover it. 00:35:52.830 [2024-10-13 01:46:38.202998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.830 [2024-10-13 01:46:38.203023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.830 qpair failed and we were unable to recover it. 00:35:52.830 [2024-10-13 01:46:38.203132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.830 [2024-10-13 01:46:38.203157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.830 qpair failed and we were unable to recover it. 00:35:52.830 [2024-10-13 01:46:38.203244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.830 [2024-10-13 01:46:38.203268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.830 qpair failed and we were unable to recover it. 
00:35:52.830 [2024-10-13 01:46:38.203396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.830 [2024-10-13 01:46:38.203433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.830 qpair failed and we were unable to recover it. 00:35:52.830 [2024-10-13 01:46:38.203566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.830 [2024-10-13 01:46:38.203592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.830 qpair failed and we were unable to recover it. 00:35:52.830 [2024-10-13 01:46:38.203701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.830 [2024-10-13 01:46:38.203731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.830 qpair failed and we were unable to recover it. 00:35:52.830 [2024-10-13 01:46:38.203849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.830 [2024-10-13 01:46:38.203873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.830 qpair failed and we were unable to recover it. 00:35:52.830 [2024-10-13 01:46:38.203952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.830 [2024-10-13 01:46:38.203976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.830 qpair failed and we were unable to recover it. 00:35:52.830 [2024-10-13 01:46:38.204068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.830 [2024-10-13 01:46:38.204093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.830 qpair failed and we were unable to recover it. 00:35:52.830 [2024-10-13 01:46:38.204230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.830 [2024-10-13 01:46:38.204254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.830 qpair failed and we were unable to recover it. 00:35:52.830 [2024-10-13 01:46:38.204378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.830 [2024-10-13 01:46:38.204405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.830 qpair failed and we were unable to recover it. 00:35:52.830 [2024-10-13 01:46:38.204511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.830 [2024-10-13 01:46:38.204536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.830 qpair failed and we were unable to recover it. 00:35:52.830 [2024-10-13 01:46:38.204672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.830 [2024-10-13 01:46:38.204698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.830 qpair failed and we were unable to recover it. 
00:35:52.830 [2024-10-13 01:46:38.204821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.830 [2024-10-13 01:46:38.204848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.830 qpair failed and we were unable to recover it. 00:35:52.830 [2024-10-13 01:46:38.204944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.830 [2024-10-13 01:46:38.204976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.830 qpair failed and we were unable to recover it. 00:35:52.830 [2024-10-13 01:46:38.205070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.830 [2024-10-13 01:46:38.205096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.830 qpair failed and we were unable to recover it. 00:35:52.830 [2024-10-13 01:46:38.205202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.830 [2024-10-13 01:46:38.205233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.830 qpair failed and we were unable to recover it. 00:35:52.830 [2024-10-13 01:46:38.205380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.830 [2024-10-13 01:46:38.205406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.830 qpair failed and we were unable to recover it. 00:35:52.830 [2024-10-13 01:46:38.205498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.830 [2024-10-13 01:46:38.205526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.830 qpair failed and we were unable to recover it. 00:35:52.830 [2024-10-13 01:46:38.205619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.830 [2024-10-13 01:46:38.205645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.830 qpair failed and we were unable to recover it. 00:35:52.830 [2024-10-13 01:46:38.205759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.830 [2024-10-13 01:46:38.205785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.830 qpair failed and we were unable to recover it. 00:35:52.830 [2024-10-13 01:46:38.205880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.830 [2024-10-13 01:46:38.205906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.830 qpair failed and we were unable to recover it. 00:35:52.830 [2024-10-13 01:46:38.206015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.830 [2024-10-13 01:46:38.206040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.830 qpair failed and we were unable to recover it. 
00:35:52.830 [2024-10-13 01:46:38.206138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.830 [2024-10-13 01:46:38.206164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.830 qpair failed and we were unable to recover it. 00:35:52.830 [2024-10-13 01:46:38.206247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.830 [2024-10-13 01:46:38.206275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.830 qpair failed and we were unable to recover it. 00:35:52.830 [2024-10-13 01:46:38.206391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.830 [2024-10-13 01:46:38.206418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.830 qpair failed and we were unable to recover it. 00:35:52.830 [2024-10-13 01:46:38.206558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.830 [2024-10-13 01:46:38.206597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.830 qpair failed and we were unable to recover it. 00:35:52.830 [2024-10-13 01:46:38.206697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.830 [2024-10-13 01:46:38.206722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.830 qpair failed and we were unable to recover it. 00:35:52.830 [2024-10-13 01:46:38.206871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.830 [2024-10-13 01:46:38.206900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.830 qpair failed and we were unable to recover it. 00:35:52.830 [2024-10-13 01:46:38.207083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.830 [2024-10-13 01:46:38.207115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.830 qpair failed and we were unable to recover it. 00:35:52.830 [2024-10-13 01:46:38.207282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.830 [2024-10-13 01:46:38.207332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.830 qpair failed and we were unable to recover it. 00:35:52.830 [2024-10-13 01:46:38.207519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.830 [2024-10-13 01:46:38.207558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.830 qpair failed and we were unable to recover it. 00:35:52.831 [2024-10-13 01:46:38.207655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.831 [2024-10-13 01:46:38.207688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.831 qpair failed and we were unable to recover it. 
00:35:52.831 [2024-10-13 01:46:38.207806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.831 [2024-10-13 01:46:38.207832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.831 qpair failed and we were unable to recover it. 00:35:52.831 [2024-10-13 01:46:38.208023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.831 [2024-10-13 01:46:38.208049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.831 qpair failed and we were unable to recover it. 00:35:52.831 [2024-10-13 01:46:38.208144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.831 [2024-10-13 01:46:38.208182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.831 qpair failed and we were unable to recover it. 00:35:52.831 [2024-10-13 01:46:38.208276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.831 [2024-10-13 01:46:38.208301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.831 qpair failed and we were unable to recover it. 00:35:52.831 [2024-10-13 01:46:38.208415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.831 [2024-10-13 01:46:38.208440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.831 qpair failed and we were unable to recover it. 00:35:52.831 [2024-10-13 01:46:38.208555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.831 [2024-10-13 01:46:38.208580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.831 qpair failed and we were unable to recover it. 00:35:52.831 [2024-10-13 01:46:38.208666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.831 [2024-10-13 01:46:38.208691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.831 qpair failed and we were unable to recover it. 00:35:52.831 [2024-10-13 01:46:38.208818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.831 [2024-10-13 01:46:38.208852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.831 qpair failed and we were unable to recover it. 00:35:52.831 [2024-10-13 01:46:38.209040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.831 [2024-10-13 01:46:38.209068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.831 qpair failed and we were unable to recover it. 00:35:52.831 [2024-10-13 01:46:38.209196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.831 [2024-10-13 01:46:38.209224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.831 qpair failed and we were unable to recover it. 
00:35:52.831 [2024-10-13 01:46:38.209337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.831 [2024-10-13 01:46:38.209380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.831 qpair failed and we were unable to recover it. 00:35:52.831 [2024-10-13 01:46:38.209530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.831 [2024-10-13 01:46:38.209569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.831 qpair failed and we were unable to recover it. 00:35:52.831 [2024-10-13 01:46:38.209659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.831 [2024-10-13 01:46:38.209686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.831 qpair failed and we were unable to recover it. 00:35:52.831 [2024-10-13 01:46:38.209868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.831 [2024-10-13 01:46:38.209894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.831 qpair failed and we were unable to recover it. 00:35:52.831 [2024-10-13 01:46:38.210027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.831 [2024-10-13 01:46:38.210053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.831 qpair failed and we were unable to recover it. 00:35:52.831 [2024-10-13 01:46:38.210234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.831 [2024-10-13 01:46:38.210284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.831 qpair failed and we were unable to recover it. 00:35:52.831 [2024-10-13 01:46:38.210395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.831 [2024-10-13 01:46:38.210421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.831 qpair failed and we were unable to recover it. 00:35:52.831 [2024-10-13 01:46:38.210535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.831 [2024-10-13 01:46:38.210574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.831 qpair failed and we were unable to recover it. 00:35:52.831 [2024-10-13 01:46:38.210661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.831 [2024-10-13 01:46:38.210689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.831 qpair failed and we were unable to recover it. 00:35:52.831 [2024-10-13 01:46:38.210823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.831 [2024-10-13 01:46:38.210849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.831 qpair failed and we were unable to recover it. 
00:35:52.831 [2024-10-13 01:46:38.211016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.831 [2024-10-13 01:46:38.211042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.831 qpair failed and we were unable to recover it. 00:35:52.831 [2024-10-13 01:46:38.211151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.831 [2024-10-13 01:46:38.211177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.831 qpair failed and we were unable to recover it. 00:35:52.831 [2024-10-13 01:46:38.211297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.831 [2024-10-13 01:46:38.211323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.831 qpair failed and we were unable to recover it. 00:35:52.831 [2024-10-13 01:46:38.211483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.831 [2024-10-13 01:46:38.211510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.831 qpair failed and we were unable to recover it. 00:35:52.831 [2024-10-13 01:46:38.211649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.831 [2024-10-13 01:46:38.211694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.831 qpair failed and we were unable to recover it. 00:35:52.831 [2024-10-13 01:46:38.211825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.831 [2024-10-13 01:46:38.211853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.831 qpair failed and we were unable to recover it. 00:35:52.831 [2024-10-13 01:46:38.212015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.831 [2024-10-13 01:46:38.212059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.831 qpair failed and we were unable to recover it. 00:35:52.831 [2024-10-13 01:46:38.212197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.831 [2024-10-13 01:46:38.212235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.831 qpair failed and we were unable to recover it. 00:35:52.831 [2024-10-13 01:46:38.212359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.831 [2024-10-13 01:46:38.212386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.831 qpair failed and we were unable to recover it. 00:35:52.831 [2024-10-13 01:46:38.212496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.831 [2024-10-13 01:46:38.212523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.831 qpair failed and we were unable to recover it. 
00:35:52.831 [2024-10-13 01:46:38.212644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.831 [2024-10-13 01:46:38.212673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.831 qpair failed and we were unable to recover it. 00:35:52.831 [2024-10-13 01:46:38.212815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.831 [2024-10-13 01:46:38.212852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.831 qpair failed and we were unable to recover it. 00:35:52.831 [2024-10-13 01:46:38.212987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.831 [2024-10-13 01:46:38.213015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.831 qpair failed and we were unable to recover it. 00:35:52.831 [2024-10-13 01:46:38.213175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.831 [2024-10-13 01:46:38.213203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.831 qpair failed and we were unable to recover it. 00:35:52.831 [2024-10-13 01:46:38.213336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.831 [2024-10-13 01:46:38.213361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.832 qpair failed and we were unable to recover it. 00:35:52.832 [2024-10-13 01:46:38.213507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.832 [2024-10-13 01:46:38.213533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.832 qpair failed and we were unable to recover it. 00:35:52.832 [2024-10-13 01:46:38.213650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.832 [2024-10-13 01:46:38.213677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.832 qpair failed and we were unable to recover it. 00:35:52.832 [2024-10-13 01:46:38.213826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.832 [2024-10-13 01:46:38.213851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.832 qpair failed and we were unable to recover it. 00:35:52.832 [2024-10-13 01:46:38.213968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.832 [2024-10-13 01:46:38.213994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.832 qpair failed and we were unable to recover it. 00:35:52.832 [2024-10-13 01:46:38.214120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.832 [2024-10-13 01:46:38.214151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.832 qpair failed and we were unable to recover it. 
00:35:52.832 [2024-10-13 01:46:38.214284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.832 [2024-10-13 01:46:38.214328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.832 qpair failed and we were unable to recover it. 00:35:52.832 [2024-10-13 01:46:38.214481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.832 [2024-10-13 01:46:38.214508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.832 qpair failed and we were unable to recover it. 00:35:52.832 [2024-10-13 01:46:38.214626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.832 [2024-10-13 01:46:38.214653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.832 qpair failed and we were unable to recover it. 00:35:52.832 [2024-10-13 01:46:38.214746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.832 [2024-10-13 01:46:38.214781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.832 qpair failed and we were unable to recover it. 00:35:52.832 [2024-10-13 01:46:38.214908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.832 [2024-10-13 01:46:38.214935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.832 qpair failed and we were unable to recover it. 00:35:52.832 [2024-10-13 01:46:38.215072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.832 [2024-10-13 01:46:38.215118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.832 qpair failed and we were unable to recover it. 00:35:52.832 [2024-10-13 01:46:38.215227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.832 [2024-10-13 01:46:38.215256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.832 qpair failed and we were unable to recover it. 00:35:52.832 [2024-10-13 01:46:38.215391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.832 [2024-10-13 01:46:38.215416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.832 qpair failed and we were unable to recover it. 00:35:52.832 [2024-10-13 01:46:38.215517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.832 [2024-10-13 01:46:38.215543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.832 qpair failed and we were unable to recover it. 00:35:52.832 [2024-10-13 01:46:38.215701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.832 [2024-10-13 01:46:38.215745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.832 qpair failed and we were unable to recover it. 
00:35:52.832 [2024-10-13 01:46:38.215868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.832 [2024-10-13 01:46:38.215909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.832 qpair failed and we were unable to recover it. 00:35:52.832 [2024-10-13 01:46:38.216091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.832 [2024-10-13 01:46:38.216155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.832 qpair failed and we were unable to recover it. 00:35:52.832 [2024-10-13 01:46:38.216288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.832 [2024-10-13 01:46:38.216317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.832 qpair failed and we were unable to recover it. 00:35:52.832 [2024-10-13 01:46:38.216456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.832 [2024-10-13 01:46:38.216488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.832 qpair failed and we were unable to recover it. 00:35:52.832 [2024-10-13 01:46:38.216607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.832 [2024-10-13 01:46:38.216632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.832 qpair failed and we were unable to recover it. 00:35:52.832 [2024-10-13 01:46:38.216719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.832 [2024-10-13 01:46:38.216745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.832 qpair failed and we were unable to recover it. 00:35:52.832 [2024-10-13 01:46:38.216872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.832 [2024-10-13 01:46:38.216897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.832 qpair failed and we were unable to recover it. 00:35:52.832 [2024-10-13 01:46:38.217007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.832 [2024-10-13 01:46:38.217035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.832 qpair failed and we were unable to recover it. 00:35:52.832 [2024-10-13 01:46:38.217176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.832 [2024-10-13 01:46:38.217205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.832 qpair failed and we were unable to recover it. 00:35:52.832 [2024-10-13 01:46:38.217330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.832 [2024-10-13 01:46:38.217359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.832 qpair failed and we were unable to recover it. 
00:35:52.832 [2024-10-13 01:46:38.217451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.832 [2024-10-13 01:46:38.217495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.832 qpair failed and we were unable to recover it. 00:35:52.832 [2024-10-13 01:46:38.217640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.832 [2024-10-13 01:46:38.217668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.832 qpair failed and we were unable to recover it. 00:35:52.832 [2024-10-13 01:46:38.217762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.832 [2024-10-13 01:46:38.217787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.832 qpair failed and we were unable to recover it. 00:35:52.832 [2024-10-13 01:46:38.217900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.832 [2024-10-13 01:46:38.217925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.832 qpair failed and we were unable to recover it. 00:35:52.832 [2024-10-13 01:46:38.218051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.832 [2024-10-13 01:46:38.218093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.832 qpair failed and we were unable to recover it. 00:35:52.832 [2024-10-13 01:46:38.218207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.832 [2024-10-13 01:46:38.218232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.832 qpair failed and we were unable to recover it. 00:35:52.832 [2024-10-13 01:46:38.218368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.832 [2024-10-13 01:46:38.218401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.832 qpair failed and we were unable to recover it. 00:35:52.832 [2024-10-13 01:46:38.218568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.832 [2024-10-13 01:46:38.218606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.832 qpair failed and we were unable to recover it. 00:35:52.832 [2024-10-13 01:46:38.218709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.832 [2024-10-13 01:46:38.218737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.832 qpair failed and we were unable to recover it. 00:35:52.832 [2024-10-13 01:46:38.218886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.832 [2024-10-13 01:46:38.218922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.832 qpair failed and we were unable to recover it. 
00:35:52.832 [2024-10-13 01:46:38.219015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.832 [2024-10-13 01:46:38.219059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.832 qpair failed and we were unable to recover it. 00:35:52.832 [2024-10-13 01:46:38.219214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.832 [2024-10-13 01:46:38.219243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.832 qpair failed and we were unable to recover it. 00:35:52.832 [2024-10-13 01:46:38.219377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.832 [2024-10-13 01:46:38.219403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.832 qpair failed and we were unable to recover it. 00:35:52.832 [2024-10-13 01:46:38.219532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.833 [2024-10-13 01:46:38.219559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.833 qpair failed and we were unable to recover it. 00:35:52.833 [2024-10-13 01:46:38.219650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.833 [2024-10-13 01:46:38.219691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.833 qpair failed and we were unable to recover it. 00:35:52.833 [2024-10-13 01:46:38.219810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.833 [2024-10-13 01:46:38.219835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.833 qpair failed and we were unable to recover it. 00:35:52.833 [2024-10-13 01:46:38.219937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.833 [2024-10-13 01:46:38.219962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.833 qpair failed and we were unable to recover it. 00:35:52.833 [2024-10-13 01:46:38.220046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.833 [2024-10-13 01:46:38.220070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.833 qpair failed and we were unable to recover it. 00:35:52.833 [2024-10-13 01:46:38.220152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.833 [2024-10-13 01:46:38.220176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.833 qpair failed and we were unable to recover it. 00:35:52.833 [2024-10-13 01:46:38.220316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.833 [2024-10-13 01:46:38.220341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.833 qpair failed and we were unable to recover it. 
00:35:52.833 [2024-10-13 01:46:38.220497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.833 [2024-10-13 01:46:38.220535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.833 qpair failed and we were unable to recover it. 00:35:52.833 [2024-10-13 01:46:38.220708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.833 [2024-10-13 01:46:38.220741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.833 qpair failed and we were unable to recover it. 00:35:52.833 [2024-10-13 01:46:38.220870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.833 [2024-10-13 01:46:38.220908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.833 qpair failed and we were unable to recover it. 00:35:52.833 [2024-10-13 01:46:38.220999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.833 [2024-10-13 01:46:38.221025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.833 qpair failed and we were unable to recover it. 00:35:52.833 [2024-10-13 01:46:38.221144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.833 [2024-10-13 01:46:38.221170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.833 qpair failed and we were unable to recover it. 00:35:52.833 [2024-10-13 01:46:38.221302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.833 [2024-10-13 01:46:38.221345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.833 qpair failed and we were unable to recover it. 00:35:52.833 [2024-10-13 01:46:38.221460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.833 [2024-10-13 01:46:38.221491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.833 qpair failed and we were unable to recover it. 00:35:52.833 [2024-10-13 01:46:38.221586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.833 [2024-10-13 01:46:38.221626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.833 qpair failed and we were unable to recover it. 00:35:52.833 [2024-10-13 01:46:38.221781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.833 [2024-10-13 01:46:38.221845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.833 qpair failed and we were unable to recover it. 00:35:52.833 [2024-10-13 01:46:38.222071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.833 [2024-10-13 01:46:38.222095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.833 qpair failed and we were unable to recover it. 
00:35:52.833 [2024-10-13 01:46:38.222246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.833 [2024-10-13 01:46:38.222274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.833 qpair failed and we were unable to recover it. 00:35:52.833 [2024-10-13 01:46:38.222411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.833 [2024-10-13 01:46:38.222438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:52.833 qpair failed and we were unable to recover it. 00:35:52.833 [2024-10-13 01:46:38.222583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.833 [2024-10-13 01:46:38.222609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:52.833 qpair failed and we were unable to recover it. 00:35:52.833 [2024-10-13 01:46:38.222763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.833 [2024-10-13 01:46:38.222817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:52.833 qpair failed and we were unable to recover it. 00:35:52.833 [2024-10-13 01:46:38.222931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:52.833 [2024-10-13 01:46:38.222962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:52.833 qpair failed and we were unable to recover it. 00:35:52.833 [2024-10-13 01:46:38.223181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.096 [2024-10-13 01:46:38.616119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.096 qpair failed and we were unable to recover it. 00:35:53.096 [2024-10-13 01:46:38.616304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.096 [2024-10-13 01:46:38.616334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.096 qpair failed and we were unable to recover it. 00:35:53.096 [2024-10-13 01:46:38.616494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.096 [2024-10-13 01:46:38.616520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.096 qpair failed and we were unable to recover it. 00:35:53.096 [2024-10-13 01:46:38.616616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.096 [2024-10-13 01:46:38.616641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.096 qpair failed and we were unable to recover it. 00:35:53.096 [2024-10-13 01:46:38.616799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.096 [2024-10-13 01:46:38.616826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.096 qpair failed and we were unable to recover it. 
00:35:53.096 [2024-10-13 01:46:38.616980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.096 [2024-10-13 01:46:38.617003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.096 qpair failed and we were unable to recover it. 00:35:53.096 [2024-10-13 01:46:38.617098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.096 [2024-10-13 01:46:38.617124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.096 qpair failed and we were unable to recover it. 00:35:53.096 [2024-10-13 01:46:38.617219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.096 [2024-10-13 01:46:38.617243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.096 qpair failed and we were unable to recover it. 00:35:53.096 [2024-10-13 01:46:38.617349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.096 [2024-10-13 01:46:38.617389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.096 qpair failed and we were unable to recover it. 00:35:53.096 [2024-10-13 01:46:38.617497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.096 [2024-10-13 01:46:38.617524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.096 qpair failed and we were unable to recover it. 00:35:53.096 [2024-10-13 01:46:38.617629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.096 [2024-10-13 01:46:38.617652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.096 qpair failed and we were unable to recover it. 00:35:53.096 [2024-10-13 01:46:38.617737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.096 [2024-10-13 01:46:38.617786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.096 qpair failed and we were unable to recover it. 00:35:53.096 [2024-10-13 01:46:38.617912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.096 [2024-10-13 01:46:38.617937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.096 qpair failed and we were unable to recover it. 00:35:53.096 [2024-10-13 01:46:38.618073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.096 [2024-10-13 01:46:38.618100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.096 qpair failed and we were unable to recover it. 00:35:53.096 [2024-10-13 01:46:38.618196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.096 [2024-10-13 01:46:38.618220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.096 qpair failed and we were unable to recover it. 
00:35:53.096 [2024-10-13 01:46:38.618374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.096 [2024-10-13 01:46:38.618400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.096 qpair failed and we were unable to recover it. 00:35:53.096 [2024-10-13 01:46:38.618541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.096 [2024-10-13 01:46:38.618567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.096 qpair failed and we were unable to recover it. 00:35:53.096 [2024-10-13 01:46:38.618690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.096 [2024-10-13 01:46:38.618717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.096 qpair failed and we were unable to recover it. 00:35:53.096 [2024-10-13 01:46:38.618871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.096 [2024-10-13 01:46:38.618896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.096 qpair failed and we were unable to recover it. 00:35:53.096 [2024-10-13 01:46:38.619021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.096 [2024-10-13 01:46:38.619061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.096 qpair failed and we were unable to recover it. 00:35:53.096 [2024-10-13 01:46:38.619191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.096 [2024-10-13 01:46:38.619218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.097 qpair failed and we were unable to recover it. 00:35:53.097 [2024-10-13 01:46:38.619337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.097 [2024-10-13 01:46:38.619364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.097 qpair failed and we were unable to recover it. 00:35:53.097 [2024-10-13 01:46:38.619482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.097 [2024-10-13 01:46:38.619506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.097 qpair failed and we were unable to recover it. 00:35:53.097 [2024-10-13 01:46:38.619619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.097 [2024-10-13 01:46:38.619646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.097 qpair failed and we were unable to recover it. 00:35:53.097 [2024-10-13 01:46:38.619796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.097 [2024-10-13 01:46:38.619821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.097 qpair failed and we were unable to recover it. 
00:35:53.097 [2024-10-13 01:46:38.619934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.097 [2024-10-13 01:46:38.619963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.097 qpair failed and we were unable to recover it. 00:35:53.097 [2024-10-13 01:46:38.620108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.097 [2024-10-13 01:46:38.620133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.097 qpair failed and we were unable to recover it. 00:35:53.097 [2024-10-13 01:46:38.620223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.097 [2024-10-13 01:46:38.620248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.097 qpair failed and we were unable to recover it. 00:35:53.097 [2024-10-13 01:46:38.620388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.097 [2024-10-13 01:46:38.620417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.097 qpair failed and we were unable to recover it. 00:35:53.097 [2024-10-13 01:46:38.620560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.097 [2024-10-13 01:46:38.620589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.097 qpair failed and we were unable to recover it. 00:35:53.097 [2024-10-13 01:46:38.620702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.097 [2024-10-13 01:46:38.620728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.097 qpair failed and we were unable to recover it. 00:35:53.097 [2024-10-13 01:46:38.620813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.097 [2024-10-13 01:46:38.620838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.097 qpair failed and we were unable to recover it. 00:35:53.097 [2024-10-13 01:46:38.620952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.097 [2024-10-13 01:46:38.620978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.097 qpair failed and we were unable to recover it. 00:35:53.097 [2024-10-13 01:46:38.621067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.097 [2024-10-13 01:46:38.621092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.097 qpair failed and we were unable to recover it. 00:35:53.097 [2024-10-13 01:46:38.621205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.097 [2024-10-13 01:46:38.621231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.097 qpair failed and we were unable to recover it. 
00:35:53.097 [2024-10-13 01:46:38.621352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.097 [2024-10-13 01:46:38.621377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.097 qpair failed and we were unable to recover it. 00:35:53.097 [2024-10-13 01:46:38.621553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.097 [2024-10-13 01:46:38.621581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.097 qpair failed and we were unable to recover it. 00:35:53.097 [2024-10-13 01:46:38.621703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.097 [2024-10-13 01:46:38.621732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.097 qpair failed and we were unable to recover it. 00:35:53.097 [2024-10-13 01:46:38.621840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.097 [2024-10-13 01:46:38.621871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.097 qpair failed and we were unable to recover it. 00:35:53.097 [2024-10-13 01:46:38.621981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.097 [2024-10-13 01:46:38.622006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.097 qpair failed and we were unable to recover it. 00:35:53.097 [2024-10-13 01:46:38.622121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.097 [2024-10-13 01:46:38.622147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.097 qpair failed and we were unable to recover it. 00:35:53.097 [2024-10-13 01:46:38.622241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.097 [2024-10-13 01:46:38.622265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.097 qpair failed and we were unable to recover it. 00:35:53.097 [2024-10-13 01:46:38.622383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.097 [2024-10-13 01:46:38.622418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.097 qpair failed and we were unable to recover it. 00:35:53.097 [2024-10-13 01:46:38.622509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.097 [2024-10-13 01:46:38.622534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.097 qpair failed and we were unable to recover it. 00:35:53.097 [2024-10-13 01:46:38.622623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.097 [2024-10-13 01:46:38.622648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.097 qpair failed and we were unable to recover it. 
00:35:53.097 [2024-10-13 01:46:38.622736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.097 [2024-10-13 01:46:38.622779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.097 qpair failed and we were unable to recover it. 00:35:53.097 [2024-10-13 01:46:38.622884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.097 [2024-10-13 01:46:38.622911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.097 qpair failed and we were unable to recover it. 00:35:53.097 [2024-10-13 01:46:38.623024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.097 [2024-10-13 01:46:38.623049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.097 qpair failed and we were unable to recover it. 00:35:53.097 [2024-10-13 01:46:38.623183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.097 [2024-10-13 01:46:38.623210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.097 qpair failed and we were unable to recover it. 00:35:53.097 [2024-10-13 01:46:38.623366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.097 [2024-10-13 01:46:38.623395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.097 qpair failed and we were unable to recover it. 00:35:53.097 [2024-10-13 01:46:38.623538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.097 [2024-10-13 01:46:38.623564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.097 qpair failed and we were unable to recover it. 00:35:53.097 [2024-10-13 01:46:38.623672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.097 [2024-10-13 01:46:38.623698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.097 qpair failed and we were unable to recover it. 00:35:53.097 [2024-10-13 01:46:38.623867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.097 [2024-10-13 01:46:38.623893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.097 qpair failed and we were unable to recover it. 00:35:53.097 [2024-10-13 01:46:38.624035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.097 [2024-10-13 01:46:38.624061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.097 qpair failed and we were unable to recover it. 00:35:53.097 [2024-10-13 01:46:38.624178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.097 [2024-10-13 01:46:38.624204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.097 qpair failed and we were unable to recover it. 
00:35:53.097 [2024-10-13 01:46:38.624311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.097 [2024-10-13 01:46:38.624336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.097 qpair failed and we were unable to recover it. 00:35:53.097 [2024-10-13 01:46:38.624505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.098 [2024-10-13 01:46:38.624531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.098 qpair failed and we were unable to recover it. 00:35:53.098 [2024-10-13 01:46:38.624625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.098 [2024-10-13 01:46:38.624651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.098 qpair failed and we were unable to recover it. 00:35:53.098 [2024-10-13 01:46:38.624773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.098 [2024-10-13 01:46:38.624799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.098 qpair failed and we were unable to recover it. 00:35:53.098 [2024-10-13 01:46:38.624932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.098 [2024-10-13 01:46:38.624974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.098 qpair failed and we were unable to recover it. 00:35:53.098 [2024-10-13 01:46:38.625096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.098 [2024-10-13 01:46:38.625124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.098 qpair failed and we were unable to recover it. 00:35:53.098 [2024-10-13 01:46:38.625223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.098 [2024-10-13 01:46:38.625249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.098 qpair failed and we were unable to recover it. 00:35:53.098 [2024-10-13 01:46:38.625361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.098 [2024-10-13 01:46:38.625388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.098 qpair failed and we were unable to recover it. 00:35:53.098 [2024-10-13 01:46:38.625535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.098 [2024-10-13 01:46:38.625578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.098 qpair failed and we were unable to recover it. 00:35:53.098 [2024-10-13 01:46:38.625708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.098 [2024-10-13 01:46:38.625737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.098 qpair failed and we were unable to recover it. 
00:35:53.098 [2024-10-13 01:46:38.625849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.098 [2024-10-13 01:46:38.625893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.098 qpair failed and we were unable to recover it. 00:35:53.098 [2024-10-13 01:46:38.626039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.098 [2024-10-13 01:46:38.626065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.098 qpair failed and we were unable to recover it. 00:35:53.098 [2024-10-13 01:46:38.626181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.098 [2024-10-13 01:46:38.626218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.098 qpair failed and we were unable to recover it. 00:35:53.098 [2024-10-13 01:46:38.626357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.098 [2024-10-13 01:46:38.626386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.098 qpair failed and we were unable to recover it. 00:35:53.098 [2024-10-13 01:46:38.626507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.098 [2024-10-13 01:46:38.626553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.098 qpair failed and we were unable to recover it. 00:35:53.098 [2024-10-13 01:46:38.626663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.098 [2024-10-13 01:46:38.626699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.098 qpair failed and we were unable to recover it. 00:35:53.098 [2024-10-13 01:46:38.626839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.098 [2024-10-13 01:46:38.626883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.098 qpair failed and we were unable to recover it. 00:35:53.098 [2024-10-13 01:46:38.627006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.098 [2024-10-13 01:46:38.627034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.098 qpair failed and we were unable to recover it. 00:35:53.098 [2024-10-13 01:46:38.627203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.098 [2024-10-13 01:46:38.627228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.098 qpair failed and we were unable to recover it. 00:35:53.098 [2024-10-13 01:46:38.627366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.098 [2024-10-13 01:46:38.627392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.098 qpair failed and we were unable to recover it. 
00:35:53.098 [2024-10-13 01:46:38.627513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.098 [2024-10-13 01:46:38.627554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.098 qpair failed and we were unable to recover it. 00:35:53.098 [2024-10-13 01:46:38.627656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.098 [2024-10-13 01:46:38.627685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.098 qpair failed and we were unable to recover it. 00:35:53.098 [2024-10-13 01:46:38.627837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.098 [2024-10-13 01:46:38.627866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.098 qpair failed and we were unable to recover it. 00:35:53.098 [2024-10-13 01:46:38.627976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.098 [2024-10-13 01:46:38.628009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.098 qpair failed and we were unable to recover it. 00:35:53.098 [2024-10-13 01:46:38.628154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.098 [2024-10-13 01:46:38.628180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.098 qpair failed and we were unable to recover it. 00:35:53.098 [2024-10-13 01:46:38.628266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.098 [2024-10-13 01:46:38.628291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.098 qpair failed and we were unable to recover it. 00:35:53.098 [2024-10-13 01:46:38.628404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.098 [2024-10-13 01:46:38.628446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.098 qpair failed and we were unable to recover it. 00:35:53.098 [2024-10-13 01:46:38.628581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.098 [2024-10-13 01:46:38.628607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.098 qpair failed and we were unable to recover it. 00:35:53.098 [2024-10-13 01:46:38.628718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.098 [2024-10-13 01:46:38.628743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.098 qpair failed and we were unable to recover it. 00:35:53.098 [2024-10-13 01:46:38.628930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.098 [2024-10-13 01:46:38.628955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.098 qpair failed and we were unable to recover it. 
00:35:53.098 [2024-10-13 01:46:38.629052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.098 [2024-10-13 01:46:38.629077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.098 qpair failed and we were unable to recover it. 00:35:53.098 [2024-10-13 01:46:38.629193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.098 [2024-10-13 01:46:38.629218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.098 qpair failed and we were unable to recover it. 00:35:53.098 [2024-10-13 01:46:38.629323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.098 [2024-10-13 01:46:38.629349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.098 qpair failed and we were unable to recover it. 00:35:53.098 [2024-10-13 01:46:38.629438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.098 [2024-10-13 01:46:38.629482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.098 qpair failed and we were unable to recover it. 00:35:53.098 [2024-10-13 01:46:38.629607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.098 [2024-10-13 01:46:38.629633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.098 qpair failed and we were unable to recover it. 00:35:53.098 [2024-10-13 01:46:38.629763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.098 [2024-10-13 01:46:38.629790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.098 qpair failed and we were unable to recover it. 00:35:53.098 [2024-10-13 01:46:38.629910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.098 [2024-10-13 01:46:38.629935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.098 qpair failed and we were unable to recover it. 00:35:53.098 [2024-10-13 01:46:38.630074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.098 [2024-10-13 01:46:38.630104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.098 qpair failed and we were unable to recover it. 00:35:53.098 [2024-10-13 01:46:38.630249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.099 [2024-10-13 01:46:38.630274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.099 qpair failed and we were unable to recover it. 00:35:53.099 [2024-10-13 01:46:38.630487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.099 [2024-10-13 01:46:38.630512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.099 qpair failed and we were unable to recover it. 
00:35:53.099 [2024-10-13 01:46:38.630660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.099 [2024-10-13 01:46:38.630687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.099 qpair failed and we were unable to recover it. 00:35:53.099 [2024-10-13 01:46:38.630802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.099 [2024-10-13 01:46:38.630828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.099 qpair failed and we were unable to recover it. 00:35:53.099 [2024-10-13 01:46:38.630927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.099 [2024-10-13 01:46:38.630956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.099 qpair failed and we were unable to recover it. 00:35:53.099 [2024-10-13 01:46:38.631074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.099 [2024-10-13 01:46:38.631099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.099 qpair failed and we were unable to recover it. 00:35:53.099 [2024-10-13 01:46:38.631238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.099 [2024-10-13 01:46:38.631263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.099 qpair failed and we were unable to recover it. 00:35:53.099 [2024-10-13 01:46:38.631395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.099 [2024-10-13 01:46:38.631424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.099 qpair failed and we were unable to recover it. 00:35:53.099 [2024-10-13 01:46:38.631522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.099 [2024-10-13 01:46:38.631550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.099 qpair failed and we were unable to recover it. 00:35:53.099 [2024-10-13 01:46:38.631709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.099 [2024-10-13 01:46:38.631735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.099 qpair failed and we were unable to recover it. 00:35:53.099 [2024-10-13 01:46:38.631838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.099 [2024-10-13 01:46:38.631864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.099 qpair failed and we were unable to recover it. 00:35:53.099 [2024-10-13 01:46:38.632016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.099 [2024-10-13 01:46:38.632041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.099 qpair failed and we were unable to recover it. 
00:35:53.099 [2024-10-13 01:46:38.632198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.099 [2024-10-13 01:46:38.632224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.099 qpair failed and we were unable to recover it. 00:35:53.099 [2024-10-13 01:46:38.632313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.099 [2024-10-13 01:46:38.632339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.099 qpair failed and we were unable to recover it. 00:35:53.099 [2024-10-13 01:46:38.632455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.099 [2024-10-13 01:46:38.632501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.099 qpair failed and we were unable to recover it. 00:35:53.099 [2024-10-13 01:46:38.632632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.099 [2024-10-13 01:46:38.632661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.099 qpair failed and we were unable to recover it. 00:35:53.099 [2024-10-13 01:46:38.632815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.099 [2024-10-13 01:46:38.632843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.099 qpair failed and we were unable to recover it. 00:35:53.099 [2024-10-13 01:46:38.632968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.099 [2024-10-13 01:46:38.632995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.099 qpair failed and we were unable to recover it. 00:35:53.099 [2024-10-13 01:46:38.633114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.099 [2024-10-13 01:46:38.633139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.099 qpair failed and we were unable to recover it. 00:35:53.099 [2024-10-13 01:46:38.633265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.099 [2024-10-13 01:46:38.633308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.099 qpair failed and we were unable to recover it. 00:35:53.099 [2024-10-13 01:46:38.633450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.099 [2024-10-13 01:46:38.633492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.099 qpair failed and we were unable to recover it. 00:35:53.099 [2024-10-13 01:46:38.633605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.099 [2024-10-13 01:46:38.633631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.099 qpair failed and we were unable to recover it. 
00:35:53.099 [2024-10-13 01:46:38.633769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.099 [2024-10-13 01:46:38.633795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.099 qpair failed and we were unable to recover it. 00:35:53.099 [2024-10-13 01:46:38.633943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.099 [2024-10-13 01:46:38.633972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.099 qpair failed and we were unable to recover it. 00:35:53.099 [2024-10-13 01:46:38.634120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.099 [2024-10-13 01:46:38.634147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.099 qpair failed and we were unable to recover it. 00:35:53.099 [2024-10-13 01:46:38.634316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.099 [2024-10-13 01:46:38.634346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.099 qpair failed and we were unable to recover it. 00:35:53.099 [2024-10-13 01:46:38.634436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.099 [2024-10-13 01:46:38.634477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.099 qpair failed and we were unable to recover it. 00:35:53.099 [2024-10-13 01:46:38.634579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.099 [2024-10-13 01:46:38.634609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.099 qpair failed and we were unable to recover it. 00:35:53.099 [2024-10-13 01:46:38.634743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.099 [2024-10-13 01:46:38.634780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.099 qpair failed and we were unable to recover it. 00:35:53.099 [2024-10-13 01:46:38.634888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.099 [2024-10-13 01:46:38.634915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.099 qpair failed and we were unable to recover it. 00:35:53.099 [2024-10-13 01:46:38.635055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.099 [2024-10-13 01:46:38.635097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.099 qpair failed and we were unable to recover it. 00:35:53.099 [2024-10-13 01:46:38.635247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.099 [2024-10-13 01:46:38.635271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.099 qpair failed and we were unable to recover it. 
00:35:53.099 [2024-10-13 01:46:38.635415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.099 [2024-10-13 01:46:38.635441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.099 qpair failed and we were unable to recover it. 00:35:53.099 [2024-10-13 01:46:38.635574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.099 [2024-10-13 01:46:38.635601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.099 qpair failed and we were unable to recover it. 00:35:53.099 [2024-10-13 01:46:38.635719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.099 [2024-10-13 01:46:38.635744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.099 qpair failed and we were unable to recover it. 00:35:53.099 [2024-10-13 01:46:38.635995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.099 [2024-10-13 01:46:38.636024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.099 qpair failed and we were unable to recover it. 00:35:53.099 [2024-10-13 01:46:38.636154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.099 [2024-10-13 01:46:38.636182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.099 qpair failed and we were unable to recover it. 00:35:53.099 [2024-10-13 01:46:38.636344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.099 [2024-10-13 01:46:38.636370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.099 qpair failed and we were unable to recover it. 00:35:53.099 [2024-10-13 01:46:38.636495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.099 [2024-10-13 01:46:38.636522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.099 qpair failed and we were unable to recover it. 00:35:53.099 [2024-10-13 01:46:38.636645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.099 [2024-10-13 01:46:38.636671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.099 qpair failed and we were unable to recover it. 00:35:53.099 [2024-10-13 01:46:38.636839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.100 [2024-10-13 01:46:38.636868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.100 qpair failed and we were unable to recover it. 00:35:53.100 [2024-10-13 01:46:38.636986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.100 [2024-10-13 01:46:38.637011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.100 qpair failed and we were unable to recover it. 
00:35:53.100 [2024-10-13 01:46:38.637131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.100 [2024-10-13 01:46:38.637156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.100 qpair failed and we were unable to recover it. 00:35:53.100 [2024-10-13 01:46:38.637320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.100 [2024-10-13 01:46:38.637349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.100 qpair failed and we were unable to recover it. 00:35:53.100 [2024-10-13 01:46:38.637504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.100 [2024-10-13 01:46:38.637534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.100 qpair failed and we were unable to recover it. 00:35:53.100 [2024-10-13 01:46:38.637645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.100 [2024-10-13 01:46:38.637671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.100 qpair failed and we were unable to recover it. 00:35:53.100 [2024-10-13 01:46:38.637792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.100 [2024-10-13 01:46:38.637818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.100 qpair failed and we were unable to recover it. 00:35:53.100 [2024-10-13 01:46:38.637907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.100 [2024-10-13 01:46:38.637932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.100 qpair failed and we were unable to recover it. 00:35:53.100 [2024-10-13 01:46:38.638050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.100 [2024-10-13 01:46:38.638075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.100 qpair failed and we were unable to recover it. 00:35:53.100 [2024-10-13 01:46:38.638187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.100 [2024-10-13 01:46:38.638213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.100 qpair failed and we were unable to recover it. 00:35:53.100 [2024-10-13 01:46:38.638327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.100 [2024-10-13 01:46:38.638353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.100 qpair failed and we were unable to recover it. 00:35:53.100 [2024-10-13 01:46:38.638493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.100 [2024-10-13 01:46:38.638522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.100 qpair failed and we were unable to recover it. 
00:35:53.100 [2024-10-13 01:46:38.638626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.100 [2024-10-13 01:46:38.638668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.100 qpair failed and we were unable to recover it. 00:35:53.100 [2024-10-13 01:46:38.638792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.100 [2024-10-13 01:46:38.638817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.100 qpair failed and we were unable to recover it. 00:35:53.100 [2024-10-13 01:46:38.638943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.100 [2024-10-13 01:46:38.638969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.100 qpair failed and we were unable to recover it. 00:35:53.100 [2024-10-13 01:46:38.639077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.100 [2024-10-13 01:46:38.639122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.100 qpair failed and we were unable to recover it. 00:35:53.100 [2024-10-13 01:46:38.639266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.100 [2024-10-13 01:46:38.639291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.100 qpair failed and we were unable to recover it. 00:35:53.100 [2024-10-13 01:46:38.639403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.100 [2024-10-13 01:46:38.639428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.100 qpair failed and we were unable to recover it. 00:35:53.100 [2024-10-13 01:46:38.639574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.100 [2024-10-13 01:46:38.639617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.100 qpair failed and we were unable to recover it. 00:35:53.100 [2024-10-13 01:46:38.639754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.100 [2024-10-13 01:46:38.639782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.100 qpair failed and we were unable to recover it. 00:35:53.100 [2024-10-13 01:46:38.639902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.100 [2024-10-13 01:46:38.639928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.100 qpair failed and we were unable to recover it. 00:35:53.100 [2024-10-13 01:46:38.640039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.100 [2024-10-13 01:46:38.640065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.100 qpair failed and we were unable to recover it. 
00:35:53.100 [2024-10-13 01:46:38.640177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.100 [2024-10-13 01:46:38.640203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.100 qpair failed and we were unable to recover it. 00:35:53.100 [2024-10-13 01:46:38.640344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.100 [2024-10-13 01:46:38.640369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.100 qpair failed and we were unable to recover it. 00:35:53.100 [2024-10-13 01:46:38.640449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.100 [2024-10-13 01:46:38.640487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.100 qpair failed and we were unable to recover it. 00:35:53.100 [2024-10-13 01:46:38.640626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.100 [2024-10-13 01:46:38.640656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.100 qpair failed and we were unable to recover it. 00:35:53.100 [2024-10-13 01:46:38.640798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.100 [2024-10-13 01:46:38.640823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.100 qpair failed and we were unable to recover it. 00:35:53.100 [2024-10-13 01:46:38.640959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.100 [2024-10-13 01:46:38.640988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.100 qpair failed and we were unable to recover it. 00:35:53.100 [2024-10-13 01:46:38.641165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.100 [2024-10-13 01:46:38.641191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.100 qpair failed and we were unable to recover it. 00:35:53.100 [2024-10-13 01:46:38.641309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.100 [2024-10-13 01:46:38.641334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.100 qpair failed and we were unable to recover it. 00:35:53.100 [2024-10-13 01:46:38.641499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.100 [2024-10-13 01:46:38.641525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.100 qpair failed and we were unable to recover it. 00:35:53.100 [2024-10-13 01:46:38.641696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.100 [2024-10-13 01:46:38.641738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.100 qpair failed and we were unable to recover it. 
00:35:53.100 [2024-10-13 01:46:38.641862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.100 [2024-10-13 01:46:38.641887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.100 qpair failed and we were unable to recover it. 00:35:53.100 [2024-10-13 01:46:38.642024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.100 [2024-10-13 01:46:38.642053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.100 qpair failed and we were unable to recover it. 00:35:53.100 [2024-10-13 01:46:38.642164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.100 [2024-10-13 01:46:38.642206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.100 qpair failed and we were unable to recover it. 00:35:53.100 [2024-10-13 01:46:38.642364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.100 [2024-10-13 01:46:38.642389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.100 qpair failed and we were unable to recover it. 00:35:53.101 [2024-10-13 01:46:38.642520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.101 [2024-10-13 01:46:38.642547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.101 qpair failed and we were unable to recover it. 00:35:53.101 [2024-10-13 01:46:38.642661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.101 [2024-10-13 01:46:38.642686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.101 qpair failed and we were unable to recover it. 00:35:53.101 [2024-10-13 01:46:38.642837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.101 [2024-10-13 01:46:38.642862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.101 qpair failed and we were unable to recover it. 00:35:53.101 [2024-10-13 01:46:38.643003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.101 [2024-10-13 01:46:38.643032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.101 qpair failed and we were unable to recover it. 00:35:53.101 [2024-10-13 01:46:38.643150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.101 [2024-10-13 01:46:38.643177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.101 qpair failed and we were unable to recover it. 00:35:53.101 [2024-10-13 01:46:38.643282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.101 [2024-10-13 01:46:38.643308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.101 qpair failed and we were unable to recover it. 
00:35:53.101 [2024-10-13 01:46:38.643398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.101 [2024-10-13 01:46:38.643425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.101 qpair failed and we were unable to recover it. 00:35:53.101 [2024-10-13 01:46:38.643523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.101 [2024-10-13 01:46:38.643548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.101 qpair failed and we were unable to recover it. 00:35:53.101 [2024-10-13 01:46:38.643640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.101 [2024-10-13 01:46:38.643666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.101 qpair failed and we were unable to recover it. 00:35:53.101 [2024-10-13 01:46:38.643749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.101 [2024-10-13 01:46:38.643780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.101 qpair failed and we were unable to recover it. 00:35:53.101 [2024-10-13 01:46:38.643891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.101 [2024-10-13 01:46:38.643918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.101 qpair failed and we were unable to recover it. 00:35:53.101 [2024-10-13 01:46:38.644052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.101 [2024-10-13 01:46:38.644081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.101 qpair failed and we were unable to recover it. 00:35:53.101 [2024-10-13 01:46:38.644211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.101 [2024-10-13 01:46:38.644241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.101 qpair failed and we were unable to recover it. 00:35:53.101 [2024-10-13 01:46:38.644376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.101 [2024-10-13 01:46:38.644401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.101 qpair failed and we were unable to recover it. 00:35:53.101 [2024-10-13 01:46:38.644505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.101 [2024-10-13 01:46:38.644532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.101 qpair failed and we were unable to recover it. 00:35:53.101 [2024-10-13 01:46:38.644709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.101 [2024-10-13 01:46:38.644735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.101 qpair failed and we were unable to recover it. 
00:35:53.101 [2024-10-13 01:46:38.644902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.101 [2024-10-13 01:46:38.644928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.101 qpair failed and we were unable to recover it. 00:35:53.101 [2024-10-13 01:46:38.645045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.101 [2024-10-13 01:46:38.645070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.101 qpair failed and we were unable to recover it. 00:35:53.101 [2024-10-13 01:46:38.645181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.101 [2024-10-13 01:46:38.645207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.101 qpair failed and we were unable to recover it. 00:35:53.101 [2024-10-13 01:46:38.645302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.101 [2024-10-13 01:46:38.645330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.101 qpair failed and we were unable to recover it. 00:35:53.101 [2024-10-13 01:46:38.645453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.101 [2024-10-13 01:46:38.645487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.101 qpair failed and we were unable to recover it. 00:35:53.101 [2024-10-13 01:46:38.645605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.101 [2024-10-13 01:46:38.645631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.101 qpair failed and we were unable to recover it. 00:35:53.101 [2024-10-13 01:46:38.645779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.101 [2024-10-13 01:46:38.645805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.101 qpair failed and we were unable to recover it. 00:35:53.101 [2024-10-13 01:46:38.645932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.101 [2024-10-13 01:46:38.645960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.101 qpair failed and we were unable to recover it. 00:35:53.101 [2024-10-13 01:46:38.646075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.101 [2024-10-13 01:46:38.646103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.101 qpair failed and we were unable to recover it. 00:35:53.101 [2024-10-13 01:46:38.646215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.101 [2024-10-13 01:46:38.646241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.101 qpair failed and we were unable to recover it. 
00:35:53.101 [2024-10-13 01:46:38.646383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.101 [2024-10-13 01:46:38.646408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.101 qpair failed and we were unable to recover it. 00:35:53.101 [2024-10-13 01:46:38.646572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.101 [2024-10-13 01:46:38.646598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.101 qpair failed and we were unable to recover it. 00:35:53.101 [2024-10-13 01:46:38.646711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.101 [2024-10-13 01:46:38.646736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.101 qpair failed and we were unable to recover it. 00:35:53.101 [2024-10-13 01:46:38.646828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.101 [2024-10-13 01:46:38.646857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.101 qpair failed and we were unable to recover it. 00:35:53.101 [2024-10-13 01:46:38.646975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.101 [2024-10-13 01:46:38.647001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.101 qpair failed and we were unable to recover it. 00:35:53.101 [2024-10-13 01:46:38.647084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.101 [2024-10-13 01:46:38.647109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.101 qpair failed and we were unable to recover it. 00:35:53.102 [2024-10-13 01:46:38.647227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.102 [2024-10-13 01:46:38.647253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.102 qpair failed and we were unable to recover it. 00:35:53.102 [2024-10-13 01:46:38.647338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.102 [2024-10-13 01:46:38.647363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.102 qpair failed and we were unable to recover it. 00:35:53.102 [2024-10-13 01:46:38.647450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.102 [2024-10-13 01:46:38.647487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.102 qpair failed and we were unable to recover it. 00:35:53.102 [2024-10-13 01:46:38.647646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.102 [2024-10-13 01:46:38.647674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.102 qpair failed and we were unable to recover it. 
00:35:53.102 [2024-10-13 01:46:38.647798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.102 [2024-10-13 01:46:38.647826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.102 qpair failed and we were unable to recover it. 00:35:53.102 [2024-10-13 01:46:38.647953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.102 [2024-10-13 01:46:38.647978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.102 qpair failed and we were unable to recover it. 00:35:53.102 [2024-10-13 01:46:38.648089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.102 [2024-10-13 01:46:38.648115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.102 qpair failed and we were unable to recover it. 00:35:53.102 [2024-10-13 01:46:38.648252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.102 [2024-10-13 01:46:38.648295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.102 qpair failed and we were unable to recover it. 00:35:53.102 [2024-10-13 01:46:38.648377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.102 [2024-10-13 01:46:38.648404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.102 qpair failed and we were unable to recover it. 00:35:53.102 [2024-10-13 01:46:38.648554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.102 [2024-10-13 01:46:38.648580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.102 qpair failed and we were unable to recover it. 00:35:53.102 [2024-10-13 01:46:38.648668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.102 [2024-10-13 01:46:38.648694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.102 qpair failed and we were unable to recover it. 00:35:53.102 [2024-10-13 01:46:38.648853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.102 [2024-10-13 01:46:38.648881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.102 qpair failed and we were unable to recover it. 00:35:53.102 [2024-10-13 01:46:38.648971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.102 [2024-10-13 01:46:38.649014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.102 qpair failed and we were unable to recover it. 00:35:53.102 [2024-10-13 01:46:38.649157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.102 [2024-10-13 01:46:38.649184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.102 qpair failed and we were unable to recover it. 
00:35:53.102 [2024-10-13 01:46:38.649289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.102 [2024-10-13 01:46:38.649314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.102 qpair failed and we were unable to recover it. 00:35:53.102 [2024-10-13 01:46:38.649455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.102 [2024-10-13 01:46:38.649505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.102 qpair failed and we were unable to recover it. 00:35:53.102 [2024-10-13 01:46:38.649655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.102 [2024-10-13 01:46:38.649683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.102 qpair failed and we were unable to recover it. 00:35:53.102 [2024-10-13 01:46:38.649852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.102 [2024-10-13 01:46:38.649877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.102 qpair failed and we were unable to recover it. 00:35:53.102 [2024-10-13 01:46:38.649986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.102 [2024-10-13 01:46:38.650028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.102 qpair failed and we were unable to recover it. 00:35:53.102 [2024-10-13 01:46:38.650151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.102 [2024-10-13 01:46:38.650180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.102 qpair failed and we were unable to recover it. 00:35:53.102 [2024-10-13 01:46:38.650304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.102 [2024-10-13 01:46:38.650332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.102 qpair failed and we were unable to recover it. 00:35:53.102 [2024-10-13 01:46:38.650433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.102 [2024-10-13 01:46:38.650476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.102 qpair failed and we were unable to recover it. 00:35:53.102 [2024-10-13 01:46:38.650619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.102 [2024-10-13 01:46:38.650644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.102 qpair failed and we were unable to recover it. 00:35:53.102 [2024-10-13 01:46:38.650747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.102 [2024-10-13 01:46:38.650784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.102 qpair failed and we were unable to recover it. 
00:35:53.102 [2024-10-13 01:46:38.650913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.102 [2024-10-13 01:46:38.650956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.102 qpair failed and we were unable to recover it. 00:35:53.102 [2024-10-13 01:46:38.651074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.102 [2024-10-13 01:46:38.651099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.102 qpair failed and we were unable to recover it. 00:35:53.102 [2024-10-13 01:46:38.651213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.102 [2024-10-13 01:46:38.651239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.102 qpair failed and we were unable to recover it. 00:35:53.102 [2024-10-13 01:46:38.651353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.102 [2024-10-13 01:46:38.651379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.102 qpair failed and we were unable to recover it. 00:35:53.102 [2024-10-13 01:46:38.651604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.102 [2024-10-13 01:46:38.651633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.102 qpair failed and we were unable to recover it. 00:35:53.102 [2024-10-13 01:46:38.651778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.102 [2024-10-13 01:46:38.651804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.102 qpair failed and we were unable to recover it. 00:35:53.102 [2024-10-13 01:46:38.651917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.102 [2024-10-13 01:46:38.651942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.102 qpair failed and we were unable to recover it. 00:35:53.102 [2024-10-13 01:46:38.652107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.102 [2024-10-13 01:46:38.652135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.102 qpair failed and we were unable to recover it. 00:35:53.102 [2024-10-13 01:46:38.652228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.102 [2024-10-13 01:46:38.652255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.102 qpair failed and we were unable to recover it. 00:35:53.102 [2024-10-13 01:46:38.652414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.102 [2024-10-13 01:46:38.652440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.102 qpair failed and we were unable to recover it. 
00:35:53.102 [2024-10-13 01:46:38.652568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.102 [2024-10-13 01:46:38.652609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.102 qpair failed and we were unable to recover it. 00:35:53.102 [2024-10-13 01:46:38.652704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.102 [2024-10-13 01:46:38.652733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.102 qpair failed and we were unable to recover it. 00:35:53.102 [2024-10-13 01:46:38.652885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.102 [2024-10-13 01:46:38.652911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.102 qpair failed and we were unable to recover it. 00:35:53.102 [2024-10-13 01:46:38.653048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.102 [2024-10-13 01:46:38.653078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.102 qpair failed and we were unable to recover it. 00:35:53.102 [2024-10-13 01:46:38.653191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.102 [2024-10-13 01:46:38.653217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.102 qpair failed and we were unable to recover it. 00:35:53.103 [2024-10-13 01:46:38.653331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.103 [2024-10-13 01:46:38.653357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.103 qpair failed and we were unable to recover it. 00:35:53.103 [2024-10-13 01:46:38.653508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.103 [2024-10-13 01:46:38.653536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.103 qpair failed and we were unable to recover it. 00:35:53.103 [2024-10-13 01:46:38.653642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.103 [2024-10-13 01:46:38.653668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.103 qpair failed and we were unable to recover it. 00:35:53.103 [2024-10-13 01:46:38.653788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.103 [2024-10-13 01:46:38.653813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.103 qpair failed and we were unable to recover it. 00:35:53.103 [2024-10-13 01:46:38.653941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.103 [2024-10-13 01:46:38.653971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.103 qpair failed and we were unable to recover it. 
00:35:53.103 [2024-10-13 01:46:38.654124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.103 [2024-10-13 01:46:38.654152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.103 qpair failed and we were unable to recover it. 00:35:53.103 [2024-10-13 01:46:38.654314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.103 [2024-10-13 01:46:38.654340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.103 qpair failed and we were unable to recover it. 00:35:53.103 [2024-10-13 01:46:38.654485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.103 [2024-10-13 01:46:38.654512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.103 qpair failed and we were unable to recover it. 00:35:53.103 [2024-10-13 01:46:38.654671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.103 [2024-10-13 01:46:38.654700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.103 qpair failed and we were unable to recover it. 00:35:53.103 [2024-10-13 01:46:38.654861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.103 [2024-10-13 01:46:38.654889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.103 qpair failed and we were unable to recover it. 00:35:53.103 [2024-10-13 01:46:38.655030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.103 [2024-10-13 01:46:38.655057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.103 qpair failed and we were unable to recover it. 00:35:53.103 [2024-10-13 01:46:38.655197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.103 [2024-10-13 01:46:38.655223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.103 qpair failed and we were unable to recover it. 00:35:53.103 [2024-10-13 01:46:38.655377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.103 [2024-10-13 01:46:38.655405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.103 qpair failed and we were unable to recover it. 00:35:53.103 [2024-10-13 01:46:38.655554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.103 [2024-10-13 01:46:38.655581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.103 qpair failed and we were unable to recover it. 00:35:53.103 [2024-10-13 01:46:38.655732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.103 [2024-10-13 01:46:38.655768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.103 qpair failed and we were unable to recover it. 
00:35:53.103 [2024-10-13 01:46:38.655882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.103 [2024-10-13 01:46:38.655925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.103 qpair failed and we were unable to recover it. 00:35:53.103 [2024-10-13 01:46:38.656057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.103 [2024-10-13 01:46:38.656086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.103 qpair failed and we were unable to recover it. 00:35:53.103 [2024-10-13 01:46:38.656213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.103 [2024-10-13 01:46:38.656241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.103 qpair failed and we were unable to recover it. 00:35:53.103 [2024-10-13 01:46:38.656348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.103 [2024-10-13 01:46:38.656374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.103 qpair failed and we were unable to recover it. 00:35:53.103 [2024-10-13 01:46:38.656496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.103 [2024-10-13 01:46:38.656523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.103 qpair failed and we were unable to recover it. 00:35:53.103 [2024-10-13 01:46:38.656639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.103 [2024-10-13 01:46:38.656665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.103 qpair failed and we were unable to recover it. 00:35:53.103 [2024-10-13 01:46:38.656752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.103 [2024-10-13 01:46:38.656785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.103 qpair failed and we were unable to recover it. 00:35:53.103 [2024-10-13 01:46:38.656943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.103 [2024-10-13 01:46:38.656969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.103 qpair failed and we were unable to recover it. 00:35:53.103 [2024-10-13 01:46:38.657082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.103 [2024-10-13 01:46:38.657108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.103 qpair failed and we were unable to recover it. 00:35:53.103 [2024-10-13 01:46:38.657236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.103 [2024-10-13 01:46:38.657278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.103 qpair failed and we were unable to recover it. 
00:35:53.103 [2024-10-13 01:46:38.657385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.103 [2024-10-13 01:46:38.657411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.103 qpair failed and we were unable to recover it. 00:35:53.103 [2024-10-13 01:46:38.657540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.103 [2024-10-13 01:46:38.657566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.103 qpair failed and we were unable to recover it. 00:35:53.103 [2024-10-13 01:46:38.657678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.103 [2024-10-13 01:46:38.657703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.103 qpair failed and we were unable to recover it. 00:35:53.103 [2024-10-13 01:46:38.657832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.103 [2024-10-13 01:46:38.657858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.103 qpair failed and we were unable to recover it. 00:35:53.103 [2024-10-13 01:46:38.657953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.103 [2024-10-13 01:46:38.657977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.103 qpair failed and we were unable to recover it. 00:35:53.103 [2024-10-13 01:46:38.658086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.103 [2024-10-13 01:46:38.658112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.103 qpair failed and we were unable to recover it. 00:35:53.103 [2024-10-13 01:46:38.658197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.103 [2024-10-13 01:46:38.658223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.103 qpair failed and we were unable to recover it. 00:35:53.103 [2024-10-13 01:46:38.658338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.103 [2024-10-13 01:46:38.658364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.103 qpair failed and we were unable to recover it. 00:35:53.103 [2024-10-13 01:46:38.658493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.103 [2024-10-13 01:46:38.658521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.104 qpair failed and we were unable to recover it. 00:35:53.104 [2024-10-13 01:46:38.658630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.104 [2024-10-13 01:46:38.658656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.104 qpair failed and we were unable to recover it. 
00:35:53.104 [2024-10-13 01:46:38.658807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.104 [2024-10-13 01:46:38.658832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.104 qpair failed and we were unable to recover it. 00:35:53.104 [2024-10-13 01:46:38.658998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.104 [2024-10-13 01:46:38.659028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.104 qpair failed and we were unable to recover it. 00:35:53.104 [2024-10-13 01:46:38.659120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.104 [2024-10-13 01:46:38.659150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.104 qpair failed and we were unable to recover it. 00:35:53.104 [2024-10-13 01:46:38.659305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.104 [2024-10-13 01:46:38.659336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.104 qpair failed and we were unable to recover it. 00:35:53.104 [2024-10-13 01:46:38.659431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.104 [2024-10-13 01:46:38.659457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.104 qpair failed and we were unable to recover it. 00:35:53.104 [2024-10-13 01:46:38.659558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.104 [2024-10-13 01:46:38.659583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.104 qpair failed and we were unable to recover it. 00:35:53.104 [2024-10-13 01:46:38.659689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.104 [2024-10-13 01:46:38.659716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.104 qpair failed and we were unable to recover it. 00:35:53.104 [2024-10-13 01:46:38.659799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.104 [2024-10-13 01:46:38.659824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.104 qpair failed and we were unable to recover it. 00:35:53.104 [2024-10-13 01:46:38.659927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.104 [2024-10-13 01:46:38.659953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.104 qpair failed and we were unable to recover it. 00:35:53.104 [2024-10-13 01:46:38.660082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.104 [2024-10-13 01:46:38.660110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.104 qpair failed and we were unable to recover it. 
00:35:53.104 [2024-10-13 01:46:38.660232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.104 [2024-10-13 01:46:38.660261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.104 qpair failed and we were unable to recover it. 00:35:53.104 [2024-10-13 01:46:38.660427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.104 [2024-10-13 01:46:38.660453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.104 qpair failed and we were unable to recover it. 00:35:53.104 [2024-10-13 01:46:38.660554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.104 [2024-10-13 01:46:38.660579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.104 qpair failed and we were unable to recover it. 00:35:53.104 [2024-10-13 01:46:38.660735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.104 [2024-10-13 01:46:38.660764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.104 qpair failed and we were unable to recover it. 00:35:53.104 [2024-10-13 01:46:38.660911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.104 [2024-10-13 01:46:38.660936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.104 qpair failed and we were unable to recover it. 00:35:53.104 [2024-10-13 01:46:38.661020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.104 [2024-10-13 01:46:38.661046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.104 qpair failed and we were unable to recover it. 00:35:53.104 [2024-10-13 01:46:38.661189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.104 [2024-10-13 01:46:38.661214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.104 qpair failed and we were unable to recover it. 00:35:53.104 [2024-10-13 01:46:38.661347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.104 [2024-10-13 01:46:38.661375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.104 qpair failed and we were unable to recover it. 00:35:53.104 [2024-10-13 01:46:38.661512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.104 [2024-10-13 01:46:38.661538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.104 qpair failed and we were unable to recover it. 00:35:53.104 [2024-10-13 01:46:38.661630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.104 [2024-10-13 01:46:38.661655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.104 qpair failed and we were unable to recover it. 
00:35:53.104 [2024-10-13 01:46:38.661792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.104 [2024-10-13 01:46:38.661818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.104 qpair failed and we were unable to recover it. 00:35:53.104 [2024-10-13 01:46:38.661957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.104 [2024-10-13 01:46:38.661985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.104 qpair failed and we were unable to recover it. 00:35:53.104 [2024-10-13 01:46:38.662109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.104 [2024-10-13 01:46:38.662137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.104 qpair failed and we were unable to recover it. 00:35:53.104 [2024-10-13 01:46:38.662299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.104 [2024-10-13 01:46:38.662325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.104 qpair failed and we were unable to recover it. 00:35:53.104 [2024-10-13 01:46:38.662410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.104 [2024-10-13 01:46:38.662436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.104 qpair failed and we were unable to recover it. 00:35:53.104 [2024-10-13 01:46:38.662571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.104 [2024-10-13 01:46:38.662600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.104 qpair failed and we were unable to recover it. 00:35:53.104 [2024-10-13 01:46:38.662724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.104 [2024-10-13 01:46:38.662753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.104 qpair failed and we were unable to recover it. 00:35:53.104 [2024-10-13 01:46:38.662864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.104 [2024-10-13 01:46:38.662889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.104 qpair failed and we were unable to recover it. 00:35:53.104 [2024-10-13 01:46:38.663004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.104 [2024-10-13 01:46:38.663030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.104 qpair failed and we were unable to recover it. 00:35:53.104 [2024-10-13 01:46:38.663152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.104 [2024-10-13 01:46:38.663193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.104 qpair failed and we were unable to recover it. 
00:35:53.104 [2024-10-13 01:46:38.663306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.104 [2024-10-13 01:46:38.663333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.104 qpair failed and we were unable to recover it. 00:35:53.104 [2024-10-13 01:46:38.663421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.104 [2024-10-13 01:46:38.663445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.104 qpair failed and we were unable to recover it. 00:35:53.104 [2024-10-13 01:46:38.663553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.104 [2024-10-13 01:46:38.663580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.104 qpair failed and we were unable to recover it. 00:35:53.104 [2024-10-13 01:46:38.663725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.104 [2024-10-13 01:46:38.663751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.104 qpair failed and we were unable to recover it. 00:35:53.104 [2024-10-13 01:46:38.663898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.104 [2024-10-13 01:46:38.663927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.104 qpair failed and we were unable to recover it. 00:35:53.104 [2024-10-13 01:46:38.664031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.104 [2024-10-13 01:46:38.664057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.104 qpair failed and we were unable to recover it. 00:35:53.104 [2024-10-13 01:46:38.664174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.105 [2024-10-13 01:46:38.664200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.105 qpair failed and we were unable to recover it. 00:35:53.105 [2024-10-13 01:46:38.664316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.105 [2024-10-13 01:46:38.664342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.105 qpair failed and we were unable to recover it. 00:35:53.105 [2024-10-13 01:46:38.664429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.105 [2024-10-13 01:46:38.664456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.105 qpair failed and we were unable to recover it. 00:35:53.105 [2024-10-13 01:46:38.664596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.105 [2024-10-13 01:46:38.664622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.105 qpair failed and we were unable to recover it. 
00:35:53.105 [2024-10-13 01:46:38.664732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.105 [2024-10-13 01:46:38.664758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.105 qpair failed and we were unable to recover it. 00:35:53.105 [2024-10-13 01:46:38.664897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.105 [2024-10-13 01:46:38.664925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.105 qpair failed and we were unable to recover it. 00:35:53.105 [2024-10-13 01:46:38.665051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.105 [2024-10-13 01:46:38.665080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.105 qpair failed and we were unable to recover it. 00:35:53.105 [2024-10-13 01:46:38.665223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.105 [2024-10-13 01:46:38.665254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.105 qpair failed and we were unable to recover it. 00:35:53.105 [2024-10-13 01:46:38.665396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.105 [2024-10-13 01:46:38.665422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.105 qpair failed and we were unable to recover it. 00:35:53.105 [2024-10-13 01:46:38.665536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.105 [2024-10-13 01:46:38.665565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.105 qpair failed and we were unable to recover it. 00:35:53.105 [2024-10-13 01:46:38.665688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.105 [2024-10-13 01:46:38.665717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.105 qpair failed and we were unable to recover it. 00:35:53.105 [2024-10-13 01:46:38.665823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.105 [2024-10-13 01:46:38.665848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.105 qpair failed and we were unable to recover it. 00:35:53.105 [2024-10-13 01:46:38.665965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.105 [2024-10-13 01:46:38.665990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.105 qpair failed and we were unable to recover it. 00:35:53.105 [2024-10-13 01:46:38.666163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.105 [2024-10-13 01:46:38.666192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.105 qpair failed and we were unable to recover it. 
00:35:53.105 [2024-10-13 01:46:38.666313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.105 [2024-10-13 01:46:38.666342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.105 qpair failed and we were unable to recover it. 00:35:53.105 [2024-10-13 01:46:38.666456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.105 [2024-10-13 01:46:38.666486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.105 qpair failed and we were unable to recover it. 00:35:53.105 [2024-10-13 01:46:38.666602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.105 [2024-10-13 01:46:38.666628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.105 qpair failed and we were unable to recover it. 00:35:53.105 [2024-10-13 01:46:38.666779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.105 [2024-10-13 01:46:38.666806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.105 qpair failed and we were unable to recover it. 00:35:53.105 [2024-10-13 01:46:38.666930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.105 [2024-10-13 01:46:38.666958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.105 qpair failed and we were unable to recover it. 00:35:53.105 [2024-10-13 01:46:38.667098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.105 [2024-10-13 01:46:38.667123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.105 qpair failed and we were unable to recover it. 00:35:53.105 [2024-10-13 01:46:38.667231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.105 [2024-10-13 01:46:38.667256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.105 qpair failed and we were unable to recover it. 00:35:53.105 [2024-10-13 01:46:38.667391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.105 [2024-10-13 01:46:38.667419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.105 qpair failed and we were unable to recover it. 00:35:53.105 [2024-10-13 01:46:38.667570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.105 [2024-10-13 01:46:38.667596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.105 qpair failed and we were unable to recover it. 00:35:53.105 [2024-10-13 01:46:38.667685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.105 [2024-10-13 01:46:38.667712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.105 qpair failed and we were unable to recover it. 
00:35:53.105 [2024-10-13 01:46:38.667812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.105 [2024-10-13 01:46:38.667838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.105 qpair failed and we were unable to recover it. 00:35:53.105 [2024-10-13 01:46:38.667976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.105 [2024-10-13 01:46:38.668002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.105 qpair failed and we were unable to recover it. 00:35:53.105 [2024-10-13 01:46:38.668106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.105 [2024-10-13 01:46:38.668134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.105 qpair failed and we were unable to recover it. 00:35:53.105 [2024-10-13 01:46:38.668247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.105 [2024-10-13 01:46:38.668273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.105 qpair failed and we were unable to recover it. 00:35:53.105 [2024-10-13 01:46:38.668362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.105 [2024-10-13 01:46:38.668389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.105 qpair failed and we were unable to recover it. 00:35:53.105 [2024-10-13 01:46:38.668477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.105 [2024-10-13 01:46:38.668519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.105 qpair failed and we were unable to recover it. 00:35:53.105 [2024-10-13 01:46:38.668644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.105 [2024-10-13 01:46:38.668673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.105 qpair failed and we were unable to recover it. 00:35:53.105 [2024-10-13 01:46:38.668801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.105 [2024-10-13 01:46:38.668827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.105 qpair failed and we were unable to recover it. 00:35:53.105 [2024-10-13 01:46:38.668965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.105 [2024-10-13 01:46:38.668990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.105 qpair failed and we were unable to recover it. 00:35:53.105 [2024-10-13 01:46:38.669126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.105 [2024-10-13 01:46:38.669154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.105 qpair failed and we were unable to recover it. 
00:35:53.105 [2024-10-13 01:46:38.669257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.106 [2024-10-13 01:46:38.669285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.106 qpair failed and we were unable to recover it. 00:35:53.106 [2024-10-13 01:46:38.669395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.106 [2024-10-13 01:46:38.669421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.106 qpair failed and we were unable to recover it. 00:35:53.106 [2024-10-13 01:46:38.669511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.106 [2024-10-13 01:46:38.669537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.106 qpair failed and we were unable to recover it. 00:35:53.106 [2024-10-13 01:46:38.669655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.106 [2024-10-13 01:46:38.669681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.106 qpair failed and we were unable to recover it. 00:35:53.377 [2024-10-13 01:46:38.669791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.377 [2024-10-13 01:46:38.669819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.377 qpair failed and we were unable to recover it. 00:35:53.377 [2024-10-13 01:46:38.669983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.377 [2024-10-13 01:46:38.670009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.377 qpair failed and we were unable to recover it. 00:35:53.377 [2024-10-13 01:46:38.670149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.378 [2024-10-13 01:46:38.670175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.378 qpair failed and we were unable to recover it. 00:35:53.378 [2024-10-13 01:46:38.670313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.378 [2024-10-13 01:46:38.670356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.378 qpair failed and we were unable to recover it. 00:35:53.378 [2024-10-13 01:46:38.670478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.378 [2024-10-13 01:46:38.670504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.378 qpair failed and we were unable to recover it. 00:35:53.378 [2024-10-13 01:46:38.670620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.378 [2024-10-13 01:46:38.670646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.378 qpair failed and we were unable to recover it. 
00:35:53.378 [2024-10-13 01:46:38.670784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.378 [2024-10-13 01:46:38.670810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.378 qpair failed and we were unable to recover it. 00:35:53.378 [2024-10-13 01:46:38.670927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.378 [2024-10-13 01:46:38.670952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.378 qpair failed and we were unable to recover it. 00:35:53.378 [2024-10-13 01:46:38.671069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.378 [2024-10-13 01:46:38.671094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.378 qpair failed and we were unable to recover it. 00:35:53.378 [2024-10-13 01:46:38.671202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.378 [2024-10-13 01:46:38.671232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.378 qpair failed and we were unable to recover it. 00:35:53.378 [2024-10-13 01:46:38.671341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.378 [2024-10-13 01:46:38.671366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.378 qpair failed and we were unable to recover it. 00:35:53.378 [2024-10-13 01:46:38.671486] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d52ab0 is same with the state(6) to be set 00:35:53.378 [2024-10-13 01:46:38.671678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.378 [2024-10-13 01:46:38.671719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.378 qpair failed and we were unable to recover it. 00:35:53.378 [2024-10-13 01:46:38.671820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.378 [2024-10-13 01:46:38.671848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.378 qpair failed and we were unable to recover it. 00:35:53.378 [2024-10-13 01:46:38.671941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.378 [2024-10-13 01:46:38.671969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.378 qpair failed and we were unable to recover it. 00:35:53.378 [2024-10-13 01:46:38.672087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.378 [2024-10-13 01:46:38.672116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.378 qpair failed and we were unable to recover it. 
00:35:53.378 [2024-10-13 01:46:38.672253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.378 [2024-10-13 01:46:38.672280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.378 qpair failed and we were unable to recover it. 00:35:53.378 [2024-10-13 01:46:38.672397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.378 [2024-10-13 01:46:38.672425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.378 qpair failed and we were unable to recover it. 00:35:53.378 [2024-10-13 01:46:38.672550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.378 [2024-10-13 01:46:38.672587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.378 qpair failed and we were unable to recover it. 00:35:53.378 [2024-10-13 01:46:38.672686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.378 [2024-10-13 01:46:38.672726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.378 qpair failed and we were unable to recover it. 00:35:53.378 [2024-10-13 01:46:38.672847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.378 [2024-10-13 01:46:38.672875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.378 qpair failed and we were unable to recover it. 00:35:53.378 [2024-10-13 01:46:38.673018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.378 [2024-10-13 01:46:38.673046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.378 qpair failed and we were unable to recover it. 00:35:53.378 [2024-10-13 01:46:38.673158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.378 [2024-10-13 01:46:38.673185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.378 qpair failed and we were unable to recover it. 00:35:53.378 [2024-10-13 01:46:38.673299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.378 [2024-10-13 01:46:38.673331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.378 qpair failed and we were unable to recover it. 00:35:53.378 [2024-10-13 01:46:38.673462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.378 [2024-10-13 01:46:38.673499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.378 qpair failed and we were unable to recover it. 00:35:53.378 [2024-10-13 01:46:38.673611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.378 [2024-10-13 01:46:38.673638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.378 qpair failed and we were unable to recover it. 
00:35:53.378 [2024-10-13 01:46:38.673780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.378 [2024-10-13 01:46:38.673806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.378 qpair failed and we were unable to recover it. 00:35:53.378 [2024-10-13 01:46:38.673920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.378 [2024-10-13 01:46:38.673948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.378 qpair failed and we were unable to recover it. 00:35:53.378 [2024-10-13 01:46:38.674088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.378 [2024-10-13 01:46:38.674115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.378 qpair failed and we were unable to recover it. 00:35:53.378 [2024-10-13 01:46:38.674252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.378 [2024-10-13 01:46:38.674279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.378 qpair failed and we were unable to recover it. 00:35:53.378 [2024-10-13 01:46:38.674434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.378 [2024-10-13 01:46:38.674482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.378 qpair failed and we were unable to recover it. 00:35:53.378 [2024-10-13 01:46:38.674606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.378 [2024-10-13 01:46:38.674634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.378 qpair failed and we were unable to recover it. 00:35:53.378 [2024-10-13 01:46:38.674752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.378 [2024-10-13 01:46:38.674777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.378 qpair failed and we were unable to recover it. 00:35:53.378 [2024-10-13 01:46:38.674897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.378 [2024-10-13 01:46:38.674922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.378 qpair failed and we were unable to recover it. 00:35:53.378 [2024-10-13 01:46:38.675033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.378 [2024-10-13 01:46:38.675058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.378 qpair failed and we were unable to recover it. 00:35:53.378 [2024-10-13 01:46:38.675170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.378 [2024-10-13 01:46:38.675195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.378 qpair failed and we were unable to recover it. 
00:35:53.378 [2024-10-13 01:46:38.675355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.378 [2024-10-13 01:46:38.675384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.379 qpair failed and we were unable to recover it. 00:35:53.379 [2024-10-13 01:46:38.675519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.379 [2024-10-13 01:46:38.675550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.379 qpair failed and we were unable to recover it. 00:35:53.379 [2024-10-13 01:46:38.675706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.379 [2024-10-13 01:46:38.675734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.379 qpair failed and we were unable to recover it. 00:35:53.379 [2024-10-13 01:46:38.675900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.379 [2024-10-13 01:46:38.675932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.379 qpair failed and we were unable to recover it. 00:35:53.379 [2024-10-13 01:46:38.676085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.379 [2024-10-13 01:46:38.676131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.379 qpair failed and we were unable to recover it. 00:35:53.379 [2024-10-13 01:46:38.676260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.379 [2024-10-13 01:46:38.676305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.379 qpair failed and we were unable to recover it. 00:35:53.379 [2024-10-13 01:46:38.676393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.379 [2024-10-13 01:46:38.676420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.379 qpair failed and we were unable to recover it. 00:35:53.379 [2024-10-13 01:46:38.676553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.379 [2024-10-13 01:46:38.676583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.379 qpair failed and we were unable to recover it. 00:35:53.379 [2024-10-13 01:46:38.676694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.379 [2024-10-13 01:46:38.676722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.379 qpair failed and we were unable to recover it. 00:35:53.379 [2024-10-13 01:46:38.676836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.379 [2024-10-13 01:46:38.676880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.379 qpair failed and we were unable to recover it. 
00:35:53.379 [2024-10-13 01:46:38.677023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.379 [2024-10-13 01:46:38.677050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.379 qpair failed and we were unable to recover it. 00:35:53.379 [2024-10-13 01:46:38.677138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.379 [2024-10-13 01:46:38.677165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.379 qpair failed and we were unable to recover it. 00:35:53.379 [2024-10-13 01:46:38.677282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.379 [2024-10-13 01:46:38.677309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.379 qpair failed and we were unable to recover it. 00:35:53.379 [2024-10-13 01:46:38.677530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.379 [2024-10-13 01:46:38.677562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.379 qpair failed and we were unable to recover it. 00:35:53.379 [2024-10-13 01:46:38.677694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.379 [2024-10-13 01:46:38.677723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.379 qpair failed and we were unable to recover it. 00:35:53.379 [2024-10-13 01:46:38.677849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.379 [2024-10-13 01:46:38.677876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.379 qpair failed and we were unable to recover it. 00:35:53.379 [2024-10-13 01:46:38.677969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.379 [2024-10-13 01:46:38.677998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.379 qpair failed and we were unable to recover it. 00:35:53.379 [2024-10-13 01:46:38.678182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.379 [2024-10-13 01:46:38.678249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.379 qpair failed and we were unable to recover it. 00:35:53.379 [2024-10-13 01:46:38.678374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.379 [2024-10-13 01:46:38.678403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.379 qpair failed and we were unable to recover it. 00:35:53.379 [2024-10-13 01:46:38.678538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.379 [2024-10-13 01:46:38.678565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.379 qpair failed and we were unable to recover it. 
00:35:53.379 [2024-10-13 01:46:38.678682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.379 [2024-10-13 01:46:38.678707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.379 qpair failed and we were unable to recover it. 00:35:53.379 [2024-10-13 01:46:38.678846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.379 [2024-10-13 01:46:38.678875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.379 qpair failed and we were unable to recover it. 00:35:53.379 [2024-10-13 01:46:38.679039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.379 [2024-10-13 01:46:38.679066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.379 qpair failed and we were unable to recover it. 00:35:53.379 [2024-10-13 01:46:38.679216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.379 [2024-10-13 01:46:38.679257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.379 qpair failed and we were unable to recover it. 00:35:53.379 [2024-10-13 01:46:38.679378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.379 [2024-10-13 01:46:38.679402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.379 qpair failed and we were unable to recover it. 00:35:53.379 [2024-10-13 01:46:38.679524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.379 [2024-10-13 01:46:38.679550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.379 qpair failed and we were unable to recover it. 00:35:53.379 [2024-10-13 01:46:38.679689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.379 [2024-10-13 01:46:38.679714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.379 qpair failed and we were unable to recover it. 00:35:53.379 [2024-10-13 01:46:38.679831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.379 [2024-10-13 01:46:38.679866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.379 qpair failed and we were unable to recover it. 00:35:53.379 [2024-10-13 01:46:38.680097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.379 [2024-10-13 01:46:38.680126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.379 qpair failed and we were unable to recover it. 00:35:53.379 [2024-10-13 01:46:38.680280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.379 [2024-10-13 01:46:38.680308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.379 qpair failed and we were unable to recover it. 
00:35:53.379 [2024-10-13 01:46:38.680393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.379 [2024-10-13 01:46:38.680421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.379 qpair failed and we were unable to recover it. 00:35:53.379 [2024-10-13 01:46:38.680583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.379 [2024-10-13 01:46:38.680624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.379 qpair failed and we were unable to recover it. 00:35:53.379 [2024-10-13 01:46:38.680729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.379 [2024-10-13 01:46:38.680790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.379 qpair failed and we were unable to recover it. 00:35:53.379 [2024-10-13 01:46:38.680930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.379 [2024-10-13 01:46:38.680961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.379 qpair failed and we were unable to recover it. 00:35:53.379 [2024-10-13 01:46:38.681089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.379 [2024-10-13 01:46:38.681119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.379 qpair failed and we were unable to recover it. 00:35:53.379 [2024-10-13 01:46:38.681299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.379 [2024-10-13 01:46:38.681356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.379 qpair failed and we were unable to recover it. 00:35:53.379 [2024-10-13 01:46:38.681458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.379 [2024-10-13 01:46:38.681496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.379 qpair failed and we were unable to recover it. 00:35:53.379 [2024-10-13 01:46:38.681647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.379 [2024-10-13 01:46:38.681674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.379 qpair failed and we were unable to recover it. 00:35:53.379 [2024-10-13 01:46:38.681787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.379 [2024-10-13 01:46:38.681813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.379 qpair failed and we were unable to recover it. 00:35:53.380 [2024-10-13 01:46:38.681899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.380 [2024-10-13 01:46:38.681925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.380 qpair failed and we were unable to recover it. 
00:35:53.380 [2024-10-13 01:46:38.682039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.380 [2024-10-13 01:46:38.682070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.380 qpair failed and we were unable to recover it. 00:35:53.380 [2024-10-13 01:46:38.682270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.380 [2024-10-13 01:46:38.682313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.380 qpair failed and we were unable to recover it. 00:35:53.380 [2024-10-13 01:46:38.682452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.380 [2024-10-13 01:46:38.682482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.380 qpair failed and we were unable to recover it. 00:35:53.380 [2024-10-13 01:46:38.682593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.380 [2024-10-13 01:46:38.682618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.380 qpair failed and we were unable to recover it. 00:35:53.380 [2024-10-13 01:46:38.682742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.380 [2024-10-13 01:46:38.682770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.380 qpair failed and we were unable to recover it. 00:35:53.380 [2024-10-13 01:46:38.682880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.380 [2024-10-13 01:46:38.682904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.380 qpair failed and we were unable to recover it. 00:35:53.380 [2024-10-13 01:46:38.683009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.380 [2024-10-13 01:46:38.683043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.380 qpair failed and we were unable to recover it. 00:35:53.380 [2024-10-13 01:46:38.683162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.380 [2024-10-13 01:46:38.683191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.380 qpair failed and we were unable to recover it. 00:35:53.380 [2024-10-13 01:46:38.683309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.380 [2024-10-13 01:46:38.683338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.380 qpair failed and we were unable to recover it. 00:35:53.380 [2024-10-13 01:46:38.683462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.380 [2024-10-13 01:46:38.683510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.380 qpair failed and we were unable to recover it. 
00:35:53.380 [2024-10-13 01:46:38.683649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.380 [2024-10-13 01:46:38.683675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.380 qpair failed and we were unable to recover it. 00:35:53.380 [2024-10-13 01:46:38.683800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.380 [2024-10-13 01:46:38.683844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.380 qpair failed and we were unable to recover it. 00:35:53.380 [2024-10-13 01:46:38.684003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.380 [2024-10-13 01:46:38.684047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.380 qpair failed and we were unable to recover it. 00:35:53.380 [2024-10-13 01:46:38.684229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.380 [2024-10-13 01:46:38.684258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.380 qpair failed and we were unable to recover it. 00:35:53.380 [2024-10-13 01:46:38.684378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.380 [2024-10-13 01:46:38.684413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.380 qpair failed and we were unable to recover it. 00:35:53.380 [2024-10-13 01:46:38.684558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.380 [2024-10-13 01:46:38.684585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.380 qpair failed and we were unable to recover it. 00:35:53.380 [2024-10-13 01:46:38.684676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.380 [2024-10-13 01:46:38.684702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.380 qpair failed and we were unable to recover it. 00:35:53.380 [2024-10-13 01:46:38.684828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.380 [2024-10-13 01:46:38.684854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.380 qpair failed and we were unable to recover it. 00:35:53.380 [2024-10-13 01:46:38.684998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.380 [2024-10-13 01:46:38.685027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.380 qpair failed and we were unable to recover it. 00:35:53.380 [2024-10-13 01:46:38.685180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.380 [2024-10-13 01:46:38.685209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.380 qpair failed and we were unable to recover it. 
00:35:53.380 [2024-10-13 01:46:38.685304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.380 [2024-10-13 01:46:38.685348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.380 qpair failed and we were unable to recover it. 00:35:53.380 [2024-10-13 01:46:38.685465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.380 [2024-10-13 01:46:38.685498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.380 qpair failed and we were unable to recover it. 00:35:53.380 [2024-10-13 01:46:38.685583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.380 [2024-10-13 01:46:38.685609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.380 qpair failed and we were unable to recover it. 00:35:53.380 [2024-10-13 01:46:38.685707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.380 [2024-10-13 01:46:38.685748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.380 qpair failed and we were unable to recover it. 00:35:53.380 [2024-10-13 01:46:38.685871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.380 [2024-10-13 01:46:38.685899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.380 qpair failed and we were unable to recover it. 00:35:53.380 [2024-10-13 01:46:38.686053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.380 [2024-10-13 01:46:38.686083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.380 qpair failed and we were unable to recover it. 00:35:53.380 [2024-10-13 01:46:38.686221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.380 [2024-10-13 01:46:38.686248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.380 qpair failed and we were unable to recover it. 00:35:53.380 [2024-10-13 01:46:38.686340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.380 [2024-10-13 01:46:38.686367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.380 qpair failed and we were unable to recover it. 00:35:53.380 [2024-10-13 01:46:38.686480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.380 [2024-10-13 01:46:38.686520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.380 qpair failed and we were unable to recover it. 00:35:53.380 [2024-10-13 01:46:38.686619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.380 [2024-10-13 01:46:38.686646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.380 qpair failed and we were unable to recover it. 
00:35:53.380 [2024-10-13 01:46:38.686786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.380 [2024-10-13 01:46:38.686812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.380 qpair failed and we were unable to recover it. 00:35:53.380 [2024-10-13 01:46:38.686953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.380 [2024-10-13 01:46:38.686978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.380 qpair failed and we were unable to recover it. 00:35:53.380 [2024-10-13 01:46:38.687083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.380 [2024-10-13 01:46:38.687134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.380 qpair failed and we were unable to recover it. 00:35:53.380 [2024-10-13 01:46:38.687294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.380 [2024-10-13 01:46:38.687337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.380 qpair failed and we were unable to recover it. 00:35:53.380 [2024-10-13 01:46:38.687479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.380 [2024-10-13 01:46:38.687512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.380 qpair failed and we were unable to recover it. 00:35:53.380 [2024-10-13 01:46:38.687648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.380 [2024-10-13 01:46:38.687678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.380 qpair failed and we were unable to recover it. 00:35:53.380 [2024-10-13 01:46:38.687817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.380 [2024-10-13 01:46:38.687862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.381 qpair failed and we were unable to recover it. 00:35:53.381 [2024-10-13 01:46:38.688094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.381 [2024-10-13 01:46:38.688142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.381 qpair failed and we were unable to recover it. 00:35:53.381 [2024-10-13 01:46:38.688262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.381 [2024-10-13 01:46:38.688289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.381 qpair failed and we were unable to recover it. 00:35:53.381 [2024-10-13 01:46:38.688409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.381 [2024-10-13 01:46:38.688436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.381 qpair failed and we were unable to recover it. 
00:35:53.381 [2024-10-13 01:46:38.688529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.381 [2024-10-13 01:46:38.688556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.381 qpair failed and we were unable to recover it. 00:35:53.381 [2024-10-13 01:46:38.688662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.381 [2024-10-13 01:46:38.688703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.381 qpair failed and we were unable to recover it. 00:35:53.381 [2024-10-13 01:46:38.688818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.381 [2024-10-13 01:46:38.688889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.381 qpair failed and we were unable to recover it. 00:35:53.381 [2024-10-13 01:46:38.689091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.381 [2024-10-13 01:46:38.689145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.381 qpair failed and we were unable to recover it. 00:35:53.381 [2024-10-13 01:46:38.689241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.381 [2024-10-13 01:46:38.689271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.381 qpair failed and we were unable to recover it. 00:35:53.381 [2024-10-13 01:46:38.689403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.381 [2024-10-13 01:46:38.689430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.381 qpair failed and we were unable to recover it. 00:35:53.381 [2024-10-13 01:46:38.689552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.381 [2024-10-13 01:46:38.689583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.381 qpair failed and we were unable to recover it. 00:35:53.381 [2024-10-13 01:46:38.689712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.381 [2024-10-13 01:46:38.689756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.381 qpair failed and we were unable to recover it. 00:35:53.381 [2024-10-13 01:46:38.689872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.381 [2024-10-13 01:46:38.689915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.381 qpair failed and we were unable to recover it. 00:35:53.381 [2024-10-13 01:46:38.690033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.381 [2024-10-13 01:46:38.690062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.381 qpair failed and we were unable to recover it. 
00:35:53.381 [2024-10-13 01:46:38.690192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.381 [2024-10-13 01:46:38.690220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.381 qpair failed and we were unable to recover it. 00:35:53.381 [2024-10-13 01:46:38.690345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.381 [2024-10-13 01:46:38.690372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.381 qpair failed and we were unable to recover it. 00:35:53.381 [2024-10-13 01:46:38.690477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.381 [2024-10-13 01:46:38.690506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.381 qpair failed and we were unable to recover it. 00:35:53.381 [2024-10-13 01:46:38.690655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.381 [2024-10-13 01:46:38.690683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.381 qpair failed and we were unable to recover it. 00:35:53.381 [2024-10-13 01:46:38.690818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.381 [2024-10-13 01:46:38.690848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.381 qpair failed and we were unable to recover it. 00:35:53.381 [2024-10-13 01:46:38.691009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.381 [2024-10-13 01:46:38.691038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.381 qpair failed and we were unable to recover it. 00:35:53.381 [2024-10-13 01:46:38.691225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.381 [2024-10-13 01:46:38.691254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.381 qpair failed and we were unable to recover it. 00:35:53.381 [2024-10-13 01:46:38.691410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.381 [2024-10-13 01:46:38.691435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.381 qpair failed and we were unable to recover it. 00:35:53.381 [2024-10-13 01:46:38.691576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.381 [2024-10-13 01:46:38.691602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.381 qpair failed and we were unable to recover it. 00:35:53.381 [2024-10-13 01:46:38.691759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.381 [2024-10-13 01:46:38.691788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.381 qpair failed and we were unable to recover it. 
00:35:53.381 [2024-10-13 01:46:38.691896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.381 [2024-10-13 01:46:38.691939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.381 qpair failed and we were unable to recover it. 00:35:53.381 [2024-10-13 01:46:38.692026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.381 [2024-10-13 01:46:38.692055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.381 qpair failed and we were unable to recover it. 00:35:53.381 [2024-10-13 01:46:38.692214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.381 [2024-10-13 01:46:38.692242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.381 qpair failed and we were unable to recover it. 00:35:53.381 [2024-10-13 01:46:38.692409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.381 [2024-10-13 01:46:38.692434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.381 qpair failed and we were unable to recover it. 00:35:53.381 [2024-10-13 01:46:38.692576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.381 [2024-10-13 01:46:38.692605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.381 qpair failed and we were unable to recover it. 00:35:53.381 [2024-10-13 01:46:38.692691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.381 [2024-10-13 01:46:38.692718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.381 qpair failed and we were unable to recover it. 00:35:53.381 [2024-10-13 01:46:38.692899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.381 [2024-10-13 01:46:38.692926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.381 qpair failed and we were unable to recover it. 00:35:53.381 [2024-10-13 01:46:38.693036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.381 [2024-10-13 01:46:38.693062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.381 qpair failed and we were unable to recover it. 00:35:53.381 [2024-10-13 01:46:38.693235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.381 [2024-10-13 01:46:38.693281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.381 qpair failed and we were unable to recover it. 00:35:53.381 [2024-10-13 01:46:38.693431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.381 [2024-10-13 01:46:38.693477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.381 qpair failed and we were unable to recover it. 
00:35:53.381 [2024-10-13 01:46:38.693627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.381 [2024-10-13 01:46:38.693655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.381 qpair failed and we were unable to recover it. 00:35:53.381 [2024-10-13 01:46:38.693746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.381 [2024-10-13 01:46:38.693771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.381 qpair failed and we were unable to recover it. 00:35:53.382 [2024-10-13 01:46:38.693886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.382 [2024-10-13 01:46:38.693911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.382 qpair failed and we were unable to recover it. 00:35:53.382 [2024-10-13 01:46:38.694064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.382 [2024-10-13 01:46:38.694126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.382 qpair failed and we were unable to recover it. 00:35:53.382 [2024-10-13 01:46:38.694251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.382 [2024-10-13 01:46:38.694279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.382 qpair failed and we were unable to recover it. 00:35:53.382 [2024-10-13 01:46:38.694439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.382 [2024-10-13 01:46:38.694477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.382 qpair failed and we were unable to recover it. 00:35:53.382 [2024-10-13 01:46:38.694651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.382 [2024-10-13 01:46:38.694681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.382 qpair failed and we were unable to recover it. 00:35:53.382 [2024-10-13 01:46:38.694814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.382 [2024-10-13 01:46:38.694862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.382 qpair failed and we were unable to recover it. 00:35:53.382 [2024-10-13 01:46:38.694949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.382 [2024-10-13 01:46:38.694977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.382 qpair failed and we were unable to recover it. 00:35:53.382 [2024-10-13 01:46:38.695117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.382 [2024-10-13 01:46:38.695144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.382 qpair failed and we were unable to recover it. 
00:35:53.382 [2024-10-13 01:46:38.695261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.382 [2024-10-13 01:46:38.695287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.382 qpair failed and we were unable to recover it. 00:35:53.382 [2024-10-13 01:46:38.695404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.382 [2024-10-13 01:46:38.695436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.382 qpair failed and we were unable to recover it. 00:35:53.382 [2024-10-13 01:46:38.695565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.382 [2024-10-13 01:46:38.695593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.382 qpair failed and we were unable to recover it. 00:35:53.382 [2024-10-13 01:46:38.695680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.382 [2024-10-13 01:46:38.695707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.382 qpair failed and we were unable to recover it. 00:35:53.382 [2024-10-13 01:46:38.695851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.382 [2024-10-13 01:46:38.695877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.382 qpair failed and we were unable to recover it. 00:35:53.382 [2024-10-13 01:46:38.695987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.382 [2024-10-13 01:46:38.696013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.382 qpair failed and we were unable to recover it. 00:35:53.382 [2024-10-13 01:46:38.696130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.382 [2024-10-13 01:46:38.696157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.382 qpair failed and we were unable to recover it. 00:35:53.382 [2024-10-13 01:46:38.696241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.382 [2024-10-13 01:46:38.696268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.382 qpair failed and we were unable to recover it. 00:35:53.382 [2024-10-13 01:46:38.696386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.382 [2024-10-13 01:46:38.696413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.382 qpair failed and we were unable to recover it. 00:35:53.382 [2024-10-13 01:46:38.696582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.382 [2024-10-13 01:46:38.696623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.382 qpair failed and we were unable to recover it. 
00:35:53.382 [2024-10-13 01:46:38.696759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.382 [2024-10-13 01:46:38.696787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.382 qpair failed and we were unable to recover it. 00:35:53.382 [2024-10-13 01:46:38.696868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.382 [2024-10-13 01:46:38.696894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.382 qpair failed and we were unable to recover it. 00:35:53.382 [2024-10-13 01:46:38.696968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.382 [2024-10-13 01:46:38.696994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.382 qpair failed and we were unable to recover it. 00:35:53.382 [2024-10-13 01:46:38.697133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.382 [2024-10-13 01:46:38.697159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.382 qpair failed and we were unable to recover it. 00:35:53.382 [2024-10-13 01:46:38.697270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.382 [2024-10-13 01:46:38.697295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.382 qpair failed and we were unable to recover it. 00:35:53.382 [2024-10-13 01:46:38.697442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.382 [2024-10-13 01:46:38.697477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.382 qpair failed and we were unable to recover it. 00:35:53.382 [2024-10-13 01:46:38.697620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.382 [2024-10-13 01:46:38.697647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.382 qpair failed and we were unable to recover it. 00:35:53.382 [2024-10-13 01:46:38.697761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.382 [2024-10-13 01:46:38.697788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.382 qpair failed and we were unable to recover it. 00:35:53.382 [2024-10-13 01:46:38.697899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.382 [2024-10-13 01:46:38.697926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.382 qpair failed and we were unable to recover it. 00:35:53.382 [2024-10-13 01:46:38.698046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.382 [2024-10-13 01:46:38.698072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.382 qpair failed and we were unable to recover it. 
00:35:53.382 [2024-10-13 01:46:38.698180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.382 [2024-10-13 01:46:38.698207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.382 qpair failed and we were unable to recover it. 00:35:53.382 [2024-10-13 01:46:38.698304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.382 [2024-10-13 01:46:38.698330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.382 qpair failed and we were unable to recover it. 00:35:53.382 [2024-10-13 01:46:38.698418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.382 [2024-10-13 01:46:38.698444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.382 qpair failed and we were unable to recover it. 00:35:53.382 [2024-10-13 01:46:38.698570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.382 [2024-10-13 01:46:38.698597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.382 qpair failed and we were unable to recover it. 00:35:53.383 [2024-10-13 01:46:38.698687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.383 [2024-10-13 01:46:38.698714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.383 qpair failed and we were unable to recover it. 00:35:53.383 [2024-10-13 01:46:38.698808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.383 [2024-10-13 01:46:38.698835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.383 qpair failed and we were unable to recover it. 00:35:53.383 [2024-10-13 01:46:38.698945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.383 [2024-10-13 01:46:38.698972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.383 qpair failed and we were unable to recover it. 00:35:53.383 [2024-10-13 01:46:38.699092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.383 [2024-10-13 01:46:38.699118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.383 qpair failed and we were unable to recover it. 00:35:53.383 [2024-10-13 01:46:38.699238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.383 [2024-10-13 01:46:38.699279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.383 qpair failed and we were unable to recover it. 00:35:53.383 [2024-10-13 01:46:38.699407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.383 [2024-10-13 01:46:38.699447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.383 qpair failed and we were unable to recover it. 
00:35:53.383 [2024-10-13 01:46:38.699580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.383 [2024-10-13 01:46:38.699608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.383 qpair failed and we were unable to recover it. 00:35:53.383 [2024-10-13 01:46:38.699724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.383 [2024-10-13 01:46:38.699750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.383 qpair failed and we were unable to recover it. 00:35:53.383 [2024-10-13 01:46:38.699887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.383 [2024-10-13 01:46:38.699912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.383 qpair failed and we were unable to recover it. 00:35:53.383 [2024-10-13 01:46:38.699994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.383 [2024-10-13 01:46:38.700020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.383 qpair failed and we were unable to recover it. 00:35:53.383 [2024-10-13 01:46:38.700212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.383 [2024-10-13 01:46:38.700242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.383 qpair failed and we were unable to recover it. 00:35:53.383 [2024-10-13 01:46:38.700344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.383 [2024-10-13 01:46:38.700372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.383 qpair failed and we were unable to recover it. 00:35:53.383 [2024-10-13 01:46:38.700455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.383 [2024-10-13 01:46:38.700510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.383 qpair failed and we were unable to recover it. 00:35:53.383 [2024-10-13 01:46:38.700642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.383 [2024-10-13 01:46:38.700671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.383 qpair failed and we were unable to recover it. 00:35:53.383 [2024-10-13 01:46:38.700791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.383 [2024-10-13 01:46:38.700820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.383 qpair failed and we were unable to recover it. 00:35:53.383 [2024-10-13 01:46:38.700943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.383 [2024-10-13 01:46:38.700971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.383 qpair failed and we were unable to recover it. 
00:35:53.383 [2024-10-13 01:46:38.701091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.383 [2024-10-13 01:46:38.701119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.383 qpair failed and we were unable to recover it. 00:35:53.383 [2024-10-13 01:46:38.701242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.383 [2024-10-13 01:46:38.701277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.383 qpair failed and we were unable to recover it. 00:35:53.383 [2024-10-13 01:46:38.701419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.383 [2024-10-13 01:46:38.701449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.383 qpair failed and we were unable to recover it. 00:35:53.383 [2024-10-13 01:46:38.701582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.383 [2024-10-13 01:46:38.701611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.383 qpair failed and we were unable to recover it. 00:35:53.383 [2024-10-13 01:46:38.701750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.383 [2024-10-13 01:46:38.701794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.383 qpair failed and we were unable to recover it. 00:35:53.383 [2024-10-13 01:46:38.701902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.383 [2024-10-13 01:46:38.701929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.383 qpair failed and we were unable to recover it. 00:35:53.383 [2024-10-13 01:46:38.702083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.383 [2024-10-13 01:46:38.702127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.383 qpair failed and we were unable to recover it. 00:35:53.383 [2024-10-13 01:46:38.702263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.383 [2024-10-13 01:46:38.702290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.383 qpair failed and we were unable to recover it. 00:35:53.383 [2024-10-13 01:46:38.702407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.383 [2024-10-13 01:46:38.702435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.383 qpair failed and we were unable to recover it. 00:35:53.383 [2024-10-13 01:46:38.702552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.383 [2024-10-13 01:46:38.702592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.383 qpair failed and we were unable to recover it. 
00:35:53.383 [2024-10-13 01:46:38.702700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.383 [2024-10-13 01:46:38.702760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.383 qpair failed and we were unable to recover it. 00:35:53.383 [2024-10-13 01:46:38.702975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.383 [2024-10-13 01:46:38.703032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.383 qpair failed and we were unable to recover it. 00:35:53.383 [2024-10-13 01:46:38.703216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.383 [2024-10-13 01:46:38.703273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.383 qpair failed and we were unable to recover it. 00:35:53.383 [2024-10-13 01:46:38.703405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.383 [2024-10-13 01:46:38.703432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.383 qpair failed and we were unable to recover it. 00:35:53.383 [2024-10-13 01:46:38.703538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.383 [2024-10-13 01:46:38.703565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.383 qpair failed and we were unable to recover it. 00:35:53.383 [2024-10-13 01:46:38.703653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.383 [2024-10-13 01:46:38.703680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.383 qpair failed and we were unable to recover it. 00:35:53.383 [2024-10-13 01:46:38.703769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.383 [2024-10-13 01:46:38.703797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.383 qpair failed and we were unable to recover it. 00:35:53.383 [2024-10-13 01:46:38.703895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.383 [2024-10-13 01:46:38.703924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.383 qpair failed and we were unable to recover it. 00:35:53.383 [2024-10-13 01:46:38.704171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.383 [2024-10-13 01:46:38.704228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.383 qpair failed and we were unable to recover it. 00:35:53.383 [2024-10-13 01:46:38.704362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.383 [2024-10-13 01:46:38.704388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.383 qpair failed and we were unable to recover it. 
00:35:53.383 [2024-10-13 01:46:38.704526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.383 [2024-10-13 01:46:38.704554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.383 qpair failed and we were unable to recover it. 00:35:53.383 [2024-10-13 01:46:38.704639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.383 [2024-10-13 01:46:38.704664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.383 qpair failed and we were unable to recover it. 00:35:53.384 [2024-10-13 01:46:38.704749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.384 [2024-10-13 01:46:38.704793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.384 qpair failed and we were unable to recover it. 00:35:53.384 [2024-10-13 01:46:38.704910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.384 [2024-10-13 01:46:38.704938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.384 qpair failed and we were unable to recover it. 00:35:53.384 [2024-10-13 01:46:38.705061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.384 [2024-10-13 01:46:38.705092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.384 qpair failed and we were unable to recover it. 00:35:53.384 [2024-10-13 01:46:38.705223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.384 [2024-10-13 01:46:38.705253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.384 qpair failed and we were unable to recover it. 00:35:53.384 [2024-10-13 01:46:38.705393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.384 [2024-10-13 01:46:38.705420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.384 qpair failed and we were unable to recover it. 00:35:53.384 [2024-10-13 01:46:38.705533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.384 [2024-10-13 01:46:38.705560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.384 qpair failed and we were unable to recover it. 00:35:53.384 [2024-10-13 01:46:38.705662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.384 [2024-10-13 01:46:38.705701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.384 qpair failed and we were unable to recover it. 00:35:53.384 [2024-10-13 01:46:38.705807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.384 [2024-10-13 01:46:38.705837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.384 qpair failed and we were unable to recover it. 
00:35:53.384 [2024-10-13 01:46:38.705991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.384 [2024-10-13 01:46:38.706021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.384 qpair failed and we were unable to recover it. 00:35:53.384 [2024-10-13 01:46:38.706143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.384 [2024-10-13 01:46:38.706172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.384 qpair failed and we were unable to recover it. 00:35:53.384 [2024-10-13 01:46:38.706349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.384 [2024-10-13 01:46:38.706378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.384 qpair failed and we were unable to recover it. 00:35:53.384 [2024-10-13 01:46:38.706547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.384 [2024-10-13 01:46:38.706574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.384 qpair failed and we were unable to recover it. 00:35:53.384 [2024-10-13 01:46:38.706665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.384 [2024-10-13 01:46:38.706710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.384 qpair failed and we were unable to recover it. 00:35:53.384 [2024-10-13 01:46:38.706861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.384 [2024-10-13 01:46:38.706890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.384 qpair failed and we were unable to recover it. 00:35:53.384 [2024-10-13 01:46:38.707093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.384 [2024-10-13 01:46:38.707119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.384 qpair failed and we were unable to recover it. 00:35:53.384 [2024-10-13 01:46:38.707252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.384 [2024-10-13 01:46:38.707281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.384 qpair failed and we were unable to recover it. 00:35:53.384 [2024-10-13 01:46:38.707406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.384 [2024-10-13 01:46:38.707432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.384 qpair failed and we were unable to recover it. 00:35:53.384 [2024-10-13 01:46:38.707567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.384 [2024-10-13 01:46:38.707594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.384 qpair failed and we were unable to recover it. 
00:35:53.384 [2024-10-13 01:46:38.707712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.384 [2024-10-13 01:46:38.707738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.384 qpair failed and we were unable to recover it. 00:35:53.384 [2024-10-13 01:46:38.707846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.384 [2024-10-13 01:46:38.707872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.384 qpair failed and we were unable to recover it. 00:35:53.384 [2024-10-13 01:46:38.707971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.384 [2024-10-13 01:46:38.707998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.384 qpair failed and we were unable to recover it. 00:35:53.384 [2024-10-13 01:46:38.708134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.384 [2024-10-13 01:46:38.708163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.384 qpair failed and we were unable to recover it. 00:35:53.384 [2024-10-13 01:46:38.708294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.384 [2024-10-13 01:46:38.708323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.384 qpair failed and we were unable to recover it. 00:35:53.384 [2024-10-13 01:46:38.708414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.384 [2024-10-13 01:46:38.708459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.384 qpair failed and we were unable to recover it. 00:35:53.384 [2024-10-13 01:46:38.708629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.384 [2024-10-13 01:46:38.708670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.384 qpair failed and we were unable to recover it. 00:35:53.384 [2024-10-13 01:46:38.708763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.384 [2024-10-13 01:46:38.708792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.384 qpair failed and we were unable to recover it. 00:35:53.384 [2024-10-13 01:46:38.708885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.384 [2024-10-13 01:46:38.708913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.384 qpair failed and we were unable to recover it. 00:35:53.384 [2024-10-13 01:46:38.709048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.384 [2024-10-13 01:46:38.709079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.384 qpair failed and we were unable to recover it. 
00:35:53.384 [2024-10-13 01:46:38.709267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.384 [2024-10-13 01:46:38.709324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.384 qpair failed and we were unable to recover it. 00:35:53.384 [2024-10-13 01:46:38.709456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.384 [2024-10-13 01:46:38.709504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.384 qpair failed and we were unable to recover it. 00:35:53.384 [2024-10-13 01:46:38.709601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.384 [2024-10-13 01:46:38.709629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.384 qpair failed and we were unable to recover it. 00:35:53.384 [2024-10-13 01:46:38.709756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.384 [2024-10-13 01:46:38.709785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.384 qpair failed and we were unable to recover it. 00:35:53.384 [2024-10-13 01:46:38.709879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.384 [2024-10-13 01:46:38.709904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.384 qpair failed and we were unable to recover it. 00:35:53.384 [2024-10-13 01:46:38.710046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.384 [2024-10-13 01:46:38.710075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.384 qpair failed and we were unable to recover it. 00:35:53.384 [2024-10-13 01:46:38.710203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.384 [2024-10-13 01:46:38.710232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.384 qpair failed and we were unable to recover it. 00:35:53.384 [2024-10-13 01:46:38.710396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.384 [2024-10-13 01:46:38.710422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.384 qpair failed and we were unable to recover it. 00:35:53.384 [2024-10-13 01:46:38.710537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.384 [2024-10-13 01:46:38.710564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.384 qpair failed and we were unable to recover it. 00:35:53.384 [2024-10-13 01:46:38.710679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.384 [2024-10-13 01:46:38.710706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.384 qpair failed and we were unable to recover it. 
00:35:53.384 [2024-10-13 01:46:38.710842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.384 [2024-10-13 01:46:38.710872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.384 qpair failed and we were unable to recover it. 00:35:53.384 [2024-10-13 01:46:38.710965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.384 [2024-10-13 01:46:38.710995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.384 qpair failed and we were unable to recover it. 00:35:53.385 [2024-10-13 01:46:38.711119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.385 [2024-10-13 01:46:38.711161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.385 qpair failed and we were unable to recover it. 00:35:53.385 [2024-10-13 01:46:38.711254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.385 [2024-10-13 01:46:38.711295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.385 qpair failed and we were unable to recover it. 00:35:53.385 [2024-10-13 01:46:38.711398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.385 [2024-10-13 01:46:38.711427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.385 qpair failed and we were unable to recover it. 00:35:53.385 [2024-10-13 01:46:38.711552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.385 [2024-10-13 01:46:38.711580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.385 qpair failed and we were unable to recover it. 00:35:53.385 [2024-10-13 01:46:38.711726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.385 [2024-10-13 01:46:38.711756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.385 qpair failed and we were unable to recover it. 00:35:53.385 [2024-10-13 01:46:38.711893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.385 [2024-10-13 01:46:38.711937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.385 qpair failed and we were unable to recover it. 00:35:53.385 [2024-10-13 01:46:38.712128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.385 [2024-10-13 01:46:38.712178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.385 qpair failed and we were unable to recover it. 00:35:53.385 [2024-10-13 01:46:38.712308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.385 [2024-10-13 01:46:38.712337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.385 qpair failed and we were unable to recover it. 
00:35:53.385 [2024-10-13 01:46:38.712501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.385 [2024-10-13 01:46:38.712527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.385 qpair failed and we were unable to recover it. 00:35:53.385 [2024-10-13 01:46:38.712669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.385 [2024-10-13 01:46:38.712696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.385 qpair failed and we were unable to recover it. 00:35:53.385 [2024-10-13 01:46:38.712811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.385 [2024-10-13 01:46:38.712839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.385 qpair failed and we were unable to recover it. 00:35:53.385 [2024-10-13 01:46:38.712993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.385 [2024-10-13 01:46:38.713063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.385 qpair failed and we were unable to recover it. 00:35:53.385 [2024-10-13 01:46:38.713199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.385 [2024-10-13 01:46:38.713241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.385 qpair failed and we were unable to recover it. 00:35:53.385 [2024-10-13 01:46:38.713370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.385 [2024-10-13 01:46:38.713405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.385 qpair failed and we were unable to recover it. 00:35:53.385 [2024-10-13 01:46:38.713551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.385 [2024-10-13 01:46:38.713578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.385 qpair failed and we were unable to recover it. 00:35:53.385 [2024-10-13 01:46:38.713692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.385 [2024-10-13 01:46:38.713718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.385 qpair failed and we were unable to recover it. 00:35:53.385 [2024-10-13 01:46:38.713841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.385 [2024-10-13 01:46:38.713870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.385 qpair failed and we were unable to recover it. 00:35:53.385 [2024-10-13 01:46:38.714104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.385 [2024-10-13 01:46:38.714151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.385 qpair failed and we were unable to recover it. 
00:35:53.385 [2024-10-13 01:46:38.714299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.385 [2024-10-13 01:46:38.714328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.385 qpair failed and we were unable to recover it. 00:35:53.385 [2024-10-13 01:46:38.714438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.385 [2024-10-13 01:46:38.714464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.385 qpair failed and we were unable to recover it. 00:35:53.385 [2024-10-13 01:46:38.714600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.385 [2024-10-13 01:46:38.714640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.385 qpair failed and we were unable to recover it. 00:35:53.385 [2024-10-13 01:46:38.714793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.385 [2024-10-13 01:46:38.714839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.385 qpair failed and we were unable to recover it. 00:35:53.385 [2024-10-13 01:46:38.714952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.385 [2024-10-13 01:46:38.714984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.385 qpair failed and we were unable to recover it. 00:35:53.385 [2024-10-13 01:46:38.715116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.385 [2024-10-13 01:46:38.715146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.385 qpair failed and we were unable to recover it. 00:35:53.385 [2024-10-13 01:46:38.715272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.385 [2024-10-13 01:46:38.715317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.385 qpair failed and we were unable to recover it. 00:35:53.385 [2024-10-13 01:46:38.715398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.385 [2024-10-13 01:46:38.715426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.385 qpair failed and we were unable to recover it. 00:35:53.385 [2024-10-13 01:46:38.715576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.385 [2024-10-13 01:46:38.715605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.385 qpair failed and we were unable to recover it. 00:35:53.385 [2024-10-13 01:46:38.715698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.385 [2024-10-13 01:46:38.715724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.385 qpair failed and we were unable to recover it. 
00:35:53.385 [2024-10-13 01:46:38.715801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.385 [2024-10-13 01:46:38.715845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.385 qpair failed and we were unable to recover it. 00:35:53.385 [2024-10-13 01:46:38.716033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.385 [2024-10-13 01:46:38.716059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.385 qpair failed and we were unable to recover it. 00:35:53.385 [2024-10-13 01:46:38.716144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.385 [2024-10-13 01:46:38.716170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.385 qpair failed and we were unable to recover it. 00:35:53.385 [2024-10-13 01:46:38.716291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.385 [2024-10-13 01:46:38.716317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.385 qpair failed and we were unable to recover it. 00:35:53.385 [2024-10-13 01:46:38.716403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.385 [2024-10-13 01:46:38.716429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.385 qpair failed and we were unable to recover it. 00:35:53.385 [2024-10-13 01:46:38.716559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.385 [2024-10-13 01:46:38.716590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.385 qpair failed and we were unable to recover it. 00:35:53.385 [2024-10-13 01:46:38.716709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.385 [2024-10-13 01:46:38.716737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.385 qpair failed and we were unable to recover it. 00:35:53.385 [2024-10-13 01:46:38.716867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.385 [2024-10-13 01:46:38.716897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.385 qpair failed and we were unable to recover it. 00:35:53.385 [2024-10-13 01:46:38.717034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.385 [2024-10-13 01:46:38.717080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.385 qpair failed and we were unable to recover it. 00:35:53.385 [2024-10-13 01:46:38.717211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.385 [2024-10-13 01:46:38.717241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.386 qpair failed and we were unable to recover it. 
00:35:53.386 [2024-10-13 01:46:38.717348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.386 [2024-10-13 01:46:38.717374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.386 qpair failed and we were unable to recover it. 00:35:53.386 [2024-10-13 01:46:38.717517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.386 [2024-10-13 01:46:38.717545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.386 qpair failed and we were unable to recover it. 00:35:53.386 [2024-10-13 01:46:38.717633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.386 [2024-10-13 01:46:38.717660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.386 qpair failed and we were unable to recover it. 00:35:53.386 [2024-10-13 01:46:38.717750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.386 [2024-10-13 01:46:38.717776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.386 qpair failed and we were unable to recover it. 00:35:53.386 [2024-10-13 01:46:38.717862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.386 [2024-10-13 01:46:38.717889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.386 qpair failed and we were unable to recover it. 00:35:53.386 [2024-10-13 01:46:38.718032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.386 [2024-10-13 01:46:38.718059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.386 qpair failed and we were unable to recover it. 00:35:53.386 [2024-10-13 01:46:38.718199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.386 [2024-10-13 01:46:38.718225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.386 qpair failed and we were unable to recover it. 00:35:53.386 [2024-10-13 01:46:38.718353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.386 [2024-10-13 01:46:38.718393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.386 qpair failed and we were unable to recover it. 00:35:53.386 [2024-10-13 01:46:38.718511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.386 [2024-10-13 01:46:38.718557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.386 qpair failed and we were unable to recover it. 00:35:53.386 [2024-10-13 01:46:38.718703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.386 [2024-10-13 01:46:38.718735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.386 qpair failed and we were unable to recover it. 
00:35:53.386 [2024-10-13 01:46:38.718853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.386 [2024-10-13 01:46:38.718884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.386 qpair failed and we were unable to recover it. 00:35:53.386 [2024-10-13 01:46:38.718985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.386 [2024-10-13 01:46:38.719015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.386 qpair failed and we were unable to recover it. 00:35:53.386 [2024-10-13 01:46:38.719111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.386 [2024-10-13 01:46:38.719140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.386 qpair failed and we were unable to recover it. 00:35:53.386 [2024-10-13 01:46:38.719295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.386 [2024-10-13 01:46:38.719325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.386 qpair failed and we were unable to recover it. 00:35:53.386 [2024-10-13 01:46:38.719478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.386 [2024-10-13 01:46:38.719537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.386 qpair failed and we were unable to recover it. 00:35:53.386 [2024-10-13 01:46:38.719629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.386 [2024-10-13 01:46:38.719657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.386 qpair failed and we were unable to recover it. 00:35:53.386 [2024-10-13 01:46:38.719808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.386 [2024-10-13 01:46:38.719838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.386 qpair failed and we were unable to recover it. 00:35:53.386 [2024-10-13 01:46:38.719928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.386 [2024-10-13 01:46:38.719959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.386 qpair failed and we were unable to recover it. 00:35:53.386 [2024-10-13 01:46:38.720107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.386 [2024-10-13 01:46:38.720155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.386 qpair failed and we were unable to recover it. 00:35:53.386 [2024-10-13 01:46:38.720330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.386 [2024-10-13 01:46:38.720376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.386 qpair failed and we were unable to recover it. 
00:35:53.386 [2024-10-13 01:46:38.720461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.386 [2024-10-13 01:46:38.720499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.386 qpair failed and we were unable to recover it. 00:35:53.386 [2024-10-13 01:46:38.720615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.386 [2024-10-13 01:46:38.720642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.386 qpair failed and we were unable to recover it. 00:35:53.386 [2024-10-13 01:46:38.720778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.386 [2024-10-13 01:46:38.720826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.386 qpair failed and we were unable to recover it. 00:35:53.386 [2024-10-13 01:46:38.720967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.386 [2024-10-13 01:46:38.721012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.386 qpair failed and we were unable to recover it. 00:35:53.386 [2024-10-13 01:46:38.721182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.386 [2024-10-13 01:46:38.721247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.386 qpair failed and we were unable to recover it. 00:35:53.386 [2024-10-13 01:46:38.721356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.386 [2024-10-13 01:46:38.721387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.386 qpair failed and we were unable to recover it. 00:35:53.386 [2024-10-13 01:46:38.721530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.386 [2024-10-13 01:46:38.721558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.386 qpair failed and we were unable to recover it. 00:35:53.386 [2024-10-13 01:46:38.721695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.386 [2024-10-13 01:46:38.721724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.386 qpair failed and we were unable to recover it. 00:35:53.386 [2024-10-13 01:46:38.721850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.386 [2024-10-13 01:46:38.721880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.386 qpair failed and we were unable to recover it. 00:35:53.386 [2024-10-13 01:46:38.722012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.386 [2024-10-13 01:46:38.722042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.386 qpair failed and we were unable to recover it. 
00:35:53.386 [2024-10-13 01:46:38.722146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.386 [2024-10-13 01:46:38.722174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.386 qpair failed and we were unable to recover it. 00:35:53.386 [2024-10-13 01:46:38.722315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.386 [2024-10-13 01:46:38.722341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.386 qpair failed and we were unable to recover it. 00:35:53.386 [2024-10-13 01:46:38.722452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.386 [2024-10-13 01:46:38.722484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.386 qpair failed and we were unable to recover it. 00:35:53.386 [2024-10-13 01:46:38.722595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.386 [2024-10-13 01:46:38.722621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.386 qpair failed and we were unable to recover it. 00:35:53.386 [2024-10-13 01:46:38.722701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.386 [2024-10-13 01:46:38.722727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.386 qpair failed and we were unable to recover it. 00:35:53.386 [2024-10-13 01:46:38.722892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.386 [2024-10-13 01:46:38.722920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.386 qpair failed and we were unable to recover it. 00:35:53.386 [2024-10-13 01:46:38.723105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.386 [2024-10-13 01:46:38.723134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.386 qpair failed and we were unable to recover it. 00:35:53.386 [2024-10-13 01:46:38.723227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.387 [2024-10-13 01:46:38.723256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.387 qpair failed and we were unable to recover it. 00:35:53.387 [2024-10-13 01:46:38.723381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.387 [2024-10-13 01:46:38.723410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.387 qpair failed and we were unable to recover it. 00:35:53.387 [2024-10-13 01:46:38.723546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.387 [2024-10-13 01:46:38.723572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.387 qpair failed and we were unable to recover it. 
00:35:53.387 [2024-10-13 01:46:38.723694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.387 [2024-10-13 01:46:38.723720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.387 qpair failed and we were unable to recover it. 00:35:53.387 [2024-10-13 01:46:38.723849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.387 [2024-10-13 01:46:38.723878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.387 qpair failed and we were unable to recover it. 00:35:53.387 [2024-10-13 01:46:38.724028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.387 [2024-10-13 01:46:38.724057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.387 qpair failed and we were unable to recover it. 00:35:53.387 [2024-10-13 01:46:38.724185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.387 [2024-10-13 01:46:38.724214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.387 qpair failed and we were unable to recover it. 00:35:53.387 [2024-10-13 01:46:38.724333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.387 [2024-10-13 01:46:38.724362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.387 qpair failed and we were unable to recover it. 00:35:53.387 [2024-10-13 01:46:38.724491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.387 [2024-10-13 01:46:38.724518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.387 qpair failed and we were unable to recover it. 00:35:53.387 [2024-10-13 01:46:38.724633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.387 [2024-10-13 01:46:38.724660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.387 qpair failed and we were unable to recover it. 00:35:53.387 [2024-10-13 01:46:38.724790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.387 [2024-10-13 01:46:38.724820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.387 qpair failed and we were unable to recover it. 00:35:53.387 [2024-10-13 01:46:38.725015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.387 [2024-10-13 01:46:38.725065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.387 qpair failed and we were unable to recover it. 00:35:53.387 [2024-10-13 01:46:38.725150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.387 [2024-10-13 01:46:38.725184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.387 qpair failed and we were unable to recover it. 
00:35:53.387 [2024-10-13 01:46:38.725337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.387 [2024-10-13 01:46:38.725367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.387 qpair failed and we were unable to recover it. 00:35:53.387 [2024-10-13 01:46:38.725465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.387 [2024-10-13 01:46:38.725497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.387 qpair failed and we were unable to recover it. 00:35:53.387 [2024-10-13 01:46:38.725618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.387 [2024-10-13 01:46:38.725644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.387 qpair failed and we were unable to recover it. 00:35:53.387 [2024-10-13 01:46:38.725756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.387 [2024-10-13 01:46:38.725782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.387 qpair failed and we were unable to recover it. 00:35:53.387 [2024-10-13 01:46:38.725894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.387 [2024-10-13 01:46:38.725936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.387 qpair failed and we were unable to recover it. 00:35:53.387 [2024-10-13 01:46:38.726089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.387 [2024-10-13 01:46:38.726118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.387 qpair failed and we were unable to recover it. 00:35:53.387 [2024-10-13 01:46:38.726243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.387 [2024-10-13 01:46:38.726272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.387 qpair failed and we were unable to recover it. 00:35:53.387 [2024-10-13 01:46:38.726371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.387 [2024-10-13 01:46:38.726397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.387 qpair failed and we were unable to recover it. 00:35:53.387 [2024-10-13 01:46:38.726527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.387 [2024-10-13 01:46:38.726568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.387 qpair failed and we were unable to recover it. 00:35:53.387 [2024-10-13 01:46:38.726699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.387 [2024-10-13 01:46:38.726741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.387 qpair failed and we were unable to recover it. 
00:35:53.387 [2024-10-13 01:46:38.726920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.387 [2024-10-13 01:46:38.726951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.387 qpair failed and we were unable to recover it. 00:35:53.387 [2024-10-13 01:46:38.727049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.387 [2024-10-13 01:46:38.727081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.387 qpair failed and we were unable to recover it. 00:35:53.387 [2024-10-13 01:46:38.727176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.387 [2024-10-13 01:46:38.727207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.387 qpair failed and we were unable to recover it. 00:35:53.387 [2024-10-13 01:46:38.727341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.387 [2024-10-13 01:46:38.727372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.387 qpair failed and we were unable to recover it. 00:35:53.387 [2024-10-13 01:46:38.727541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.387 [2024-10-13 01:46:38.727569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.387 qpair failed and we were unable to recover it. 00:35:53.387 [2024-10-13 01:46:38.727710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.387 [2024-10-13 01:46:38.727737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.387 qpair failed and we were unable to recover it. 00:35:53.387 [2024-10-13 01:46:38.727821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.387 [2024-10-13 01:46:38.727848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.387 qpair failed and we were unable to recover it. 00:35:53.387 [2024-10-13 01:46:38.727968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.387 [2024-10-13 01:46:38.727995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.387 qpair failed and we were unable to recover it. 00:35:53.387 [2024-10-13 01:46:38.728076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.387 [2024-10-13 01:46:38.728119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.387 qpair failed and we were unable to recover it. 00:35:53.387 [2024-10-13 01:46:38.728310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.387 [2024-10-13 01:46:38.728338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.387 qpair failed and we were unable to recover it. 
00:35:53.387 [2024-10-13 01:46:38.728466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.387 [2024-10-13 01:46:38.728525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.387 qpair failed and we were unable to recover it. 00:35:53.388 [2024-10-13 01:46:38.728649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.388 [2024-10-13 01:46:38.728676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.388 qpair failed and we were unable to recover it. 00:35:53.388 [2024-10-13 01:46:38.728789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.388 [2024-10-13 01:46:38.728815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.388 qpair failed and we were unable to recover it. 00:35:53.388 [2024-10-13 01:46:38.728955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.388 [2024-10-13 01:46:38.728984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.388 qpair failed and we were unable to recover it. 00:35:53.388 [2024-10-13 01:46:38.729088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.388 [2024-10-13 01:46:38.729117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.388 qpair failed and we were unable to recover it. 00:35:53.388 [2024-10-13 01:46:38.729244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.388 [2024-10-13 01:46:38.729272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.388 qpair failed and we were unable to recover it. 00:35:53.388 [2024-10-13 01:46:38.729436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.388 [2024-10-13 01:46:38.729480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.388 qpair failed and we were unable to recover it. 00:35:53.388 [2024-10-13 01:46:38.729625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.388 [2024-10-13 01:46:38.729653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.388 qpair failed and we were unable to recover it. 00:35:53.388 [2024-10-13 01:46:38.729772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.388 [2024-10-13 01:46:38.729799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.388 qpair failed and we were unable to recover it. 00:35:53.388 [2024-10-13 01:46:38.729916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.388 [2024-10-13 01:46:38.729943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.388 qpair failed and we were unable to recover it. 
00:35:53.388 [2024-10-13 01:46:38.730024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.388 [2024-10-13 01:46:38.730051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.388 qpair failed and we were unable to recover it. 00:35:53.388 [2024-10-13 01:46:38.730152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.388 [2024-10-13 01:46:38.730191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.388 qpair failed and we were unable to recover it. 00:35:53.388 [2024-10-13 01:46:38.730328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.388 [2024-10-13 01:46:38.730358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.388 qpair failed and we were unable to recover it. 00:35:53.388 [2024-10-13 01:46:38.730481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.388 [2024-10-13 01:46:38.730525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.388 qpair failed and we were unable to recover it. 00:35:53.388 [2024-10-13 01:46:38.730640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.388 [2024-10-13 01:46:38.730666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.388 qpair failed and we were unable to recover it. 00:35:53.388 [2024-10-13 01:46:38.730775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.388 [2024-10-13 01:46:38.730801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.388 qpair failed and we were unable to recover it. 00:35:53.388 [2024-10-13 01:46:38.730912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.388 [2024-10-13 01:46:38.730938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.388 qpair failed and we were unable to recover it. 00:35:53.388 [2024-10-13 01:46:38.731102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.388 [2024-10-13 01:46:38.731148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.388 qpair failed and we were unable to recover it. 00:35:53.388 [2024-10-13 01:46:38.731260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.388 [2024-10-13 01:46:38.731287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.388 qpair failed and we were unable to recover it. 00:35:53.388 [2024-10-13 01:46:38.731430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.388 [2024-10-13 01:46:38.731481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.388 qpair failed and we were unable to recover it. 
00:35:53.388 [2024-10-13 01:46:38.731615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.388 [2024-10-13 01:46:38.731643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.388 qpair failed and we were unable to recover it. 00:35:53.388 [2024-10-13 01:46:38.731792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.388 [2024-10-13 01:46:38.731818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.388 qpair failed and we were unable to recover it. 00:35:53.388 [2024-10-13 01:46:38.731943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.388 [2024-10-13 01:46:38.731989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.388 qpair failed and we were unable to recover it. 00:35:53.388 [2024-10-13 01:46:38.732137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.388 [2024-10-13 01:46:38.732187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.388 qpair failed and we were unable to recover it. 00:35:53.388 [2024-10-13 01:46:38.732295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.388 [2024-10-13 01:46:38.732321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.388 qpair failed and we were unable to recover it. 00:35:53.388 [2024-10-13 01:46:38.732458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.388 [2024-10-13 01:46:38.732495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.388 qpair failed and we were unable to recover it. 00:35:53.388 [2024-10-13 01:46:38.732627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.388 [2024-10-13 01:46:38.732654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.388 qpair failed and we were unable to recover it. 00:35:53.388 [2024-10-13 01:46:38.732769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.388 [2024-10-13 01:46:38.732795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.388 qpair failed and we were unable to recover it. 00:35:53.388 [2024-10-13 01:46:38.732878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.388 [2024-10-13 01:46:38.732905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.388 qpair failed and we were unable to recover it. 00:35:53.388 [2024-10-13 01:46:38.733050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.388 [2024-10-13 01:46:38.733077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.388 qpair failed and we were unable to recover it. 
00:35:53.388 [2024-10-13 01:46:38.733239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.388 [2024-10-13 01:46:38.733269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.388 qpair failed and we were unable to recover it. 00:35:53.388 [2024-10-13 01:46:38.733401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.388 [2024-10-13 01:46:38.733430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.388 qpair failed and we were unable to recover it. 00:35:53.388 [2024-10-13 01:46:38.733582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.388 [2024-10-13 01:46:38.733611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.388 qpair failed and we were unable to recover it. 00:35:53.388 [2024-10-13 01:46:38.733709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.388 [2024-10-13 01:46:38.733754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.388 qpair failed and we were unable to recover it. 00:35:53.388 [2024-10-13 01:46:38.733871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.388 [2024-10-13 01:46:38.733918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.388 qpair failed and we were unable to recover it. 00:35:53.388 [2024-10-13 01:46:38.734051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.388 [2024-10-13 01:46:38.734096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.388 qpair failed and we were unable to recover it. 00:35:53.388 [2024-10-13 01:46:38.734225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.388 [2024-10-13 01:46:38.734254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.388 qpair failed and we were unable to recover it. 00:35:53.388 [2024-10-13 01:46:38.734420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.388 [2024-10-13 01:46:38.734447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.388 qpair failed and we were unable to recover it. 00:35:53.388 [2024-10-13 01:46:38.734582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.388 [2024-10-13 01:46:38.734610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.389 qpair failed and we were unable to recover it. 00:35:53.389 [2024-10-13 01:46:38.734695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.389 [2024-10-13 01:46:38.734721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.389 qpair failed and we were unable to recover it. 
00:35:53.389 [2024-10-13 01:46:38.734888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.389 [2024-10-13 01:46:38.734917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.389 qpair failed and we were unable to recover it. 00:35:53.389 [2024-10-13 01:46:38.735031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.389 [2024-10-13 01:46:38.735057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.389 qpair failed and we were unable to recover it. 00:35:53.389 [2024-10-13 01:46:38.735237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.389 [2024-10-13 01:46:38.735291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.389 qpair failed and we were unable to recover it. 00:35:53.389 [2024-10-13 01:46:38.735414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.389 [2024-10-13 01:46:38.735443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.389 qpair failed and we were unable to recover it. 00:35:53.389 [2024-10-13 01:46:38.735607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.389 [2024-10-13 01:46:38.735634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.389 qpair failed and we were unable to recover it. 00:35:53.389 [2024-10-13 01:46:38.735720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.389 [2024-10-13 01:46:38.735746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.389 qpair failed and we were unable to recover it. 00:35:53.389 [2024-10-13 01:46:38.735883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.389 [2024-10-13 01:46:38.735912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.389 qpair failed and we were unable to recover it. 00:35:53.389 [2024-10-13 01:46:38.736097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.389 [2024-10-13 01:46:38.736155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.389 qpair failed and we were unable to recover it. 00:35:53.389 [2024-10-13 01:46:38.736257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.389 [2024-10-13 01:46:38.736288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.389 qpair failed and we were unable to recover it. 00:35:53.389 [2024-10-13 01:46:38.736391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.389 [2024-10-13 01:46:38.736432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.389 qpair failed and we were unable to recover it. 
00:35:53.389 [2024-10-13 01:46:38.736600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.389 [2024-10-13 01:46:38.736640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.389 qpair failed and we were unable to recover it. 00:35:53.389 [2024-10-13 01:46:38.736747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.389 [2024-10-13 01:46:38.736778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.389 qpair failed and we were unable to recover it. 00:35:53.389 [2024-10-13 01:46:38.736925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.389 [2024-10-13 01:46:38.736955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.389 qpair failed and we were unable to recover it. 00:35:53.389 [2024-10-13 01:46:38.737186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.389 [2024-10-13 01:46:38.737234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.389 qpair failed and we were unable to recover it. 00:35:53.389 [2024-10-13 01:46:38.737322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.389 [2024-10-13 01:46:38.737350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.389 qpair failed and we were unable to recover it. 00:35:53.389 [2024-10-13 01:46:38.737491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.389 [2024-10-13 01:46:38.737518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.389 qpair failed and we were unable to recover it. 00:35:53.389 [2024-10-13 01:46:38.737608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.389 [2024-10-13 01:46:38.737651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.389 qpair failed and we were unable to recover it. 00:35:53.389 [2024-10-13 01:46:38.737782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.389 [2024-10-13 01:46:38.737808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.389 qpair failed and we were unable to recover it. 00:35:53.389 [2024-10-13 01:46:38.737927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.389 [2024-10-13 01:46:38.737953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.389 qpair failed and we were unable to recover it. 00:35:53.389 [2024-10-13 01:46:38.738060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.389 [2024-10-13 01:46:38.738086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.389 qpair failed and we were unable to recover it. 
00:35:53.389 [2024-10-13 01:46:38.738245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.389 [2024-10-13 01:46:38.738279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.389 qpair failed and we were unable to recover it. 00:35:53.389 [2024-10-13 01:46:38.738365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.389 [2024-10-13 01:46:38.738408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.389 qpair failed and we were unable to recover it. 00:35:53.389 [2024-10-13 01:46:38.738512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.389 [2024-10-13 01:46:38.738553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.389 qpair failed and we were unable to recover it. 00:35:53.389 [2024-10-13 01:46:38.738648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.389 [2024-10-13 01:46:38.738675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.389 qpair failed and we were unable to recover it. 00:35:53.389 [2024-10-13 01:46:38.738761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.389 [2024-10-13 01:46:38.738787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.389 qpair failed and we were unable to recover it. 00:35:53.389 [2024-10-13 01:46:38.738968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.389 [2024-10-13 01:46:38.739011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.389 qpair failed and we were unable to recover it. 00:35:53.389 [2024-10-13 01:46:38.739229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.389 [2024-10-13 01:46:38.739259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.389 qpair failed and we were unable to recover it. 00:35:53.389 [2024-10-13 01:46:38.739346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.389 [2024-10-13 01:46:38.739374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.389 qpair failed and we were unable to recover it. 00:35:53.389 [2024-10-13 01:46:38.739524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.389 [2024-10-13 01:46:38.739552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.389 qpair failed and we were unable to recover it. 00:35:53.389 [2024-10-13 01:46:38.739670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.389 [2024-10-13 01:46:38.739696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.389 qpair failed and we were unable to recover it. 
00:35:53.389 [2024-10-13 01:46:38.739834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.389 [2024-10-13 01:46:38.739864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.389 qpair failed and we were unable to recover it. 00:35:53.389 [2024-10-13 01:46:38.739983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.389 [2024-10-13 01:46:38.740012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.389 qpair failed and we were unable to recover it. 00:35:53.389 [2024-10-13 01:46:38.740102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.389 [2024-10-13 01:46:38.740132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.389 qpair failed and we were unable to recover it. 00:35:53.389 [2024-10-13 01:46:38.740259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.389 [2024-10-13 01:46:38.740288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.389 qpair failed and we were unable to recover it. 00:35:53.389 [2024-10-13 01:46:38.740388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.389 [2024-10-13 01:46:38.740417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.389 qpair failed and we were unable to recover it. 00:35:53.389 [2024-10-13 01:46:38.740516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.389 [2024-10-13 01:46:38.740542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.389 qpair failed and we were unable to recover it. 00:35:53.389 [2024-10-13 01:46:38.740632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.389 [2024-10-13 01:46:38.740658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.389 qpair failed and we were unable to recover it. 00:35:53.389 [2024-10-13 01:46:38.740740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.389 [2024-10-13 01:46:38.740785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.390 qpair failed and we were unable to recover it. 00:35:53.390 [2024-10-13 01:46:38.740873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.390 [2024-10-13 01:46:38.740902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.390 qpair failed and we were unable to recover it. 00:35:53.390 [2024-10-13 01:46:38.741026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.390 [2024-10-13 01:46:38.741056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.390 qpair failed and we were unable to recover it. 
00:35:53.390 [2024-10-13 01:46:38.741179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.390 [2024-10-13 01:46:38.741207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.390 qpair failed and we were unable to recover it. 00:35:53.390 [2024-10-13 01:46:38.741321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.390 [2024-10-13 01:46:38.741361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.390 qpair failed and we were unable to recover it. 00:35:53.390 [2024-10-13 01:46:38.741497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.390 [2024-10-13 01:46:38.741557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.390 qpair failed and we were unable to recover it. 00:35:53.390 [2024-10-13 01:46:38.741687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.390 [2024-10-13 01:46:38.741720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.390 qpair failed and we were unable to recover it. 00:35:53.390 [2024-10-13 01:46:38.741825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.390 [2024-10-13 01:46:38.741857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.390 qpair failed and we were unable to recover it. 00:35:53.390 [2024-10-13 01:46:38.741952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.390 [2024-10-13 01:46:38.741982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.390 qpair failed and we were unable to recover it. 00:35:53.390 [2024-10-13 01:46:38.742080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.390 [2024-10-13 01:46:38.742110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.390 qpair failed and we were unable to recover it. 00:35:53.390 [2024-10-13 01:46:38.742242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.390 [2024-10-13 01:46:38.742272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.390 qpair failed and we were unable to recover it. 00:35:53.390 [2024-10-13 01:46:38.742431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.390 [2024-10-13 01:46:38.742457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.390 qpair failed and we were unable to recover it. 00:35:53.390 [2024-10-13 01:46:38.742551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.390 [2024-10-13 01:46:38.742579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.390 qpair failed and we were unable to recover it. 
00:35:53.390 [2024-10-13 01:46:38.742691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.390 [2024-10-13 01:46:38.742717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.390 qpair failed and we were unable to recover it. 00:35:53.390 [2024-10-13 01:46:38.742820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.390 [2024-10-13 01:46:38.742864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.390 qpair failed and we were unable to recover it. 00:35:53.390 [2024-10-13 01:46:38.743001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.390 [2024-10-13 01:46:38.743030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.390 qpair failed and we were unable to recover it. 00:35:53.390 [2024-10-13 01:46:38.743154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.390 [2024-10-13 01:46:38.743183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.390 qpair failed and we were unable to recover it. 00:35:53.390 [2024-10-13 01:46:38.743332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.390 [2024-10-13 01:46:38.743362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.390 qpair failed and we were unable to recover it. 00:35:53.390 [2024-10-13 01:46:38.743533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.390 [2024-10-13 01:46:38.743559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.390 qpair failed and we were unable to recover it. 00:35:53.390 [2024-10-13 01:46:38.743695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.390 [2024-10-13 01:46:38.743720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.390 qpair failed and we were unable to recover it. 00:35:53.390 [2024-10-13 01:46:38.743842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.390 [2024-10-13 01:46:38.743871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.390 qpair failed and we were unable to recover it. 00:35:53.390 [2024-10-13 01:46:38.743994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.390 [2024-10-13 01:46:38.744023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.390 qpair failed and we were unable to recover it. 00:35:53.390 [2024-10-13 01:46:38.744149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.390 [2024-10-13 01:46:38.744191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.390 qpair failed and we were unable to recover it. 
00:35:53.390 [2024-10-13 01:46:38.744354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.390 [2024-10-13 01:46:38.744381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.390 qpair failed and we were unable to recover it. 00:35:53.390 [2024-10-13 01:46:38.744477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.390 [2024-10-13 01:46:38.744505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.390 qpair failed and we were unable to recover it. 00:35:53.390 [2024-10-13 01:46:38.744622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.390 [2024-10-13 01:46:38.744648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.390 qpair failed and we were unable to recover it. 00:35:53.390 [2024-10-13 01:46:38.744773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.390 [2024-10-13 01:46:38.744801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.390 qpair failed and we were unable to recover it. 00:35:53.390 [2024-10-13 01:46:38.744986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.390 [2024-10-13 01:46:38.745014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.390 qpair failed and we were unable to recover it. 00:35:53.390 [2024-10-13 01:46:38.745107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.390 [2024-10-13 01:46:38.745135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.390 qpair failed and we were unable to recover it. 00:35:53.390 [2024-10-13 01:46:38.745274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.390 [2024-10-13 01:46:38.745334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.390 qpair failed and we were unable to recover it. 00:35:53.390 [2024-10-13 01:46:38.745428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.390 [2024-10-13 01:46:38.745468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.390 qpair failed and we were unable to recover it. 00:35:53.390 [2024-10-13 01:46:38.745598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.390 [2024-10-13 01:46:38.745626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.390 qpair failed and we were unable to recover it. 00:35:53.390 [2024-10-13 01:46:38.745790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.390 [2024-10-13 01:46:38.745819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.390 qpair failed and we were unable to recover it. 
00:35:53.390 [2024-10-13 01:46:38.745919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.390 [2024-10-13 01:46:38.745945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.390 qpair failed and we were unable to recover it. 00:35:53.390 [2024-10-13 01:46:38.746061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.390 [2024-10-13 01:46:38.746087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.390 qpair failed and we were unable to recover it. 00:35:53.390 [2024-10-13 01:46:38.746177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.390 [2024-10-13 01:46:38.746222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.390 qpair failed and we were unable to recover it. 00:35:53.390 [2024-10-13 01:46:38.746376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.390 [2024-10-13 01:46:38.746417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.390 qpair failed and we were unable to recover it. 00:35:53.390 [2024-10-13 01:46:38.746560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.390 [2024-10-13 01:46:38.746601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.390 qpair failed and we were unable to recover it. 00:35:53.390 [2024-10-13 01:46:38.746755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.390 [2024-10-13 01:46:38.746782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.390 qpair failed and we were unable to recover it. 00:35:53.390 [2024-10-13 01:46:38.746873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.390 [2024-10-13 01:46:38.746899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.390 qpair failed and we were unable to recover it. 00:35:53.391 [2024-10-13 01:46:38.747011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.391 [2024-10-13 01:46:38.747037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.391 qpair failed and we were unable to recover it. 00:35:53.391 [2024-10-13 01:46:38.747142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.391 [2024-10-13 01:46:38.747169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.391 qpair failed and we were unable to recover it. 00:35:53.391 [2024-10-13 01:46:38.747285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.391 [2024-10-13 01:46:38.747313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.391 qpair failed and we were unable to recover it. 
00:35:53.391 [2024-10-13 01:46:38.747438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.391 [2024-10-13 01:46:38.747486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.391 qpair failed and we were unable to recover it. 00:35:53.391 [2024-10-13 01:46:38.747610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.391 [2024-10-13 01:46:38.747638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.391 qpair failed and we were unable to recover it. 00:35:53.391 [2024-10-13 01:46:38.747763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.391 [2024-10-13 01:46:38.747791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.391 qpair failed and we were unable to recover it. 00:35:53.391 [2024-10-13 01:46:38.747913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.391 [2024-10-13 01:46:38.747958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.391 qpair failed and we were unable to recover it. 00:35:53.391 [2024-10-13 01:46:38.748067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.391 [2024-10-13 01:46:38.748101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.391 qpair failed and we were unable to recover it. 00:35:53.391 [2024-10-13 01:46:38.748197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.391 [2024-10-13 01:46:38.748228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.391 qpair failed and we were unable to recover it. 00:35:53.391 [2024-10-13 01:46:38.748350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.391 [2024-10-13 01:46:38.748393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.391 qpair failed and we were unable to recover it. 00:35:53.391 [2024-10-13 01:46:38.748533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.391 [2024-10-13 01:46:38.748566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.391 qpair failed and we were unable to recover it. 00:35:53.391 [2024-10-13 01:46:38.748709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.391 [2024-10-13 01:46:38.748736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.391 qpair failed and we were unable to recover it. 00:35:53.391 [2024-10-13 01:46:38.748855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.391 [2024-10-13 01:46:38.748883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.391 qpair failed and we were unable to recover it. 
00:35:53.391 [2024-10-13 01:46:38.749000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.391 [2024-10-13 01:46:38.749027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.391 qpair failed and we were unable to recover it. 00:35:53.391 [2024-10-13 01:46:38.749129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.391 [2024-10-13 01:46:38.749172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.391 qpair failed and we were unable to recover it. 00:35:53.391 [2024-10-13 01:46:38.749355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.391 [2024-10-13 01:46:38.749399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.391 qpair failed and we were unable to recover it. 00:35:53.391 [2024-10-13 01:46:38.749508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.391 [2024-10-13 01:46:38.749553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.391 qpair failed and we were unable to recover it. 00:35:53.391 [2024-10-13 01:46:38.749666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.391 [2024-10-13 01:46:38.749694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.391 qpair failed and we were unable to recover it. 00:35:53.391 [2024-10-13 01:46:38.749845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.391 [2024-10-13 01:46:38.749872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.391 qpair failed and we were unable to recover it. 00:35:53.391 [2024-10-13 01:46:38.750012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.391 [2024-10-13 01:46:38.750039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.391 qpair failed and we were unable to recover it. 00:35:53.391 [2024-10-13 01:46:38.750173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.391 [2024-10-13 01:46:38.750222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.391 qpair failed and we were unable to recover it. 00:35:53.391 [2024-10-13 01:46:38.750368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.391 [2024-10-13 01:46:38.750398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.391 qpair failed and we were unable to recover it. 00:35:53.391 [2024-10-13 01:46:38.750554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.391 [2024-10-13 01:46:38.750594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.391 qpair failed and we were unable to recover it. 
00:35:53.391 [2024-10-13 01:46:38.750691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.391 [2024-10-13 01:46:38.750719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.391 qpair failed and we were unable to recover it. 00:35:53.391 [2024-10-13 01:46:38.750929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.391 [2024-10-13 01:46:38.750980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.391 qpair failed and we were unable to recover it. 00:35:53.391 [2024-10-13 01:46:38.751096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.391 [2024-10-13 01:46:38.751144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.391 qpair failed and we were unable to recover it. 00:35:53.391 [2024-10-13 01:46:38.751295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.391 [2024-10-13 01:46:38.751342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.391 qpair failed and we were unable to recover it. 00:35:53.391 [2024-10-13 01:46:38.751442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.391 [2024-10-13 01:46:38.751479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.391 qpair failed and we were unable to recover it. 00:35:53.391 [2024-10-13 01:46:38.751619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.391 [2024-10-13 01:46:38.751646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.391 qpair failed and we were unable to recover it. 00:35:53.391 [2024-10-13 01:46:38.751761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.391 [2024-10-13 01:46:38.751787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.391 qpair failed and we were unable to recover it. 00:35:53.391 [2024-10-13 01:46:38.751916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.391 [2024-10-13 01:46:38.751945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.391 qpair failed and we were unable to recover it. 00:35:53.391 [2024-10-13 01:46:38.752041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.391 [2024-10-13 01:46:38.752072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.391 qpair failed and we were unable to recover it. 00:35:53.392 [2024-10-13 01:46:38.752160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.392 [2024-10-13 01:46:38.752190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.392 qpair failed and we were unable to recover it. 
00:35:53.392 [2024-10-13 01:46:38.752336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.392 [2024-10-13 01:46:38.752362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.392 qpair failed and we were unable to recover it. 00:35:53.392 [2024-10-13 01:46:38.752443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.392 [2024-10-13 01:46:38.752475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.392 qpair failed and we were unable to recover it. 00:35:53.392 [2024-10-13 01:46:38.752558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.392 [2024-10-13 01:46:38.752585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.392 qpair failed and we were unable to recover it. 00:35:53.392 [2024-10-13 01:46:38.752719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.392 [2024-10-13 01:46:38.752760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.392 qpair failed and we were unable to recover it. 00:35:53.392 [2024-10-13 01:46:38.752845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.392 [2024-10-13 01:46:38.752879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.392 qpair failed and we were unable to recover it. 00:35:53.392 [2024-10-13 01:46:38.752985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.392 [2024-10-13 01:46:38.753016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.392 qpair failed and we were unable to recover it. 00:35:53.392 [2024-10-13 01:46:38.753189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.392 [2024-10-13 01:46:38.753248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.392 qpair failed and we were unable to recover it. 00:35:53.392 [2024-10-13 01:46:38.753409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.392 [2024-10-13 01:46:38.753449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.392 qpair failed and we were unable to recover it. 00:35:53.392 [2024-10-13 01:46:38.753626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.392 [2024-10-13 01:46:38.753658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.392 qpair failed and we were unable to recover it. 00:35:53.392 [2024-10-13 01:46:38.753824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.392 [2024-10-13 01:46:38.753879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.392 qpair failed and we were unable to recover it. 
00:35:53.392 [2024-10-13 01:46:38.754112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.392 [2024-10-13 01:46:38.754166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.392 qpair failed and we were unable to recover it. 00:35:53.392 [2024-10-13 01:46:38.754314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.392 [2024-10-13 01:46:38.754343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.392 qpair failed and we were unable to recover it. 00:35:53.392 [2024-10-13 01:46:38.754437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.392 [2024-10-13 01:46:38.754491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.392 qpair failed and we were unable to recover it. 00:35:53.392 [2024-10-13 01:46:38.754597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.392 [2024-10-13 01:46:38.754627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.392 qpair failed and we were unable to recover it. 00:35:53.392 [2024-10-13 01:46:38.754723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.392 [2024-10-13 01:46:38.754753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.392 qpair failed and we were unable to recover it. 00:35:53.392 [2024-10-13 01:46:38.754840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.392 [2024-10-13 01:46:38.754869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.392 qpair failed and we were unable to recover it. 00:35:53.392 [2024-10-13 01:46:38.755006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.392 [2024-10-13 01:46:38.755057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.392 qpair failed and we were unable to recover it. 00:35:53.392 [2024-10-13 01:46:38.755244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.392 [2024-10-13 01:46:38.755293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.392 qpair failed and we were unable to recover it. 00:35:53.392 [2024-10-13 01:46:38.755390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.392 [2024-10-13 01:46:38.755420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.392 qpair failed and we were unable to recover it. 00:35:53.392 [2024-10-13 01:46:38.755553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.392 [2024-10-13 01:46:38.755579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.392 qpair failed and we were unable to recover it. 
00:35:53.392 [2024-10-13 01:46:38.755681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.392 [2024-10-13 01:46:38.755711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.392 qpair failed and we were unable to recover it. 00:35:53.392 [2024-10-13 01:46:38.755814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.392 [2024-10-13 01:46:38.755843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.392 qpair failed and we were unable to recover it. 00:35:53.392 [2024-10-13 01:46:38.756005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.392 [2024-10-13 01:46:38.756033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.392 qpair failed and we were unable to recover it. 00:35:53.392 [2024-10-13 01:46:38.756160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.392 [2024-10-13 01:46:38.756188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.392 qpair failed and we were unable to recover it. 00:35:53.392 [2024-10-13 01:46:38.756389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.392 [2024-10-13 01:46:38.756417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.392 qpair failed and we were unable to recover it. 00:35:53.392 [2024-10-13 01:46:38.756542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.392 [2024-10-13 01:46:38.756567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.392 qpair failed and we were unable to recover it. 00:35:53.392 [2024-10-13 01:46:38.756692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.392 [2024-10-13 01:46:38.756733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.392 qpair failed and we were unable to recover it. 00:35:53.392 [2024-10-13 01:46:38.756858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.392 [2024-10-13 01:46:38.756887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.392 qpair failed and we were unable to recover it. 00:35:53.392 [2024-10-13 01:46:38.757042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.392 [2024-10-13 01:46:38.757104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.392 qpair failed and we were unable to recover it. 00:35:53.392 [2024-10-13 01:46:38.757220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.392 [2024-10-13 01:46:38.757247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.392 qpair failed and we were unable to recover it. 
00:35:53.392 [2024-10-13 01:46:38.757413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.392 [2024-10-13 01:46:38.757443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.392 qpair failed and we were unable to recover it. 00:35:53.392 [2024-10-13 01:46:38.757589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.392 [2024-10-13 01:46:38.757618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.392 qpair failed and we were unable to recover it. 00:35:53.392 [2024-10-13 01:46:38.757731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.392 [2024-10-13 01:46:38.757757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.392 qpair failed and we were unable to recover it. 00:35:53.392 [2024-10-13 01:46:38.757917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.392 [2024-10-13 01:46:38.757961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.392 qpair failed and we were unable to recover it. 00:35:53.392 [2024-10-13 01:46:38.758132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.392 [2024-10-13 01:46:38.758175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.392 qpair failed and we were unable to recover it. 00:35:53.392 [2024-10-13 01:46:38.758318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.392 [2024-10-13 01:46:38.758345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.392 qpair failed and we were unable to recover it. 00:35:53.392 [2024-10-13 01:46:38.758433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.392 [2024-10-13 01:46:38.758461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.392 qpair failed and we were unable to recover it. 00:35:53.392 [2024-10-13 01:46:38.758583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.392 [2024-10-13 01:46:38.758610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.392 qpair failed and we were unable to recover it. 00:35:53.392 [2024-10-13 01:46:38.758728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.392 [2024-10-13 01:46:38.758755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.392 qpair failed and we were unable to recover it. 00:35:53.392 [2024-10-13 01:46:38.758836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.393 [2024-10-13 01:46:38.758862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.393 qpair failed and we were unable to recover it. 
00:35:53.393 [2024-10-13 01:46:38.759012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.393 [2024-10-13 01:46:38.759068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.393 qpair failed and we were unable to recover it. 00:35:53.393 [2024-10-13 01:46:38.759241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.393 [2024-10-13 01:46:38.759292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.393 qpair failed and we were unable to recover it. 00:35:53.393 [2024-10-13 01:46:38.759425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.393 [2024-10-13 01:46:38.759454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.393 qpair failed and we were unable to recover it. 00:35:53.393 [2024-10-13 01:46:38.759589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.393 [2024-10-13 01:46:38.759629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.393 qpair failed and we were unable to recover it. 00:35:53.393 [2024-10-13 01:46:38.759753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.393 [2024-10-13 01:46:38.759800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.393 qpair failed and we were unable to recover it. 00:35:53.393 [2024-10-13 01:46:38.760017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.393 [2024-10-13 01:46:38.760044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.393 qpair failed and we were unable to recover it. 00:35:53.393 [2024-10-13 01:46:38.760233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.393 [2024-10-13 01:46:38.760280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.393 qpair failed and we were unable to recover it. 00:35:53.393 [2024-10-13 01:46:38.760376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.393 [2024-10-13 01:46:38.760404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.393 qpair failed and we were unable to recover it. 00:35:53.393 [2024-10-13 01:46:38.760529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.393 [2024-10-13 01:46:38.760573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.393 qpair failed and we were unable to recover it. 00:35:53.393 [2024-10-13 01:46:38.760685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.393 [2024-10-13 01:46:38.760711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.393 qpair failed and we were unable to recover it. 
00:35:53.393 [2024-10-13 01:46:38.760867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.393 [2024-10-13 01:46:38.760895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.393 qpair failed and we were unable to recover it. 00:35:53.393 [2024-10-13 01:46:38.761021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.393 [2024-10-13 01:46:38.761050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.393 qpair failed and we were unable to recover it. 00:35:53.393 [2024-10-13 01:46:38.761189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.393 [2024-10-13 01:46:38.761230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.393 qpair failed and we were unable to recover it. 00:35:53.393 [2024-10-13 01:46:38.761381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.393 [2024-10-13 01:46:38.761409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.393 qpair failed and we were unable to recover it. 00:35:53.393 [2024-10-13 01:46:38.761575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.393 [2024-10-13 01:46:38.761604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.393 qpair failed and we were unable to recover it. 00:35:53.393 [2024-10-13 01:46:38.761713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.393 [2024-10-13 01:46:38.761744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.393 qpair failed and we were unable to recover it. 00:35:53.393 [2024-10-13 01:46:38.761872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.393 [2024-10-13 01:46:38.761902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.393 qpair failed and we were unable to recover it. 00:35:53.393 [2024-10-13 01:46:38.762073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.393 [2024-10-13 01:46:38.762107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.393 qpair failed and we were unable to recover it. 00:35:53.393 [2024-10-13 01:46:38.762283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.393 [2024-10-13 01:46:38.762313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.393 qpair failed and we were unable to recover it. 00:35:53.393 [2024-10-13 01:46:38.762463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.393 [2024-10-13 01:46:38.762514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.393 qpair failed and we were unable to recover it. 
00:35:53.393 [2024-10-13 01:46:38.762593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.393 [2024-10-13 01:46:38.762619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.393 qpair failed and we were unable to recover it. 00:35:53.393 [2024-10-13 01:46:38.762713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.393 [2024-10-13 01:46:38.762752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.393 qpair failed and we were unable to recover it. 00:35:53.393 [2024-10-13 01:46:38.762866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.393 [2024-10-13 01:46:38.762897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.393 qpair failed and we were unable to recover it. 00:35:53.393 [2024-10-13 01:46:38.762999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.393 [2024-10-13 01:46:38.763026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.393 qpair failed and we were unable to recover it. 00:35:53.393 [2024-10-13 01:46:38.763188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.393 [2024-10-13 01:46:38.763234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.393 qpair failed and we were unable to recover it. 00:35:53.393 [2024-10-13 01:46:38.763378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.393 [2024-10-13 01:46:38.763405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.393 qpair failed and we were unable to recover it. 00:35:53.393 [2024-10-13 01:46:38.763545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.393 [2024-10-13 01:46:38.763575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.393 qpair failed and we were unable to recover it. 00:35:53.393 [2024-10-13 01:46:38.763768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.393 [2024-10-13 01:46:38.763796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.393 qpair failed and we were unable to recover it. 00:35:53.393 [2024-10-13 01:46:38.763941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.393 [2024-10-13 01:46:38.763968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.393 qpair failed and we were unable to recover it. 00:35:53.393 [2024-10-13 01:46:38.764110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.393 [2024-10-13 01:46:38.764141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.393 qpair failed and we were unable to recover it. 
00:35:53.393 [2024-10-13 01:46:38.764295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.393 [2024-10-13 01:46:38.764327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.393 qpair failed and we were unable to recover it. 00:35:53.393 [2024-10-13 01:46:38.764421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.393 [2024-10-13 01:46:38.764456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.393 qpair failed and we were unable to recover it. 00:35:53.393 [2024-10-13 01:46:38.764560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.393 [2024-10-13 01:46:38.764605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.393 qpair failed and we were unable to recover it. 00:35:53.393 [2024-10-13 01:46:38.764741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.393 [2024-10-13 01:46:38.764771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.393 qpair failed and we were unable to recover it. 00:35:53.393 [2024-10-13 01:46:38.764871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.393 [2024-10-13 01:46:38.764901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.393 qpair failed and we were unable to recover it. 00:35:53.393 [2024-10-13 01:46:38.765064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.393 [2024-10-13 01:46:38.765094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.393 qpair failed and we were unable to recover it. 00:35:53.393 [2024-10-13 01:46:38.765252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.393 [2024-10-13 01:46:38.765282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.393 qpair failed and we were unable to recover it. 00:35:53.393 [2024-10-13 01:46:38.765409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.394 [2024-10-13 01:46:38.765438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.394 qpair failed and we were unable to recover it. 00:35:53.394 [2024-10-13 01:46:38.765558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.394 [2024-10-13 01:46:38.765588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.394 qpair failed and we were unable to recover it. 00:35:53.394 [2024-10-13 01:46:38.765750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.394 [2024-10-13 01:46:38.765794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.394 qpair failed and we were unable to recover it. 
00:35:53.394 [2024-10-13 01:46:38.765928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.394 [2024-10-13 01:46:38.765970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.394 qpair failed and we were unable to recover it. 00:35:53.394 [2024-10-13 01:46:38.766060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.394 [2024-10-13 01:46:38.766087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.394 qpair failed and we were unable to recover it. 00:35:53.394 [2024-10-13 01:46:38.766227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.394 [2024-10-13 01:46:38.766253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.394 qpair failed and we were unable to recover it. 00:35:53.394 [2024-10-13 01:46:38.766393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.394 [2024-10-13 01:46:38.766419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.394 qpair failed and we were unable to recover it. 00:35:53.394 [2024-10-13 01:46:38.766551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.394 [2024-10-13 01:46:38.766595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.394 qpair failed and we were unable to recover it. 00:35:53.394 [2024-10-13 01:46:38.766730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.394 [2024-10-13 01:46:38.766760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.394 qpair failed and we were unable to recover it. 00:35:53.394 [2024-10-13 01:46:38.766908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.394 [2024-10-13 01:46:38.766952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.394 qpair failed and we were unable to recover it. 00:35:53.394 [2024-10-13 01:46:38.767064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.394 [2024-10-13 01:46:38.767091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.394 qpair failed and we were unable to recover it. 00:35:53.394 [2024-10-13 01:46:38.767234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.394 [2024-10-13 01:46:38.767262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.394 qpair failed and we were unable to recover it. 00:35:53.394 [2024-10-13 01:46:38.767378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.394 [2024-10-13 01:46:38.767405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.394 qpair failed and we were unable to recover it. 
00:35:53.394 [2024-10-13 01:46:38.767514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.394 [2024-10-13 01:46:38.767542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.394 qpair failed and we were unable to recover it. 00:35:53.394 [2024-10-13 01:46:38.767656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.394 [2024-10-13 01:46:38.767683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.394 qpair failed and we were unable to recover it. 00:35:53.394 [2024-10-13 01:46:38.767826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.394 [2024-10-13 01:46:38.767853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.394 qpair failed and we were unable to recover it. 00:35:53.394 [2024-10-13 01:46:38.767949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.394 [2024-10-13 01:46:38.767980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.394 qpair failed and we were unable to recover it. 00:35:53.394 [2024-10-13 01:46:38.768160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.394 [2024-10-13 01:46:38.768208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.394 qpair failed and we were unable to recover it. 00:35:53.394 [2024-10-13 01:46:38.768297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.394 [2024-10-13 01:46:38.768324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.394 qpair failed and we were unable to recover it. 00:35:53.394 [2024-10-13 01:46:38.768468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.394 [2024-10-13 01:46:38.768505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.394 qpair failed and we were unable to recover it. 00:35:53.394 [2024-10-13 01:46:38.768606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.394 [2024-10-13 01:46:38.768636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.394 qpair failed and we were unable to recover it. 00:35:53.394 [2024-10-13 01:46:38.768790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.394 [2024-10-13 01:46:38.768834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.394 qpair failed and we were unable to recover it. 00:35:53.394 [2024-10-13 01:46:38.768969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.394 [2024-10-13 01:46:38.769012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.394 qpair failed and we were unable to recover it. 
00:35:53.394 [2024-10-13 01:46:38.769137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.394 [2024-10-13 01:46:38.769166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.394 qpair failed and we were unable to recover it. 00:35:53.394 [2024-10-13 01:46:38.769279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.394 [2024-10-13 01:46:38.769306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.394 qpair failed and we were unable to recover it. 00:35:53.394 [2024-10-13 01:46:38.769433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.394 [2024-10-13 01:46:38.769484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.394 qpair failed and we were unable to recover it. 00:35:53.394 [2024-10-13 01:46:38.769734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.394 [2024-10-13 01:46:38.769765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.394 qpair failed and we were unable to recover it. 00:35:53.394 [2024-10-13 01:46:38.769890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.394 [2024-10-13 01:46:38.769919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.394 qpair failed and we were unable to recover it. 00:35:53.394 [2024-10-13 01:46:38.770061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.394 [2024-10-13 01:46:38.770090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.394 qpair failed and we were unable to recover it. 00:35:53.394 [2024-10-13 01:46:38.770215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.394 [2024-10-13 01:46:38.770245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.394 qpair failed and we were unable to recover it. 00:35:53.394 [2024-10-13 01:46:38.770397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.394 [2024-10-13 01:46:38.770426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.394 qpair failed and we were unable to recover it. 00:35:53.394 [2024-10-13 01:46:38.770547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.394 [2024-10-13 01:46:38.770607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.394 qpair failed and we were unable to recover it. 00:35:53.394 [2024-10-13 01:46:38.770732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.394 [2024-10-13 01:46:38.770763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.394 qpair failed and we were unable to recover it. 
00:35:53.394 [2024-10-13 01:46:38.770862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.394 [2024-10-13 01:46:38.770895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.394 qpair failed and we were unable to recover it. 00:35:53.394 [2024-10-13 01:46:38.771077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.394 [2024-10-13 01:46:38.771120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.395 qpair failed and we were unable to recover it. 00:35:53.395 [2024-10-13 01:46:38.771275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.395 [2024-10-13 01:46:38.771303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.395 qpair failed and we were unable to recover it. 00:35:53.395 [2024-10-13 01:46:38.771453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.395 [2024-10-13 01:46:38.771487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.395 qpair failed and we were unable to recover it. 00:35:53.395 [2024-10-13 01:46:38.771609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.395 [2024-10-13 01:46:38.771637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.395 qpair failed and we were unable to recover it. 00:35:53.395 [2024-10-13 01:46:38.771793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.395 [2024-10-13 01:46:38.771838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.395 qpair failed and we were unable to recover it. 00:35:53.395 [2024-10-13 01:46:38.771933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.395 [2024-10-13 01:46:38.771978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.395 qpair failed and we were unable to recover it. 00:35:53.395 [2024-10-13 01:46:38.772056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.395 [2024-10-13 01:46:38.772083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.395 qpair failed and we were unable to recover it. 00:35:53.395 [2024-10-13 01:46:38.772167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.395 [2024-10-13 01:46:38.772195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.395 qpair failed and we were unable to recover it. 00:35:53.395 [2024-10-13 01:46:38.772349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.395 [2024-10-13 01:46:38.772388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.395 qpair failed and we were unable to recover it. 
00:35:53.395 [2024-10-13 01:46:38.772528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.395 [2024-10-13 01:46:38.772569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.395 qpair failed and we were unable to recover it. 00:35:53.395 [2024-10-13 01:46:38.772660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.395 [2024-10-13 01:46:38.772689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.395 qpair failed and we were unable to recover it. 00:35:53.395 [2024-10-13 01:46:38.772836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.395 [2024-10-13 01:46:38.772886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.395 qpair failed and we were unable to recover it. 00:35:53.395 [2024-10-13 01:46:38.772980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.395 [2024-10-13 01:46:38.773011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.395 qpair failed and we were unable to recover it. 00:35:53.395 [2024-10-13 01:46:38.773138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.395 [2024-10-13 01:46:38.773168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.395 qpair failed and we were unable to recover it. 00:35:53.395 [2024-10-13 01:46:38.773333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.395 [2024-10-13 01:46:38.773363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.395 qpair failed and we were unable to recover it. 00:35:53.395 [2024-10-13 01:46:38.773482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.395 [2024-10-13 01:46:38.773539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.395 qpair failed and we were unable to recover it. 00:35:53.395 [2024-10-13 01:46:38.773632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.395 [2024-10-13 01:46:38.773660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.395 qpair failed and we were unable to recover it. 00:35:53.395 [2024-10-13 01:46:38.773818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.395 [2024-10-13 01:46:38.773848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.395 qpair failed and we were unable to recover it. 00:35:53.395 [2024-10-13 01:46:38.773963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.395 [2024-10-13 01:46:38.774012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.395 qpair failed and we were unable to recover it. 
00:35:53.395 [2024-10-13 01:46:38.774147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.395 [2024-10-13 01:46:38.774189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.395 qpair failed and we were unable to recover it. 00:35:53.395 [2024-10-13 01:46:38.774308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.395 [2024-10-13 01:46:38.774351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.395 qpair failed and we were unable to recover it. 00:35:53.395 [2024-10-13 01:46:38.774468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.395 [2024-10-13 01:46:38.774500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.395 qpair failed and we were unable to recover it. 00:35:53.395 [2024-10-13 01:46:38.774607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.395 [2024-10-13 01:46:38.774633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.395 qpair failed and we were unable to recover it. 00:35:53.395 [2024-10-13 01:46:38.774768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.395 [2024-10-13 01:46:38.774796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.395 qpair failed and we were unable to recover it. 00:35:53.395 [2024-10-13 01:46:38.774890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.395 [2024-10-13 01:46:38.774922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.395 qpair failed and we were unable to recover it. 00:35:53.395 [2024-10-13 01:46:38.775018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.395 [2024-10-13 01:46:38.775048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.395 qpair failed and we were unable to recover it. 00:35:53.395 [2024-10-13 01:46:38.775204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.395 [2024-10-13 01:46:38.775251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.395 qpair failed and we were unable to recover it. 00:35:53.395 [2024-10-13 01:46:38.775397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.395 [2024-10-13 01:46:38.775424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.395 qpair failed and we were unable to recover it. 00:35:53.395 [2024-10-13 01:46:38.775568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.395 [2024-10-13 01:46:38.775609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.395 qpair failed and we were unable to recover it. 
00:35:53.395 [2024-10-13 01:46:38.775703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.395 [2024-10-13 01:46:38.775730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.395 qpair failed and we were unable to recover it. 00:35:53.395 [2024-10-13 01:46:38.775881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.395 [2024-10-13 01:46:38.775926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.395 qpair failed and we were unable to recover it. 00:35:53.395 [2024-10-13 01:46:38.776044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.395 [2024-10-13 01:46:38.776094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.395 qpair failed and we were unable to recover it. 00:35:53.395 [2024-10-13 01:46:38.776226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.395 [2024-10-13 01:46:38.776274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.395 qpair failed and we were unable to recover it. 00:35:53.395 [2024-10-13 01:46:38.776403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.395 [2024-10-13 01:46:38.776444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.395 qpair failed and we were unable to recover it. 00:35:53.395 [2024-10-13 01:46:38.776594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.395 [2024-10-13 01:46:38.776622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.395 qpair failed and we were unable to recover it. 00:35:53.395 [2024-10-13 01:46:38.776740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.395 [2024-10-13 01:46:38.776769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.395 qpair failed and we were unable to recover it. 00:35:53.395 [2024-10-13 01:46:38.776870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.395 [2024-10-13 01:46:38.776915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.395 qpair failed and we were unable to recover it. 00:35:53.395 [2024-10-13 01:46:38.777036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.395 [2024-10-13 01:46:38.777066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.395 qpair failed and we were unable to recover it. 00:35:53.395 [2024-10-13 01:46:38.777196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.395 [2024-10-13 01:46:38.777226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.395 qpair failed and we were unable to recover it. 
00:35:53.395 [2024-10-13 01:46:38.777321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.395 [2024-10-13 01:46:38.777352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.395 qpair failed and we were unable to recover it. 00:35:53.395 [2024-10-13 01:46:38.777491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.396 [2024-10-13 01:46:38.777523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.396 qpair failed and we were unable to recover it. 00:35:53.396 [2024-10-13 01:46:38.777662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.396 [2024-10-13 01:46:38.777692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.396 qpair failed and we were unable to recover it. 00:35:53.396 [2024-10-13 01:46:38.777818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.396 [2024-10-13 01:46:38.777847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.396 qpair failed and we were unable to recover it. 00:35:53.396 [2024-10-13 01:46:38.778055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.396 [2024-10-13 01:46:38.778085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.396 qpair failed and we were unable to recover it. 00:35:53.396 [2024-10-13 01:46:38.778183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.396 [2024-10-13 01:46:38.778214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.396 qpair failed and we were unable to recover it. 00:35:53.396 [2024-10-13 01:46:38.778347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.396 [2024-10-13 01:46:38.778379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.396 qpair failed and we were unable to recover it. 00:35:53.396 [2024-10-13 01:46:38.778521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.396 [2024-10-13 01:46:38.778552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.396 qpair failed and we were unable to recover it. 00:35:53.396 [2024-10-13 01:46:38.778640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.396 [2024-10-13 01:46:38.778669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.396 qpair failed and we were unable to recover it. 00:35:53.396 [2024-10-13 01:46:38.778797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.396 [2024-10-13 01:46:38.778842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.396 qpair failed and we were unable to recover it. 
00:35:53.396 [2024-10-13 01:46:38.779029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.396 [2024-10-13 01:46:38.779057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.396 qpair failed and we were unable to recover it. 00:35:53.396 [2024-10-13 01:46:38.779173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.396 [2024-10-13 01:46:38.779200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.396 qpair failed and we were unable to recover it. 00:35:53.396 [2024-10-13 01:46:38.779343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.396 [2024-10-13 01:46:38.779371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.396 qpair failed and we were unable to recover it. 00:35:53.396 [2024-10-13 01:46:38.779446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.396 [2024-10-13 01:46:38.779479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.396 qpair failed and we were unable to recover it. 00:35:53.396 [2024-10-13 01:46:38.779590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.396 [2024-10-13 01:46:38.779633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.396 qpair failed and we were unable to recover it. 00:35:53.396 [2024-10-13 01:46:38.779790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.396 [2024-10-13 01:46:38.779840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.396 qpair failed and we were unable to recover it. 00:35:53.396 [2024-10-13 01:46:38.780018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.396 [2024-10-13 01:46:38.780067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.396 qpair failed and we were unable to recover it. 00:35:53.396 [2024-10-13 01:46:38.780210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.396 [2024-10-13 01:46:38.780258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.396 qpair failed and we were unable to recover it. 00:35:53.396 [2024-10-13 01:46:38.780350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.396 [2024-10-13 01:46:38.780379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.396 qpair failed and we were unable to recover it. 00:35:53.396 [2024-10-13 01:46:38.780525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.396 [2024-10-13 01:46:38.780570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.396 qpair failed and we were unable to recover it. 
00:35:53.396 [2024-10-13 01:46:38.780681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.396 [2024-10-13 01:46:38.780726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.396 qpair failed and we were unable to recover it. 00:35:53.396 [2024-10-13 01:46:38.780858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.396 [2024-10-13 01:46:38.780886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.396 qpair failed and we were unable to recover it. 00:35:53.396 [2024-10-13 01:46:38.781009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.396 [2024-10-13 01:46:38.781039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.396 qpair failed and we were unable to recover it. 00:35:53.396 [2024-10-13 01:46:38.781178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.396 [2024-10-13 01:46:38.781207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.396 qpair failed and we were unable to recover it. 00:35:53.396 [2024-10-13 01:46:38.781305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.396 [2024-10-13 01:46:38.781334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.396 qpair failed and we were unable to recover it. 00:35:53.396 [2024-10-13 01:46:38.781468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.396 [2024-10-13 01:46:38.781540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.396 qpair failed and we were unable to recover it. 00:35:53.396 [2024-10-13 01:46:38.781641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.396 [2024-10-13 01:46:38.781669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.396 qpair failed and we were unable to recover it. 00:35:53.396 [2024-10-13 01:46:38.781799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.396 [2024-10-13 01:46:38.781844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.396 qpair failed and we were unable to recover it. 00:35:53.396 [2024-10-13 01:46:38.781932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.396 [2024-10-13 01:46:38.781965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.396 qpair failed and we were unable to recover it. 00:35:53.396 [2024-10-13 01:46:38.782096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.396 [2024-10-13 01:46:38.782141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.396 qpair failed and we were unable to recover it. 
00:35:53.396 [2024-10-13 01:46:38.782237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.396 [2024-10-13 01:46:38.782278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.396 qpair failed and we were unable to recover it. 00:35:53.396 [2024-10-13 01:46:38.782404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.396 [2024-10-13 01:46:38.782432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.396 qpair failed and we were unable to recover it. 00:35:53.396 [2024-10-13 01:46:38.782552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.396 [2024-10-13 01:46:38.782580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.396 qpair failed and we were unable to recover it. 00:35:53.396 [2024-10-13 01:46:38.782666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.397 [2024-10-13 01:46:38.782693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.397 qpair failed and we were unable to recover it. 00:35:53.397 [2024-10-13 01:46:38.782814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.397 [2024-10-13 01:46:38.782842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.397 qpair failed and we were unable to recover it. 00:35:53.397 [2024-10-13 01:46:38.782965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.397 [2024-10-13 01:46:38.782992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.397 qpair failed and we were unable to recover it. 00:35:53.397 [2024-10-13 01:46:38.783098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.397 [2024-10-13 01:46:38.783127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.397 qpair failed and we were unable to recover it. 00:35:53.397 [2024-10-13 01:46:38.783253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.397 [2024-10-13 01:46:38.783283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.397 qpair failed and we were unable to recover it. 00:35:53.397 [2024-10-13 01:46:38.783434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.397 [2024-10-13 01:46:38.783463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.397 qpair failed and we were unable to recover it. 00:35:53.397 [2024-10-13 01:46:38.783573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.397 [2024-10-13 01:46:38.783599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.397 qpair failed and we were unable to recover it. 
00:35:53.397 [2024-10-13 01:46:38.783716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.397 [2024-10-13 01:46:38.783742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.397 qpair failed and we were unable to recover it. 00:35:53.397 [2024-10-13 01:46:38.783853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.397 [2024-10-13 01:46:38.783882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.397 qpair failed and we were unable to recover it. 00:35:53.397 [2024-10-13 01:46:38.784008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.397 [2024-10-13 01:46:38.784037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.397 qpair failed and we were unable to recover it. 00:35:53.397 [2024-10-13 01:46:38.784130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.397 [2024-10-13 01:46:38.784173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.397 qpair failed and we were unable to recover it. 00:35:53.397 [2024-10-13 01:46:38.784314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.397 [2024-10-13 01:46:38.784343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.397 qpair failed and we were unable to recover it. 00:35:53.397 [2024-10-13 01:46:38.784481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.397 [2024-10-13 01:46:38.784508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.397 qpair failed and we were unable to recover it. 00:35:53.397 [2024-10-13 01:46:38.784648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.397 [2024-10-13 01:46:38.784674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.397 qpair failed and we were unable to recover it. 00:35:53.397 [2024-10-13 01:46:38.784798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.397 [2024-10-13 01:46:38.784827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.397 qpair failed and we were unable to recover it. 00:35:53.397 [2024-10-13 01:46:38.784937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.397 [2024-10-13 01:46:38.784964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.397 qpair failed and we were unable to recover it. 00:35:53.397 [2024-10-13 01:46:38.785139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.397 [2024-10-13 01:46:38.785171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.397 qpair failed and we were unable to recover it. 
00:35:53.397 [2024-10-13 01:46:38.785300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.397 [2024-10-13 01:46:38.785330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.397 qpair failed and we were unable to recover it. 00:35:53.397 [2024-10-13 01:46:38.785463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.397 [2024-10-13 01:46:38.785495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.397 qpair failed and we were unable to recover it. 00:35:53.397 [2024-10-13 01:46:38.785616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.397 [2024-10-13 01:46:38.785643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.397 qpair failed and we were unable to recover it. 00:35:53.397 [2024-10-13 01:46:38.785728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.397 [2024-10-13 01:46:38.785755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.397 qpair failed and we were unable to recover it. 00:35:53.397 [2024-10-13 01:46:38.785888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.397 [2024-10-13 01:46:38.785918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.397 qpair failed and we were unable to recover it. 00:35:53.397 [2024-10-13 01:46:38.786034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.397 [2024-10-13 01:46:38.786069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.397 qpair failed and we were unable to recover it. 00:35:53.397 [2024-10-13 01:46:38.786193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.397 [2024-10-13 01:46:38.786222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.397 qpair failed and we were unable to recover it. 00:35:53.397 [2024-10-13 01:46:38.786383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.397 [2024-10-13 01:46:38.786409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.397 qpair failed and we were unable to recover it. 00:35:53.397 [2024-10-13 01:46:38.786492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.397 [2024-10-13 01:46:38.786519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.397 qpair failed and we were unable to recover it. 00:35:53.397 [2024-10-13 01:46:38.786628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.397 [2024-10-13 01:46:38.786654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.397 qpair failed and we were unable to recover it. 
00:35:53.397 [2024-10-13 01:46:38.786820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.397 [2024-10-13 01:46:38.786849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.397 qpair failed and we were unable to recover it. 00:35:53.397 [2024-10-13 01:46:38.786975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.397 [2024-10-13 01:46:38.787004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.397 qpair failed and we were unable to recover it. 00:35:53.397 [2024-10-13 01:46:38.787129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.397 [2024-10-13 01:46:38.787158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.397 qpair failed and we were unable to recover it. 00:35:53.397 [2024-10-13 01:46:38.787317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.397 [2024-10-13 01:46:38.787377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.397 qpair failed and we were unable to recover it. 00:35:53.397 [2024-10-13 01:46:38.787503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.397 [2024-10-13 01:46:38.787531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.397 qpair failed and we were unable to recover it. 00:35:53.397 [2024-10-13 01:46:38.787622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.397 [2024-10-13 01:46:38.787650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.397 qpair failed and we were unable to recover it. 00:35:53.397 [2024-10-13 01:46:38.787763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.397 [2024-10-13 01:46:38.787791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.397 qpair failed and we were unable to recover it. 00:35:53.397 [2024-10-13 01:46:38.787920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.397 [2024-10-13 01:46:38.787949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.397 qpair failed and we were unable to recover it. 00:35:53.397 [2024-10-13 01:46:38.788079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.397 [2024-10-13 01:46:38.788109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.397 qpair failed and we were unable to recover it. 00:35:53.397 [2024-10-13 01:46:38.788246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.397 [2024-10-13 01:46:38.788275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.397 qpair failed and we were unable to recover it. 
00:35:53.397 [2024-10-13 01:46:38.788384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.397 [2024-10-13 01:46:38.788411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.397 qpair failed and we were unable to recover it. 00:35:53.397 [2024-10-13 01:46:38.788540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.397 [2024-10-13 01:46:38.788567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.397 qpair failed and we were unable to recover it. 00:35:53.397 [2024-10-13 01:46:38.788709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.398 [2024-10-13 01:46:38.788735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.398 qpair failed and we were unable to recover it. 00:35:53.398 [2024-10-13 01:46:38.788907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.398 [2024-10-13 01:46:38.788940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.398 qpair failed and we were unable to recover it. 00:35:53.398 [2024-10-13 01:46:38.789136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.398 [2024-10-13 01:46:38.789183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.398 qpair failed and we were unable to recover it. 00:35:53.398 [2024-10-13 01:46:38.789278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.398 [2024-10-13 01:46:38.789307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.398 qpair failed and we were unable to recover it. 00:35:53.398 [2024-10-13 01:46:38.789435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.398 [2024-10-13 01:46:38.789462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.398 qpair failed and we were unable to recover it. 00:35:53.398 [2024-10-13 01:46:38.789631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.398 [2024-10-13 01:46:38.789658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.398 qpair failed and we were unable to recover it. 00:35:53.398 [2024-10-13 01:46:38.789737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.398 [2024-10-13 01:46:38.789763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.398 qpair failed and we were unable to recover it. 00:35:53.398 [2024-10-13 01:46:38.789875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.398 [2024-10-13 01:46:38.789901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.398 qpair failed and we were unable to recover it. 
00:35:53.398 [2024-10-13 01:46:38.789985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.398 [2024-10-13 01:46:38.790012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.398 qpair failed and we were unable to recover it. 00:35:53.398 [2024-10-13 01:46:38.790124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.398 [2024-10-13 01:46:38.790166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.398 qpair failed and we were unable to recover it. 00:35:53.398 [2024-10-13 01:46:38.790385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.398 [2024-10-13 01:46:38.790421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.398 qpair failed and we were unable to recover it. 00:35:53.398 [2024-10-13 01:46:38.790586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.398 [2024-10-13 01:46:38.790627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.398 qpair failed and we were unable to recover it. 00:35:53.398 [2024-10-13 01:46:38.790750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.398 [2024-10-13 01:46:38.790778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.398 qpair failed and we were unable to recover it. 00:35:53.398 [2024-10-13 01:46:38.790907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.398 [2024-10-13 01:46:38.790936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.398 qpair failed and we were unable to recover it. 00:35:53.398 [2024-10-13 01:46:38.791089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.398 [2024-10-13 01:46:38.791118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.398 qpair failed and we were unable to recover it. 00:35:53.398 [2024-10-13 01:46:38.791286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.398 [2024-10-13 01:46:38.791342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.398 qpair failed and we were unable to recover it. 00:35:53.398 [2024-10-13 01:46:38.791453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.398 [2024-10-13 01:46:38.791485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.398 qpair failed and we were unable to recover it. 00:35:53.398 [2024-10-13 01:46:38.791624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.398 [2024-10-13 01:46:38.791651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.398 qpair failed and we were unable to recover it. 
00:35:53.398 [2024-10-13 01:46:38.791737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.398 [2024-10-13 01:46:38.791763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.398 qpair failed and we were unable to recover it. 00:35:53.398 [2024-10-13 01:46:38.791862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.398 [2024-10-13 01:46:38.791891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.398 qpair failed and we were unable to recover it. 00:35:53.398 [2024-10-13 01:46:38.792076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.398 [2024-10-13 01:46:38.792143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.398 qpair failed and we were unable to recover it. 00:35:53.398 [2024-10-13 01:46:38.792264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.398 [2024-10-13 01:46:38.792292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.398 qpair failed and we were unable to recover it. 00:35:53.398 [2024-10-13 01:46:38.792408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.398 [2024-10-13 01:46:38.792448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.398 qpair failed and we were unable to recover it. 00:35:53.398 [2024-10-13 01:46:38.792585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.398 [2024-10-13 01:46:38.792626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.398 qpair failed and we were unable to recover it. 00:35:53.398 [2024-10-13 01:46:38.792756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.398 [2024-10-13 01:46:38.792785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.398 qpair failed and we were unable to recover it. 00:35:53.398 [2024-10-13 01:46:38.792920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.398 [2024-10-13 01:46:38.792951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.398 qpair failed and we were unable to recover it. 00:35:53.398 [2024-10-13 01:46:38.793159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.398 [2024-10-13 01:46:38.793189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.398 qpair failed and we were unable to recover it. 00:35:53.398 [2024-10-13 01:46:38.793300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.398 [2024-10-13 01:46:38.793327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.398 qpair failed and we were unable to recover it. 
00:35:53.398 [2024-10-13 01:46:38.793446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.398 [2024-10-13 01:46:38.793480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.398 qpair failed and we were unable to recover it. 00:35:53.398 [2024-10-13 01:46:38.793579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.398 [2024-10-13 01:46:38.793606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.398 qpair failed and we were unable to recover it. 00:35:53.398 [2024-10-13 01:46:38.793696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.398 [2024-10-13 01:46:38.793725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.398 qpair failed and we were unable to recover it. 00:35:53.398 [2024-10-13 01:46:38.793843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.398 [2024-10-13 01:46:38.793870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.398 qpair failed and we were unable to recover it. 00:35:53.398 [2024-10-13 01:46:38.794002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.398 [2024-10-13 01:46:38.794032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.398 qpair failed and we were unable to recover it. 00:35:53.398 [2024-10-13 01:46:38.794122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.398 [2024-10-13 01:46:38.794152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.398 qpair failed and we were unable to recover it. 00:35:53.398 [2024-10-13 01:46:38.794270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.398 [2024-10-13 01:46:38.794315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.398 qpair failed and we were unable to recover it. 00:35:53.398 [2024-10-13 01:46:38.794452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.398 [2024-10-13 01:46:38.794496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.398 qpair failed and we were unable to recover it. 00:35:53.398 [2024-10-13 01:46:38.794616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.398 [2024-10-13 01:46:38.794643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.398 qpair failed and we were unable to recover it. 00:35:53.398 [2024-10-13 01:46:38.794806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.398 [2024-10-13 01:46:38.794837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.398 qpair failed and we were unable to recover it. 
00:35:53.398 [2024-10-13 01:46:38.795010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.398 [2024-10-13 01:46:38.795058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.398 qpair failed and we were unable to recover it. 00:35:53.398 [2024-10-13 01:46:38.795154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.399 [2024-10-13 01:46:38.795184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.399 qpair failed and we were unable to recover it. 00:35:53.399 [2024-10-13 01:46:38.795279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.399 [2024-10-13 01:46:38.795324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.399 qpair failed and we were unable to recover it. 00:35:53.399 [2024-10-13 01:46:38.795456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.399 [2024-10-13 01:46:38.795505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.399 qpair failed and we were unable to recover it. 00:35:53.399 [2024-10-13 01:46:38.795635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.399 [2024-10-13 01:46:38.795663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.399 qpair failed and we were unable to recover it. 00:35:53.399 [2024-10-13 01:46:38.795750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.399 [2024-10-13 01:46:38.795777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.399 qpair failed and we were unable to recover it. 00:35:53.399 [2024-10-13 01:46:38.795895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.399 [2024-10-13 01:46:38.795921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.399 qpair failed and we were unable to recover it. 00:35:53.399 [2024-10-13 01:46:38.796075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.399 [2024-10-13 01:46:38.796109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.399 qpair failed and we were unable to recover it. 00:35:53.399 [2024-10-13 01:46:38.796237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.399 [2024-10-13 01:46:38.796263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.399 qpair failed and we were unable to recover it. 00:35:53.399 [2024-10-13 01:46:38.796403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.399 [2024-10-13 01:46:38.796432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.399 qpair failed and we were unable to recover it. 
00:35:53.399 [2024-10-13 01:46:38.796566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.399 [2024-10-13 01:46:38.796592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.399 qpair failed and we were unable to recover it. 00:35:53.399 [2024-10-13 01:46:38.796708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.399 [2024-10-13 01:46:38.796734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.399 qpair failed and we were unable to recover it. 00:35:53.399 [2024-10-13 01:46:38.796894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.399 [2024-10-13 01:46:38.796923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.399 qpair failed and we were unable to recover it. 00:35:53.399 [2024-10-13 01:46:38.797030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.399 [2024-10-13 01:46:38.797059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.399 qpair failed and we were unable to recover it. 00:35:53.399 [2024-10-13 01:46:38.797207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.399 [2024-10-13 01:46:38.797236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.399 qpair failed and we were unable to recover it. 00:35:53.399 [2024-10-13 01:46:38.797353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.399 [2024-10-13 01:46:38.797382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.399 qpair failed and we were unable to recover it. 00:35:53.399 [2024-10-13 01:46:38.797475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.399 [2024-10-13 01:46:38.797505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.399 qpair failed and we were unable to recover it. 00:35:53.399 [2024-10-13 01:46:38.797617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.399 [2024-10-13 01:46:38.797643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.399 qpair failed and we were unable to recover it. 00:35:53.399 [2024-10-13 01:46:38.797761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.399 [2024-10-13 01:46:38.797787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.399 qpair failed and we were unable to recover it. 00:35:53.399 [2024-10-13 01:46:38.797926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.399 [2024-10-13 01:46:38.797956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.399 qpair failed and we were unable to recover it. 
00:35:53.399 [2024-10-13 01:46:38.798141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.399 [2024-10-13 01:46:38.798169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.399 qpair failed and we were unable to recover it. 00:35:53.399 [2024-10-13 01:46:38.798264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.399 [2024-10-13 01:46:38.798297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.399 qpair failed and we were unable to recover it. 00:35:53.399 [2024-10-13 01:46:38.798400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.399 [2024-10-13 01:46:38.798444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.399 qpair failed and we were unable to recover it. 00:35:53.399 [2024-10-13 01:46:38.798610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.399 [2024-10-13 01:46:38.798651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.399 qpair failed and we were unable to recover it. 00:35:53.399 [2024-10-13 01:46:38.798793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.399 [2024-10-13 01:46:38.798840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.399 qpair failed and we were unable to recover it. 00:35:53.399 [2024-10-13 01:46:38.799004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.399 [2024-10-13 01:46:38.799049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.399 qpair failed and we were unable to recover it. 00:35:53.399 [2024-10-13 01:46:38.799165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.399 [2024-10-13 01:46:38.799199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.399 qpair failed and we were unable to recover it. 00:35:53.399 [2024-10-13 01:46:38.799304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.399 [2024-10-13 01:46:38.799335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.399 qpair failed and we were unable to recover it. 00:35:53.399 [2024-10-13 01:46:38.799461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.399 [2024-10-13 01:46:38.799511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.399 qpair failed and we were unable to recover it. 00:35:53.399 [2024-10-13 01:46:38.799605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.399 [2024-10-13 01:46:38.799632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.399 qpair failed and we were unable to recover it. 
00:35:53.399 [2024-10-13 01:46:38.799719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.399 [2024-10-13 01:46:38.799746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.399 qpair failed and we were unable to recover it. 00:35:53.399 [2024-10-13 01:46:38.799862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.399 [2024-10-13 01:46:38.799889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.399 qpair failed and we were unable to recover it. 00:35:53.399 [2024-10-13 01:46:38.799976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.399 [2024-10-13 01:46:38.800009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.399 qpair failed and we were unable to recover it. 00:35:53.399 [2024-10-13 01:46:38.800183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.399 [2024-10-13 01:46:38.800231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.399 qpair failed and we were unable to recover it. 00:35:53.399 [2024-10-13 01:46:38.800334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.399 [2024-10-13 01:46:38.800374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.399 qpair failed and we were unable to recover it. 00:35:53.399 [2024-10-13 01:46:38.800500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.399 [2024-10-13 01:46:38.800529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.399 qpair failed and we were unable to recover it. 00:35:53.399 [2024-10-13 01:46:38.800618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.399 [2024-10-13 01:46:38.800647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.399 qpair failed and we were unable to recover it. 00:35:53.399 [2024-10-13 01:46:38.800810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.399 [2024-10-13 01:46:38.800854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.400 qpair failed and we were unable to recover it. 00:35:53.400 [2024-10-13 01:46:38.801006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.400 [2024-10-13 01:46:38.801036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.400 qpair failed and we were unable to recover it. 00:35:53.400 [2024-10-13 01:46:38.801166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.400 [2024-10-13 01:46:38.801202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.400 qpair failed and we were unable to recover it. 
00:35:53.400 [2024-10-13 01:46:38.801343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.400 [2024-10-13 01:46:38.801370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.400 qpair failed and we were unable to recover it. 00:35:53.400 [2024-10-13 01:46:38.801492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.400 [2024-10-13 01:46:38.801519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.400 qpair failed and we were unable to recover it. 00:35:53.400 [2024-10-13 01:46:38.801664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.400 [2024-10-13 01:46:38.801691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.400 qpair failed and we were unable to recover it. 00:35:53.400 [2024-10-13 01:46:38.801835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.400 [2024-10-13 01:46:38.801884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.400 qpair failed and we were unable to recover it. 00:35:53.400 [2024-10-13 01:46:38.802059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.400 [2024-10-13 01:46:38.802108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.400 qpair failed and we were unable to recover it. 00:35:53.400 [2024-10-13 01:46:38.802301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.400 [2024-10-13 01:46:38.802328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.400 qpair failed and we were unable to recover it. 00:35:53.400 [2024-10-13 01:46:38.802443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.400 [2024-10-13 01:46:38.802479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.400 qpair failed and we were unable to recover it. 00:35:53.400 [2024-10-13 01:46:38.802621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.400 [2024-10-13 01:46:38.802652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.400 qpair failed and we were unable to recover it. 00:35:53.400 [2024-10-13 01:46:38.802746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.400 [2024-10-13 01:46:38.802775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.400 qpair failed and we were unable to recover it. 00:35:53.400 [2024-10-13 01:46:38.802880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.400 [2024-10-13 01:46:38.802915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.400 qpair failed and we were unable to recover it. 
00:35:53.400 [2024-10-13 01:46:38.803026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.400 [2024-10-13 01:46:38.803057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.400 qpair failed and we were unable to recover it. 00:35:53.400 [2024-10-13 01:46:38.803153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.400 [2024-10-13 01:46:38.803184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.400 qpair failed and we were unable to recover it. 00:35:53.400 [2024-10-13 01:46:38.803326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.400 [2024-10-13 01:46:38.803353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.400 qpair failed and we were unable to recover it. 00:35:53.400 [2024-10-13 01:46:38.803483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.400 [2024-10-13 01:46:38.803512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.400 qpair failed and we were unable to recover it. 00:35:53.400 [2024-10-13 01:46:38.803627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.400 [2024-10-13 01:46:38.803672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.400 qpair failed and we were unable to recover it. 00:35:53.400 [2024-10-13 01:46:38.803805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.400 [2024-10-13 01:46:38.803836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.400 qpair failed and we were unable to recover it. 00:35:53.400 [2024-10-13 01:46:38.804047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.400 [2024-10-13 01:46:38.804076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.400 qpair failed and we were unable to recover it. 00:35:53.400 [2024-10-13 01:46:38.804203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.400 [2024-10-13 01:46:38.804231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.400 qpair failed and we were unable to recover it. 00:35:53.400 [2024-10-13 01:46:38.804360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.400 [2024-10-13 01:46:38.804389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.400 qpair failed and we were unable to recover it. 00:35:53.400 [2024-10-13 01:46:38.804528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.400 [2024-10-13 01:46:38.804557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.400 qpair failed and we were unable to recover it. 
00:35:53.400 [2024-10-13 01:46:38.804699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.400 [2024-10-13 01:46:38.804729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.400 qpair failed and we were unable to recover it. 00:35:53.400 [2024-10-13 01:46:38.804860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.400 [2024-10-13 01:46:38.804890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.400 qpair failed and we were unable to recover it. 00:35:53.400 [2024-10-13 01:46:38.805004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.400 [2024-10-13 01:46:38.805031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.400 qpair failed and we were unable to recover it. 00:35:53.400 [2024-10-13 01:46:38.805154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.400 [2024-10-13 01:46:38.805185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.400 qpair failed and we were unable to recover it. 00:35:53.400 [2024-10-13 01:46:38.805329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.400 [2024-10-13 01:46:38.805356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.400 qpair failed and we were unable to recover it. 00:35:53.400 [2024-10-13 01:46:38.805479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.400 [2024-10-13 01:46:38.805509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.400 qpair failed and we were unable to recover it. 00:35:53.400 [2024-10-13 01:46:38.805607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.400 [2024-10-13 01:46:38.805635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.400 qpair failed and we were unable to recover it. 00:35:53.400 [2024-10-13 01:46:38.805783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.400 [2024-10-13 01:46:38.805827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.400 qpair failed and we were unable to recover it. 00:35:53.400 [2024-10-13 01:46:38.805985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.400 [2024-10-13 01:46:38.806014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.400 qpair failed and we were unable to recover it. 00:35:53.400 [2024-10-13 01:46:38.806130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.400 [2024-10-13 01:46:38.806159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.400 qpair failed and we were unable to recover it. 
00:35:53.400 [2024-10-13 01:46:38.806314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.400 [2024-10-13 01:46:38.806343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.400 qpair failed and we were unable to recover it. 00:35:53.400 [2024-10-13 01:46:38.806468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.400 [2024-10-13 01:46:38.806523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.400 qpair failed and we were unable to recover it. 00:35:53.400 [2024-10-13 01:46:38.806641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.401 [2024-10-13 01:46:38.806668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.401 qpair failed and we were unable to recover it. 00:35:53.401 [2024-10-13 01:46:38.806790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.401 [2024-10-13 01:46:38.806821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.401 qpair failed and we were unable to recover it. 00:35:53.401 [2024-10-13 01:46:38.806998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.401 [2024-10-13 01:46:38.807049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.401 qpair failed and we were unable to recover it. 00:35:53.401 [2024-10-13 01:46:38.807177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.401 [2024-10-13 01:46:38.807207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.401 qpair failed and we were unable to recover it. 00:35:53.401 [2024-10-13 01:46:38.807310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.401 [2024-10-13 01:46:38.807338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.401 qpair failed and we were unable to recover it. 00:35:53.401 [2024-10-13 01:46:38.807430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.401 [2024-10-13 01:46:38.807458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.401 qpair failed and we were unable to recover it. 00:35:53.401 [2024-10-13 01:46:38.807586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.401 [2024-10-13 01:46:38.807612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.401 qpair failed and we were unable to recover it. 00:35:53.401 [2024-10-13 01:46:38.807767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.401 [2024-10-13 01:46:38.807802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.401 qpair failed and we were unable to recover it. 
00:35:53.401 [2024-10-13 01:46:38.808005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.401 [2024-10-13 01:46:38.808053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.401 qpair failed and we were unable to recover it. 00:35:53.401 [2024-10-13 01:46:38.808192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.401 [2024-10-13 01:46:38.808242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.401 qpair failed and we were unable to recover it. 00:35:53.401 [2024-10-13 01:46:38.808339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.401 [2024-10-13 01:46:38.808368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.401 qpair failed and we were unable to recover it. 00:35:53.401 [2024-10-13 01:46:38.808516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.401 [2024-10-13 01:46:38.808557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.401 qpair failed and we were unable to recover it. 00:35:53.401 [2024-10-13 01:46:38.808707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.401 [2024-10-13 01:46:38.808747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.401 qpair failed and we were unable to recover it. 00:35:53.401 [2024-10-13 01:46:38.808881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.401 [2024-10-13 01:46:38.808912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.401 qpair failed and we were unable to recover it. 00:35:53.401 [2024-10-13 01:46:38.809037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.401 [2024-10-13 01:46:38.809066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.401 qpair failed and we were unable to recover it. 00:35:53.401 [2024-10-13 01:46:38.809212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.401 [2024-10-13 01:46:38.809239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.401 qpair failed and we were unable to recover it. 00:35:53.401 [2024-10-13 01:46:38.809388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.401 [2024-10-13 01:46:38.809419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.401 qpair failed and we were unable to recover it. 00:35:53.401 [2024-10-13 01:46:38.809571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.401 [2024-10-13 01:46:38.809599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.401 qpair failed and we were unable to recover it. 
00:35:53.401 [2024-10-13 01:46:38.809689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.401 [2024-10-13 01:46:38.809717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.401 qpair failed and we were unable to recover it. 00:35:53.401 [2024-10-13 01:46:38.809807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.401 [2024-10-13 01:46:38.809834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.401 qpair failed and we were unable to recover it. 00:35:53.401 [2024-10-13 01:46:38.809985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.401 [2024-10-13 01:46:38.810011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.401 qpair failed and we were unable to recover it. 00:35:53.401 [2024-10-13 01:46:38.810160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.401 [2024-10-13 01:46:38.810190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.401 qpair failed and we were unable to recover it. 00:35:53.401 [2024-10-13 01:46:38.810346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.401 [2024-10-13 01:46:38.810375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.401 qpair failed and we were unable to recover it. 00:35:53.401 [2024-10-13 01:46:38.810467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.401 [2024-10-13 01:46:38.810519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.401 qpair failed and we were unable to recover it. 00:35:53.401 [2024-10-13 01:46:38.810632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.401 [2024-10-13 01:46:38.810658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.401 qpair failed and we were unable to recover it. 00:35:53.401 [2024-10-13 01:46:38.810772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.401 [2024-10-13 01:46:38.810798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.401 qpair failed and we were unable to recover it. 00:35:53.401 [2024-10-13 01:46:38.810907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.401 [2024-10-13 01:46:38.810937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.401 qpair failed and we were unable to recover it. 00:35:53.401 [2024-10-13 01:46:38.811045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.401 [2024-10-13 01:46:38.811072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.401 qpair failed and we were unable to recover it. 
00:35:53.401 [2024-10-13 01:46:38.811186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.401 [2024-10-13 01:46:38.811227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.401 qpair failed and we were unable to recover it. 00:35:53.401 [2024-10-13 01:46:38.811342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.401 [2024-10-13 01:46:38.811368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.401 qpair failed and we were unable to recover it. 00:35:53.401 [2024-10-13 01:46:38.811512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.401 [2024-10-13 01:46:38.811539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.401 qpair failed and we were unable to recover it. 00:35:53.401 [2024-10-13 01:46:38.811655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.401 [2024-10-13 01:46:38.811682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.401 qpair failed and we were unable to recover it. 00:35:53.401 [2024-10-13 01:46:38.811811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.401 [2024-10-13 01:46:38.811842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.401 qpair failed and we were unable to recover it. 00:35:53.401 [2024-10-13 01:46:38.811947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.401 [2024-10-13 01:46:38.811975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.401 qpair failed and we were unable to recover it. 00:35:53.401 [2024-10-13 01:46:38.812100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.401 [2024-10-13 01:46:38.812126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.401 qpair failed and we were unable to recover it. 00:35:53.401 [2024-10-13 01:46:38.812249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.401 [2024-10-13 01:46:38.812278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.401 qpair failed and we were unable to recover it. 00:35:53.401 [2024-10-13 01:46:38.812410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.401 [2024-10-13 01:46:38.812437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.401 qpair failed and we were unable to recover it. 00:35:53.401 [2024-10-13 01:46:38.812536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.401 [2024-10-13 01:46:38.812564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.401 qpair failed and we were unable to recover it. 
00:35:53.401 [2024-10-13 01:46:38.812712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.401 [2024-10-13 01:46:38.812738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.401 qpair failed and we were unable to recover it. 00:35:53.401 [2024-10-13 01:46:38.812944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.402 [2024-10-13 01:46:38.812973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.402 qpair failed and we were unable to recover it. 00:35:53.402 [2024-10-13 01:46:38.813097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.402 [2024-10-13 01:46:38.813125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.402 qpair failed and we were unable to recover it. 00:35:53.402 [2024-10-13 01:46:38.813246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.402 [2024-10-13 01:46:38.813275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.402 qpair failed and we were unable to recover it. 00:35:53.402 [2024-10-13 01:46:38.813400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.402 [2024-10-13 01:46:38.813427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.402 qpair failed and we were unable to recover it. 00:35:53.402 [2024-10-13 01:46:38.813521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.402 [2024-10-13 01:46:38.813547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.402 qpair failed and we were unable to recover it. 00:35:53.402 [2024-10-13 01:46:38.813688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.402 [2024-10-13 01:46:38.813714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.402 qpair failed and we were unable to recover it. 00:35:53.402 [2024-10-13 01:46:38.813863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.402 [2024-10-13 01:46:38.813890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.402 qpair failed and we were unable to recover it. 00:35:53.402 [2024-10-13 01:46:38.813982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.402 [2024-10-13 01:46:38.814026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.402 qpair failed and we were unable to recover it. 00:35:53.402 [2024-10-13 01:46:38.814178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.402 [2024-10-13 01:46:38.814207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.402 qpair failed and we were unable to recover it. 
00:35:53.402 [2024-10-13 01:46:38.814344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.402 [2024-10-13 01:46:38.814374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.402 qpair failed and we were unable to recover it. 00:35:53.402 [2024-10-13 01:46:38.814501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.402 [2024-10-13 01:46:38.814543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.402 qpair failed and we were unable to recover it. 00:35:53.402 [2024-10-13 01:46:38.814686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.402 [2024-10-13 01:46:38.814715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.402 qpair failed and we were unable to recover it. 00:35:53.402 [2024-10-13 01:46:38.814828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.402 [2024-10-13 01:46:38.814854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.402 qpair failed and we were unable to recover it. 00:35:53.402 [2024-10-13 01:46:38.814945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.402 [2024-10-13 01:46:38.814988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.402 qpair failed and we were unable to recover it. 00:35:53.402 [2024-10-13 01:46:38.815208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.402 [2024-10-13 01:46:38.815262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.402 qpair failed and we were unable to recover it. 00:35:53.402 [2024-10-13 01:46:38.815416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.402 [2024-10-13 01:46:38.815442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.402 qpair failed and we were unable to recover it. 00:35:53.402 [2024-10-13 01:46:38.815559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.402 [2024-10-13 01:46:38.815585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.402 qpair failed and we were unable to recover it. 00:35:53.402 [2024-10-13 01:46:38.815704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.402 [2024-10-13 01:46:38.815730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.402 qpair failed and we were unable to recover it. 00:35:53.402 [2024-10-13 01:46:38.815852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.402 [2024-10-13 01:46:38.815878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.402 qpair failed and we were unable to recover it. 
00:35:53.402 [2024-10-13 01:46:38.816022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.402 [2024-10-13 01:46:38.816048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.402 qpair failed and we were unable to recover it. 00:35:53.402 [2024-10-13 01:46:38.816157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.402 [2024-10-13 01:46:38.816188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.402 qpair failed and we were unable to recover it. 00:35:53.402 [2024-10-13 01:46:38.816310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.402 [2024-10-13 01:46:38.816354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.402 qpair failed and we were unable to recover it. 00:35:53.402 [2024-10-13 01:46:38.816449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.402 [2024-10-13 01:46:38.816511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.402 qpair failed and we were unable to recover it. 00:35:53.402 [2024-10-13 01:46:38.816628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.402 [2024-10-13 01:46:38.816654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.402 qpair failed and we were unable to recover it. 00:35:53.402 [2024-10-13 01:46:38.816774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.402 [2024-10-13 01:46:38.816800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.402 qpair failed and we were unable to recover it. 00:35:53.402 [2024-10-13 01:46:38.816914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.402 [2024-10-13 01:46:38.816958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.402 qpair failed and we were unable to recover it. 00:35:53.402 [2024-10-13 01:46:38.817082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.402 [2024-10-13 01:46:38.817111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.402 qpair failed and we were unable to recover it. 00:35:53.402 [2024-10-13 01:46:38.817255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.402 [2024-10-13 01:46:38.817299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.402 qpair failed and we were unable to recover it. 00:35:53.402 [2024-10-13 01:46:38.817423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.402 [2024-10-13 01:46:38.817463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.402 qpair failed and we were unable to recover it. 
00:35:53.402 [2024-10-13 01:46:38.817562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.402 [2024-10-13 01:46:38.817590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.402 qpair failed and we were unable to recover it. 00:35:53.402 [2024-10-13 01:46:38.817668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.402 [2024-10-13 01:46:38.817695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.402 qpair failed and we were unable to recover it. 00:35:53.402 [2024-10-13 01:46:38.817841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.402 [2024-10-13 01:46:38.817867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.402 qpair failed and we were unable to recover it. 00:35:53.402 [2024-10-13 01:46:38.817980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.402 [2024-10-13 01:46:38.818024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.402 qpair failed and we were unable to recover it. 00:35:53.402 [2024-10-13 01:46:38.818178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.402 [2024-10-13 01:46:38.818208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.402 qpair failed and we were unable to recover it. 00:35:53.402 [2024-10-13 01:46:38.818359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.402 [2024-10-13 01:46:38.818388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.402 qpair failed and we were unable to recover it. 00:35:53.402 [2024-10-13 01:46:38.818484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.402 [2024-10-13 01:46:38.818529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.402 qpair failed and we were unable to recover it. 00:35:53.402 [2024-10-13 01:46:38.818621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.402 [2024-10-13 01:46:38.818647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.402 qpair failed and we were unable to recover it. 00:35:53.402 [2024-10-13 01:46:38.818742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.402 [2024-10-13 01:46:38.818786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.402 qpair failed and we were unable to recover it. 00:35:53.402 [2024-10-13 01:46:38.818913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.402 [2024-10-13 01:46:38.818941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.402 qpair failed and we were unable to recover it. 
00:35:53.402 [2024-10-13 01:46:38.819068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.402 [2024-10-13 01:46:38.819096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.402 qpair failed and we were unable to recover it. 00:35:53.402 [2024-10-13 01:46:38.819193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.403 [2024-10-13 01:46:38.819221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.403 qpair failed and we were unable to recover it. 00:35:53.403 [2024-10-13 01:46:38.819341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.403 [2024-10-13 01:46:38.819375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.403 qpair failed and we were unable to recover it. 00:35:53.403 [2024-10-13 01:46:38.819484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.403 [2024-10-13 01:46:38.819530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.403 qpair failed and we were unable to recover it. 00:35:53.403 [2024-10-13 01:46:38.819620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.403 [2024-10-13 01:46:38.819648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.403 qpair failed and we were unable to recover it. 00:35:53.403 [2024-10-13 01:46:38.819738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.403 [2024-10-13 01:46:38.819764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.403 qpair failed and we were unable to recover it. 00:35:53.403 [2024-10-13 01:46:38.819856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.403 [2024-10-13 01:46:38.819899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.403 qpair failed and we were unable to recover it. 00:35:53.403 [2024-10-13 01:46:38.820028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.403 [2024-10-13 01:46:38.820057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.403 qpair failed and we were unable to recover it. 00:35:53.403 [2024-10-13 01:46:38.820183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.403 [2024-10-13 01:46:38.820213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.403 qpair failed and we were unable to recover it. 00:35:53.403 [2024-10-13 01:46:38.820319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.403 [2024-10-13 01:46:38.820364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.403 qpair failed and we were unable to recover it. 
00:35:53.403 [2024-10-13 01:46:38.820510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.403 [2024-10-13 01:46:38.820547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.403 qpair failed and we were unable to recover it. 00:35:53.403 [2024-10-13 01:46:38.820640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.403 [2024-10-13 01:46:38.820669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.403 qpair failed and we were unable to recover it. 00:35:53.403 [2024-10-13 01:46:38.820810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.403 [2024-10-13 01:46:38.820838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.403 qpair failed and we were unable to recover it. 00:35:53.403 [2024-10-13 01:46:38.820940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.403 [2024-10-13 01:46:38.820970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.403 qpair failed and we were unable to recover it. 00:35:53.403 [2024-10-13 01:46:38.821094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.403 [2024-10-13 01:46:38.821124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.403 qpair failed and we were unable to recover it. 00:35:53.403 [2024-10-13 01:46:38.821243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.403 [2024-10-13 01:46:38.821272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.403 qpair failed and we were unable to recover it. 00:35:53.403 [2024-10-13 01:46:38.821435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.403 [2024-10-13 01:46:38.821504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.403 qpair failed and we were unable to recover it. 00:35:53.403 [2024-10-13 01:46:38.821599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.403 [2024-10-13 01:46:38.821628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.403 qpair failed and we were unable to recover it. 00:35:53.403 [2024-10-13 01:46:38.821761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.403 [2024-10-13 01:46:38.821792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.403 qpair failed and we were unable to recover it. 00:35:53.403 [2024-10-13 01:46:38.821896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.403 [2024-10-13 01:46:38.821924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.403 qpair failed and we were unable to recover it. 
00:35:53.403 [2024-10-13 01:46:38.822010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.403 [2024-10-13 01:46:38.822038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.403 qpair failed and we were unable to recover it. 00:35:53.403 [2024-10-13 01:46:38.822157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.403 [2024-10-13 01:46:38.822185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.403 qpair failed and we were unable to recover it. 00:35:53.403 [2024-10-13 01:46:38.822309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.403 [2024-10-13 01:46:38.822338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.403 qpair failed and we were unable to recover it. 00:35:53.403 [2024-10-13 01:46:38.822434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.403 [2024-10-13 01:46:38.822465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.403 qpair failed and we were unable to recover it. 00:35:53.403 [2024-10-13 01:46:38.822612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.403 [2024-10-13 01:46:38.822639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.403 qpair failed and we were unable to recover it. 00:35:53.403 [2024-10-13 01:46:38.822729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.403 [2024-10-13 01:46:38.822756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.403 qpair failed and we were unable to recover it. 00:35:53.403 [2024-10-13 01:46:38.822845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.403 [2024-10-13 01:46:38.822872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.403 qpair failed and we were unable to recover it. 00:35:53.403 [2024-10-13 01:46:38.822958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.403 [2024-10-13 01:46:38.823003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.403 qpair failed and we were unable to recover it. 00:35:53.403 [2024-10-13 01:46:38.823107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.403 [2024-10-13 01:46:38.823136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.403 qpair failed and we were unable to recover it. 00:35:53.403 [2024-10-13 01:46:38.823261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.403 [2024-10-13 01:46:38.823290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.403 qpair failed and we were unable to recover it. 
00:35:53.403 [2024-10-13 01:46:38.823415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.403 [2024-10-13 01:46:38.823442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.403 qpair failed and we were unable to recover it. 00:35:53.403 [2024-10-13 01:46:38.823530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.403 [2024-10-13 01:46:38.823557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.403 qpair failed and we were unable to recover it. 00:35:53.403 [2024-10-13 01:46:38.823670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.403 [2024-10-13 01:46:38.823696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.403 qpair failed and we were unable to recover it. 00:35:53.403 [2024-10-13 01:46:38.823815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.403 [2024-10-13 01:46:38.823845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.403 qpair failed and we were unable to recover it. 00:35:53.403 [2024-10-13 01:46:38.823998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.403 [2024-10-13 01:46:38.824026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.403 qpair failed and we were unable to recover it. 00:35:53.403 [2024-10-13 01:46:38.824151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.403 [2024-10-13 01:46:38.824182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.403 qpair failed and we were unable to recover it. 00:35:53.403 [2024-10-13 01:46:38.824275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.403 [2024-10-13 01:46:38.824306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.403 qpair failed and we were unable to recover it. 00:35:53.403 [2024-10-13 01:46:38.824404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.403 [2024-10-13 01:46:38.824434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.403 qpair failed and we were unable to recover it. 00:35:53.403 [2024-10-13 01:46:38.824553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.403 [2024-10-13 01:46:38.824582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.403 qpair failed and we were unable to recover it. 00:35:53.403 [2024-10-13 01:46:38.824661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.404 [2024-10-13 01:46:38.824703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.404 qpair failed and we were unable to recover it. 
00:35:53.404 [2024-10-13 01:46:38.824805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.404 [2024-10-13 01:46:38.824835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.404 qpair failed and we were unable to recover it. 00:35:53.404 [2024-10-13 01:46:38.824935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.404 [2024-10-13 01:46:38.824963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.404 qpair failed and we were unable to recover it. 00:35:53.404 [2024-10-13 01:46:38.825083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.404 [2024-10-13 01:46:38.825112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.404 qpair failed and we were unable to recover it. 00:35:53.404 [2024-10-13 01:46:38.825266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.404 [2024-10-13 01:46:38.825295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.404 qpair failed and we were unable to recover it. 00:35:53.404 [2024-10-13 01:46:38.825467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.404 [2024-10-13 01:46:38.825497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.404 qpair failed and we were unable to recover it. 00:35:53.404 [2024-10-13 01:46:38.825692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.404 [2024-10-13 01:46:38.825722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.404 qpair failed and we were unable to recover it. 00:35:53.404 [2024-10-13 01:46:38.825886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.404 [2024-10-13 01:46:38.825930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.404 qpair failed and we were unable to recover it. 00:35:53.404 [2024-10-13 01:46:38.826063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.404 [2024-10-13 01:46:38.826108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.404 qpair failed and we were unable to recover it. 00:35:53.404 [2024-10-13 01:46:38.826194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.404 [2024-10-13 01:46:38.826222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.404 qpair failed and we were unable to recover it. 00:35:53.404 [2024-10-13 01:46:38.826307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.404 [2024-10-13 01:46:38.826334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.404 qpair failed and we were unable to recover it. 
00:35:53.404 [2024-10-13 01:46:38.826448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.404 [2024-10-13 01:46:38.826487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.404 qpair failed and we were unable to recover it. 00:35:53.404 [2024-10-13 01:46:38.826653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.404 [2024-10-13 01:46:38.826698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.404 qpair failed and we were unable to recover it. 00:35:53.404 [2024-10-13 01:46:38.826823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.404 [2024-10-13 01:46:38.826870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.404 qpair failed and we were unable to recover it. 00:35:53.404 [2024-10-13 01:46:38.826997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.404 [2024-10-13 01:46:38.827027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.404 qpair failed and we were unable to recover it. 00:35:53.404 [2024-10-13 01:46:38.827154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.404 [2024-10-13 01:46:38.827181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.404 qpair failed and we were unable to recover it. 00:35:53.404 [2024-10-13 01:46:38.827324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.404 [2024-10-13 01:46:38.827350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.404 qpair failed and we were unable to recover it. 00:35:53.404 [2024-10-13 01:46:38.827468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.404 [2024-10-13 01:46:38.827503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.404 qpair failed and we were unable to recover it. 00:35:53.404 [2024-10-13 01:46:38.827620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.404 [2024-10-13 01:46:38.827650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.404 qpair failed and we were unable to recover it. 00:35:53.404 [2024-10-13 01:46:38.827748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.404 [2024-10-13 01:46:38.827777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.404 qpair failed and we were unable to recover it. 00:35:53.404 [2024-10-13 01:46:38.827868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.404 [2024-10-13 01:46:38.827897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.404 qpair failed and we were unable to recover it. 
00:35:53.404 [2024-10-13 01:46:38.827997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.404 [2024-10-13 01:46:38.828025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.404 qpair failed and we were unable to recover it. 00:35:53.404 [2024-10-13 01:46:38.828153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.404 [2024-10-13 01:46:38.828182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.404 qpair failed and we were unable to recover it. 00:35:53.404 [2024-10-13 01:46:38.828304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.404 [2024-10-13 01:46:38.828332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.404 qpair failed and we were unable to recover it. 00:35:53.404 [2024-10-13 01:46:38.828491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.404 [2024-10-13 01:46:38.828520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.404 qpair failed and we were unable to recover it. 00:35:53.404 [2024-10-13 01:46:38.828691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.404 [2024-10-13 01:46:38.828721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.404 qpair failed and we were unable to recover it. 00:35:53.404 [2024-10-13 01:46:38.828884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.404 [2024-10-13 01:46:38.828913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.404 qpair failed and we were unable to recover it. 00:35:53.404 [2024-10-13 01:46:38.829150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.404 [2024-10-13 01:46:38.829207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.404 qpair failed and we were unable to recover it. 00:35:53.404 [2024-10-13 01:46:38.829348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.404 [2024-10-13 01:46:38.829375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.404 qpair failed and we were unable to recover it. 00:35:53.404 [2024-10-13 01:46:38.829492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.404 [2024-10-13 01:46:38.829520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.404 qpair failed and we were unable to recover it. 00:35:53.404 [2024-10-13 01:46:38.829625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.404 [2024-10-13 01:46:38.829655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.404 qpair failed and we were unable to recover it. 
00:35:53.404 [2024-10-13 01:46:38.829802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.404 [2024-10-13 01:46:38.829845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.404 qpair failed and we were unable to recover it. 00:35:53.404 [2024-10-13 01:46:38.829982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.404 [2024-10-13 01:46:38.830026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.404 qpair failed and we were unable to recover it. 00:35:53.404 [2024-10-13 01:46:38.830180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.404 [2024-10-13 01:46:38.830207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.404 qpair failed and we were unable to recover it. 00:35:53.404 [2024-10-13 01:46:38.830355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.404 [2024-10-13 01:46:38.830382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.404 qpair failed and we were unable to recover it. 00:35:53.404 [2024-10-13 01:46:38.830478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.404 [2024-10-13 01:46:38.830505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.404 qpair failed and we were unable to recover it. 00:35:53.404 [2024-10-13 01:46:38.830627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.404 [2024-10-13 01:46:38.830654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.404 qpair failed and we were unable to recover it. 00:35:53.404 [2024-10-13 01:46:38.830791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.404 [2024-10-13 01:46:38.830835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.404 qpair failed and we were unable to recover it. 00:35:53.404 [2024-10-13 01:46:38.830959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.404 [2024-10-13 01:46:38.830987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.404 qpair failed and we were unable to recover it. 00:35:53.404 [2024-10-13 01:46:38.831105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.405 [2024-10-13 01:46:38.831133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.405 qpair failed and we were unable to recover it. 00:35:53.405 [2024-10-13 01:46:38.831213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.405 [2024-10-13 01:46:38.831239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.405 qpair failed and we were unable to recover it. 
00:35:53.405 [2024-10-13 01:46:38.831368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.405 [2024-10-13 01:46:38.831409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.405 qpair failed and we were unable to recover it. 00:35:53.405 [2024-10-13 01:46:38.831560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.405 [2024-10-13 01:46:38.831590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.405 qpair failed and we were unable to recover it. 00:35:53.405 [2024-10-13 01:46:38.831724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.405 [2024-10-13 01:46:38.831755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.405 qpair failed and we were unable to recover it. 00:35:53.405 [2024-10-13 01:46:38.831872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.405 [2024-10-13 01:46:38.831902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.405 qpair failed and we were unable to recover it. 00:35:53.405 [2024-10-13 01:46:38.832057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.405 [2024-10-13 01:46:38.832087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.405 qpair failed and we were unable to recover it. 00:35:53.405 [2024-10-13 01:46:38.832293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.405 [2024-10-13 01:46:38.832323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.405 qpair failed and we were unable to recover it. 00:35:53.405 [2024-10-13 01:46:38.832452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.405 [2024-10-13 01:46:38.832511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.405 qpair failed and we were unable to recover it. 00:35:53.405 [2024-10-13 01:46:38.832640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.405 [2024-10-13 01:46:38.832670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.405 qpair failed and we were unable to recover it. 00:35:53.405 [2024-10-13 01:46:38.832769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.405 [2024-10-13 01:46:38.832809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.405 qpair failed and we were unable to recover it. 00:35:53.405 [2024-10-13 01:46:38.832946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.405 [2024-10-13 01:46:38.832990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.405 qpair failed and we were unable to recover it. 
00:35:53.405 [2024-10-13 01:46:38.833167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.405 [2024-10-13 01:46:38.833220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.405 qpair failed and we were unable to recover it. 00:35:53.405 [2024-10-13 01:46:38.833335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.405 [2024-10-13 01:46:38.833362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.405 qpair failed and we were unable to recover it. 00:35:53.405 [2024-10-13 01:46:38.833484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.405 [2024-10-13 01:46:38.833512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.405 qpair failed and we were unable to recover it. 00:35:53.405 [2024-10-13 01:46:38.833595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.405 [2024-10-13 01:46:38.833623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.405 qpair failed and we were unable to recover it. 00:35:53.405 [2024-10-13 01:46:38.833811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.405 [2024-10-13 01:46:38.833855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.405 qpair failed and we were unable to recover it. 00:35:53.405 [2024-10-13 01:46:38.833959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.405 [2024-10-13 01:46:38.833989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.405 qpair failed and we were unable to recover it. 00:35:53.405 [2024-10-13 01:46:38.834154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.405 [2024-10-13 01:46:38.834207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.405 qpair failed and we were unable to recover it. 00:35:53.405 [2024-10-13 01:46:38.834336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.405 [2024-10-13 01:46:38.834362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.405 qpair failed and we were unable to recover it. 00:35:53.405 [2024-10-13 01:46:38.834480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.405 [2024-10-13 01:46:38.834507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.405 qpair failed and we were unable to recover it. 00:35:53.405 [2024-10-13 01:46:38.834617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.405 [2024-10-13 01:46:38.834643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.405 qpair failed and we were unable to recover it. 
00:35:53.405 [2024-10-13 01:46:38.834792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.405 [2024-10-13 01:46:38.834820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.405 qpair failed and we were unable to recover it. 00:35:53.405 [2024-10-13 01:46:38.834940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.405 [2024-10-13 01:46:38.834969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.405 qpair failed and we were unable to recover it. 00:35:53.405 [2024-10-13 01:46:38.835097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.405 [2024-10-13 01:46:38.835126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.405 qpair failed and we were unable to recover it. 00:35:53.405 [2024-10-13 01:46:38.835275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.405 [2024-10-13 01:46:38.835304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.405 qpair failed and we were unable to recover it. 00:35:53.405 [2024-10-13 01:46:38.835464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.405 [2024-10-13 01:46:38.835503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.405 qpair failed and we were unable to recover it. 00:35:53.405 [2024-10-13 01:46:38.835616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.405 [2024-10-13 01:46:38.835643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.405 qpair failed and we were unable to recover it. 00:35:53.405 [2024-10-13 01:46:38.835770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.405 [2024-10-13 01:46:38.835800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.405 qpair failed and we were unable to recover it. 00:35:53.405 [2024-10-13 01:46:38.835893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.405 [2024-10-13 01:46:38.835923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.405 qpair failed and we were unable to recover it. 00:35:53.405 [2024-10-13 01:46:38.836058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.405 [2024-10-13 01:46:38.836088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.405 qpair failed and we were unable to recover it. 00:35:53.405 [2024-10-13 01:46:38.836204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.405 [2024-10-13 01:46:38.836234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.405 qpair failed and we were unable to recover it. 
00:35:53.405 [2024-10-13 01:46:38.836385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.406 [2024-10-13 01:46:38.836415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.406 qpair failed and we were unable to recover it. 00:35:53.406 [2024-10-13 01:46:38.836582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.406 [2024-10-13 01:46:38.836622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.406 qpair failed and we were unable to recover it. 00:35:53.406 [2024-10-13 01:46:38.836728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.406 [2024-10-13 01:46:38.836758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.406 qpair failed and we were unable to recover it. 00:35:53.406 [2024-10-13 01:46:38.836885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.406 [2024-10-13 01:46:38.836913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.406 qpair failed and we were unable to recover it. 00:35:53.406 [2024-10-13 01:46:38.837029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.406 [2024-10-13 01:46:38.837058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.406 qpair failed and we were unable to recover it. 00:35:53.406 [2024-10-13 01:46:38.837180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.406 [2024-10-13 01:46:38.837208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.406 qpair failed and we were unable to recover it. 00:35:53.406 [2024-10-13 01:46:38.837305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.406 [2024-10-13 01:46:38.837336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.406 qpair failed and we were unable to recover it. 00:35:53.406 [2024-10-13 01:46:38.837476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.406 [2024-10-13 01:46:38.837508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.406 qpair failed and we were unable to recover it. 00:35:53.406 [2024-10-13 01:46:38.837622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.406 [2024-10-13 01:46:38.837648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.406 qpair failed and we were unable to recover it. 00:35:53.406 [2024-10-13 01:46:38.837728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.406 [2024-10-13 01:46:38.837754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.406 qpair failed and we were unable to recover it. 
00:35:53.406 [2024-10-13 01:46:38.837910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.406 [2024-10-13 01:46:38.837938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.406 qpair failed and we were unable to recover it. 00:35:53.406 [2024-10-13 01:46:38.838084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.406 [2024-10-13 01:46:38.838112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.406 qpair failed and we were unable to recover it. 00:35:53.406 [2024-10-13 01:46:38.838235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.406 [2024-10-13 01:46:38.838263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.406 qpair failed and we were unable to recover it. 00:35:53.406 [2024-10-13 01:46:38.838388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.406 [2024-10-13 01:46:38.838419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.406 qpair failed and we were unable to recover it. 00:35:53.406 [2024-10-13 01:46:38.838591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.406 [2024-10-13 01:46:38.838618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.406 qpair failed and we were unable to recover it. 00:35:53.406 [2024-10-13 01:46:38.838737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.406 [2024-10-13 01:46:38.838765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.406 qpair failed and we were unable to recover it. 00:35:53.406 [2024-10-13 01:46:38.838883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.406 [2024-10-13 01:46:38.838910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.406 qpair failed and we were unable to recover it. 00:35:53.406 [2024-10-13 01:46:38.839043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.406 [2024-10-13 01:46:38.839073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.406 qpair failed and we were unable to recover it. 00:35:53.406 [2024-10-13 01:46:38.839171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.406 [2024-10-13 01:46:38.839200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.406 qpair failed and we were unable to recover it. 00:35:53.406 [2024-10-13 01:46:38.839323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.406 [2024-10-13 01:46:38.839353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.406 qpair failed and we were unable to recover it. 
00:35:53.406 [2024-10-13 01:46:38.839484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.406 [2024-10-13 01:46:38.839525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.406 qpair failed and we were unable to recover it. 00:35:53.406 [2024-10-13 01:46:38.839656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.406 [2024-10-13 01:46:38.839685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.406 qpair failed and we were unable to recover it. 00:35:53.406 [2024-10-13 01:46:38.839826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.406 [2024-10-13 01:46:38.839870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.406 qpair failed and we were unable to recover it. 00:35:53.406 [2024-10-13 01:46:38.840003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.406 [2024-10-13 01:46:38.840033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.406 qpair failed and we were unable to recover it. 00:35:53.406 [2024-10-13 01:46:38.840181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.406 [2024-10-13 01:46:38.840227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.406 qpair failed and we were unable to recover it. 00:35:53.406 [2024-10-13 01:46:38.840324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.406 [2024-10-13 01:46:38.840351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.406 qpair failed and we were unable to recover it. 00:35:53.406 [2024-10-13 01:46:38.840488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.406 [2024-10-13 01:46:38.840516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.406 qpair failed and we were unable to recover it. 00:35:53.406 [2024-10-13 01:46:38.840609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.406 [2024-10-13 01:46:38.840634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.406 qpair failed and we were unable to recover it. 00:35:53.406 [2024-10-13 01:46:38.840740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.406 [2024-10-13 01:46:38.840768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.406 qpair failed and we were unable to recover it. 00:35:53.406 [2024-10-13 01:46:38.840887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.406 [2024-10-13 01:46:38.840916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.406 qpair failed and we were unable to recover it. 
00:35:53.406 [2024-10-13 01:46:38.841077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.406 [2024-10-13 01:46:38.841129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.406 qpair failed and we were unable to recover it. 00:35:53.406 [2024-10-13 01:46:38.841282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.406 [2024-10-13 01:46:38.841311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.406 qpair failed and we were unable to recover it. 00:35:53.406 [2024-10-13 01:46:38.841431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.406 [2024-10-13 01:46:38.841460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.406 qpair failed and we were unable to recover it. 00:35:53.406 [2024-10-13 01:46:38.841687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.406 [2024-10-13 01:46:38.841713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.406 qpair failed and we were unable to recover it. 00:35:53.406 [2024-10-13 01:46:38.841814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.406 [2024-10-13 01:46:38.841843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.406 qpair failed and we were unable to recover it. 00:35:53.406 [2024-10-13 01:46:38.841996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.406 [2024-10-13 01:46:38.842068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.406 qpair failed and we were unable to recover it. 00:35:53.406 [2024-10-13 01:46:38.842168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.406 [2024-10-13 01:46:38.842196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.406 qpair failed and we were unable to recover it. 00:35:53.406 [2024-10-13 01:46:38.842321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.406 [2024-10-13 01:46:38.842349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.406 qpair failed and we were unable to recover it. 00:35:53.406 [2024-10-13 01:46:38.842476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.406 [2024-10-13 01:46:38.842521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.406 qpair failed and we were unable to recover it. 00:35:53.406 [2024-10-13 01:46:38.842631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.406 [2024-10-13 01:46:38.842656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.406 qpair failed and we were unable to recover it. 
00:35:53.406 [2024-10-13 01:46:38.842748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.406 [2024-10-13 01:46:38.842778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.407 qpair failed and we were unable to recover it. 00:35:53.407 [2024-10-13 01:46:38.842919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.407 [2024-10-13 01:46:38.842962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.407 qpair failed and we were unable to recover it. 00:35:53.407 [2024-10-13 01:46:38.843091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.407 [2024-10-13 01:46:38.843122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.407 qpair failed and we were unable to recover it. 00:35:53.407 [2024-10-13 01:46:38.843232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.407 [2024-10-13 01:46:38.843259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.407 qpair failed and we were unable to recover it. 00:35:53.407 [2024-10-13 01:46:38.843400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.407 [2024-10-13 01:46:38.843427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.407 qpair failed and we were unable to recover it. 00:35:53.407 [2024-10-13 01:46:38.843551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.407 [2024-10-13 01:46:38.843578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.407 qpair failed and we were unable to recover it. 00:35:53.407 [2024-10-13 01:46:38.843712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.407 [2024-10-13 01:46:38.843743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.407 qpair failed and we were unable to recover it. 00:35:53.407 [2024-10-13 01:46:38.843857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.407 [2024-10-13 01:46:38.843908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.407 qpair failed and we were unable to recover it. 00:35:53.407 [2024-10-13 01:46:38.844014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.407 [2024-10-13 01:46:38.844047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.407 qpair failed and we were unable to recover it. 00:35:53.407 [2024-10-13 01:46:38.844182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.407 [2024-10-13 01:46:38.844209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.407 qpair failed and we were unable to recover it. 
00:35:53.407 [2024-10-13 01:46:38.844326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.407 [2024-10-13 01:46:38.844354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.407 qpair failed and we were unable to recover it. 00:35:53.407 [2024-10-13 01:46:38.844511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.407 [2024-10-13 01:46:38.844551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.407 qpair failed and we were unable to recover it. 00:35:53.407 [2024-10-13 01:46:38.844639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.407 [2024-10-13 01:46:38.844667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.407 qpair failed and we were unable to recover it. 00:35:53.407 [2024-10-13 01:46:38.844770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.407 [2024-10-13 01:46:38.844801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.407 qpair failed and we were unable to recover it. 00:35:53.407 [2024-10-13 01:46:38.844992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.407 [2024-10-13 01:46:38.845022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.407 qpair failed and we were unable to recover it. 00:35:53.407 [2024-10-13 01:46:38.845206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.407 [2024-10-13 01:46:38.845233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.407 qpair failed and we were unable to recover it. 00:35:53.407 [2024-10-13 01:46:38.845317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.407 [2024-10-13 01:46:38.845343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.407 qpair failed and we were unable to recover it. 00:35:53.407 [2024-10-13 01:46:38.845454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.407 [2024-10-13 01:46:38.845489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.407 qpair failed and we were unable to recover it. 00:35:53.407 [2024-10-13 01:46:38.845600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.407 [2024-10-13 01:46:38.845626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.407 qpair failed and we were unable to recover it. 00:35:53.407 [2024-10-13 01:46:38.845711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.407 [2024-10-13 01:46:38.845738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.407 qpair failed and we were unable to recover it. 
00:35:53.407 [2024-10-13 01:46:38.845950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.407 [2024-10-13 01:46:38.846004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.407 qpair failed and we were unable to recover it. 00:35:53.407 [2024-10-13 01:46:38.846178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.407 [2024-10-13 01:46:38.846236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.407 qpair failed and we were unable to recover it. 00:35:53.407 [2024-10-13 01:46:38.846360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.407 [2024-10-13 01:46:38.846391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.407 qpair failed and we were unable to recover it. 00:35:53.407 [2024-10-13 01:46:38.846567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.407 [2024-10-13 01:46:38.846598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.407 qpair failed and we were unable to recover it. 00:35:53.407 [2024-10-13 01:46:38.846723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.407 [2024-10-13 01:46:38.846753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.407 qpair failed and we were unable to recover it. 00:35:53.407 [2024-10-13 01:46:38.846883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.407 [2024-10-13 01:46:38.846913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.407 qpair failed and we were unable to recover it. 00:35:53.407 [2024-10-13 01:46:38.847075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.407 [2024-10-13 01:46:38.847105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.407 qpair failed and we were unable to recover it. 00:35:53.407 [2024-10-13 01:46:38.847240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.407 [2024-10-13 01:46:38.847270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.407 qpair failed and we were unable to recover it. 00:35:53.407 [2024-10-13 01:46:38.847408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.407 [2024-10-13 01:46:38.847435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.407 qpair failed and we were unable to recover it. 00:35:53.407 [2024-10-13 01:46:38.847539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.407 [2024-10-13 01:46:38.847567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.407 qpair failed and we were unable to recover it. 
00:35:53.407 [2024-10-13 01:46:38.847649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.407 [2024-10-13 01:46:38.847694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.407 qpair failed and we were unable to recover it. 00:35:53.407 [2024-10-13 01:46:38.847794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.407 [2024-10-13 01:46:38.847824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.407 qpair failed and we were unable to recover it. 00:35:53.407 [2024-10-13 01:46:38.847975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.407 [2024-10-13 01:46:38.848005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.407 qpair failed and we were unable to recover it. 00:35:53.407 [2024-10-13 01:46:38.848105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.407 [2024-10-13 01:46:38.848135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.407 qpair failed and we were unable to recover it. 00:35:53.407 [2024-10-13 01:46:38.848260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.407 [2024-10-13 01:46:38.848289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.407 qpair failed and we were unable to recover it. 00:35:53.407 [2024-10-13 01:46:38.848444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.407 [2024-10-13 01:46:38.848482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.407 qpair failed and we were unable to recover it. 00:35:53.407 [2024-10-13 01:46:38.848601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.407 [2024-10-13 01:46:38.848628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.407 qpair failed and we were unable to recover it. 00:35:53.407 [2024-10-13 01:46:38.848758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.407 [2024-10-13 01:46:38.848803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.407 qpair failed and we were unable to recover it. 00:35:53.407 [2024-10-13 01:46:38.848937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.408 [2024-10-13 01:46:38.848982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.408 qpair failed and we were unable to recover it. 00:35:53.408 [2024-10-13 01:46:38.849106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.408 [2024-10-13 01:46:38.849201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.408 qpair failed and we were unable to recover it. 
00:35:53.408 [2024-10-13 01:46:38.849314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.408 [2024-10-13 01:46:38.849341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.408 qpair failed and we were unable to recover it. 00:35:53.408 [2024-10-13 01:46:38.849479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.408 [2024-10-13 01:46:38.849508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.408 qpair failed and we were unable to recover it. 00:35:53.408 [2024-10-13 01:46:38.849604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.408 [2024-10-13 01:46:38.849661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.408 qpair failed and we were unable to recover it. 00:35:53.408 [2024-10-13 01:46:38.849771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.408 [2024-10-13 01:46:38.849802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.408 qpair failed and we were unable to recover it. 00:35:53.408 [2024-10-13 01:46:38.849887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.408 [2024-10-13 01:46:38.849917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.408 qpair failed and we were unable to recover it. 00:35:53.408 [2024-10-13 01:46:38.850093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.408 [2024-10-13 01:46:38.850122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.408 qpair failed and we were unable to recover it. 00:35:53.408 [2024-10-13 01:46:38.850301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.408 [2024-10-13 01:46:38.850359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.408 qpair failed and we were unable to recover it. 00:35:53.408 [2024-10-13 01:46:38.850478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.408 [2024-10-13 01:46:38.850510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.408 qpair failed and we were unable to recover it. 00:35:53.408 [2024-10-13 01:46:38.850627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.408 [2024-10-13 01:46:38.850654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.408 qpair failed and we were unable to recover it. 00:35:53.408 [2024-10-13 01:46:38.850733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.408 [2024-10-13 01:46:38.850780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.408 qpair failed and we were unable to recover it. 
00:35:53.408 [2024-10-13 01:46:38.850900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.408 [2024-10-13 01:46:38.850929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.408 qpair failed and we were unable to recover it. 00:35:53.408 [2024-10-13 01:46:38.851046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.408 [2024-10-13 01:46:38.851074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.408 qpair failed and we were unable to recover it. 00:35:53.408 [2024-10-13 01:46:38.851177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.408 [2024-10-13 01:46:38.851206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.408 qpair failed and we were unable to recover it. 00:35:53.408 [2024-10-13 01:46:38.851334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.408 [2024-10-13 01:46:38.851360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.408 qpair failed and we were unable to recover it. 00:35:53.408 [2024-10-13 01:46:38.851513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.408 [2024-10-13 01:46:38.851554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.408 qpair failed and we were unable to recover it. 00:35:53.408 [2024-10-13 01:46:38.851691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.408 [2024-10-13 01:46:38.851721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.408 qpair failed and we were unable to recover it. 00:35:53.408 [2024-10-13 01:46:38.851855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.408 [2024-10-13 01:46:38.851886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.408 qpair failed and we were unable to recover it. 00:35:53.408 [2024-10-13 01:46:38.852041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.408 [2024-10-13 01:46:38.852071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.408 qpair failed and we were unable to recover it. 00:35:53.408 [2024-10-13 01:46:38.852152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.408 [2024-10-13 01:46:38.852182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.408 qpair failed and we were unable to recover it. 00:35:53.408 [2024-10-13 01:46:38.852384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.408 [2024-10-13 01:46:38.852414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.408 qpair failed and we were unable to recover it. 
00:35:53.408 [2024-10-13 01:46:38.852538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.408 [2024-10-13 01:46:38.852566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.408 qpair failed and we were unable to recover it. 00:35:53.408 [2024-10-13 01:46:38.852694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.408 [2024-10-13 01:46:38.852737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.408 qpair failed and we were unable to recover it. 00:35:53.408 [2024-10-13 01:46:38.852909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.408 [2024-10-13 01:46:38.852985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.408 qpair failed and we were unable to recover it. 00:35:53.408 [2024-10-13 01:46:38.853166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.408 [2024-10-13 01:46:38.853221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.408 qpair failed and we were unable to recover it. 00:35:53.408 [2024-10-13 01:46:38.853308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.408 [2024-10-13 01:46:38.853337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.408 qpair failed and we were unable to recover it. 00:35:53.408 [2024-10-13 01:46:38.853460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.408 [2024-10-13 01:46:38.853511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.408 qpair failed and we were unable to recover it. 00:35:53.408 [2024-10-13 01:46:38.853622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.408 [2024-10-13 01:46:38.853648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.408 qpair failed and we were unable to recover it. 00:35:53.408 [2024-10-13 01:46:38.853786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.408 [2024-10-13 01:46:38.853828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.408 qpair failed and we were unable to recover it. 00:35:53.408 [2024-10-13 01:46:38.853926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.408 [2024-10-13 01:46:38.853955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.408 qpair failed and we were unable to recover it. 00:35:53.408 [2024-10-13 01:46:38.854110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.408 [2024-10-13 01:46:38.854137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.408 qpair failed and we were unable to recover it. 
00:35:53.408 [2024-10-13 01:46:38.854279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.408 [2024-10-13 01:46:38.854321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.408 qpair failed and we were unable to recover it. 00:35:53.408 [2024-10-13 01:46:38.854475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.408 [2024-10-13 01:46:38.854519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.408 qpair failed and we were unable to recover it. 00:35:53.408 [2024-10-13 01:46:38.854636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.408 [2024-10-13 01:46:38.854662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.408 qpair failed and we were unable to recover it. 00:35:53.408 [2024-10-13 01:46:38.854758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.408 [2024-10-13 01:46:38.854787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.409 qpair failed and we were unable to recover it. 00:35:53.409 [2024-10-13 01:46:38.854944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.409 [2024-10-13 01:46:38.854985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.409 qpair failed and we were unable to recover it. 00:35:53.409 [2024-10-13 01:46:38.855122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.409 [2024-10-13 01:46:38.855153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.409 qpair failed and we were unable to recover it. 00:35:53.409 [2024-10-13 01:46:38.855282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.409 [2024-10-13 01:46:38.855312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.409 qpair failed and we were unable to recover it. 00:35:53.409 [2024-10-13 01:46:38.855446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.409 [2024-10-13 01:46:38.855483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.409 qpair failed and we were unable to recover it. 00:35:53.409 [2024-10-13 01:46:38.855651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.409 [2024-10-13 01:46:38.855678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.409 qpair failed and we were unable to recover it. 00:35:53.409 [2024-10-13 01:46:38.855760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.409 [2024-10-13 01:46:38.855787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.409 qpair failed and we were unable to recover it. 
00:35:53.409 [2024-10-13 01:46:38.855933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.409 [2024-10-13 01:46:38.855961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.409 qpair failed and we were unable to recover it. 00:35:53.409 [2024-10-13 01:46:38.856068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.409 [2024-10-13 01:46:38.856099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.409 qpair failed and we were unable to recover it. 00:35:53.409 [2024-10-13 01:46:38.856200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.409 [2024-10-13 01:46:38.856231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.409 qpair failed and we were unable to recover it. 00:35:53.409 [2024-10-13 01:46:38.856356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.409 [2024-10-13 01:46:38.856384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.409 qpair failed and we were unable to recover it. 00:35:53.409 [2024-10-13 01:46:38.856558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.409 [2024-10-13 01:46:38.856598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.409 qpair failed and we were unable to recover it. 00:35:53.409 [2024-10-13 01:46:38.856724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.409 [2024-10-13 01:46:38.856752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.409 qpair failed and we were unable to recover it. 00:35:53.409 [2024-10-13 01:46:38.856884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.409 [2024-10-13 01:46:38.856930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.409 qpair failed and we were unable to recover it. 00:35:53.409 [2024-10-13 01:46:38.857064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.409 [2024-10-13 01:46:38.857109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.409 qpair failed and we were unable to recover it. 00:35:53.409 [2024-10-13 01:46:38.857228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.409 [2024-10-13 01:46:38.857255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.409 qpair failed and we were unable to recover it. 00:35:53.409 [2024-10-13 01:46:38.857342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.409 [2024-10-13 01:46:38.857370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.409 qpair failed and we were unable to recover it. 
00:35:53.409 [2024-10-13 01:46:38.857488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.409 [2024-10-13 01:46:38.857516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420
00:35:53.409 qpair failed and we were unable to recover it.
00:35:53.409 [2024-10-13 01:46:38.857645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.409 [2024-10-13 01:46:38.857673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420
00:35:53.409 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111, then nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=<handle> with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats for every remaining reconnect attempt in this window, wall-clock 01:46:38.857790 through 01:46:38.892110, elapsed 00:35:53.409 through 00:35:53.415, cycling over tqpair handles 0x7f483c000b90, 0x7f4830000b90, 0x7f4834000b90, and 0x1d44b60 ...]
00:35:53.415 [2024-10-13 01:46:38.892277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.415 [2024-10-13 01:46:38.892337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.415 qpair failed and we were unable to recover it. 00:35:53.415 [2024-10-13 01:46:38.892457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.415 [2024-10-13 01:46:38.892492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.415 qpair failed and we were unable to recover it. 00:35:53.415 [2024-10-13 01:46:38.892596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.415 [2024-10-13 01:46:38.892624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.415 qpair failed and we were unable to recover it. 00:35:53.415 [2024-10-13 01:46:38.892716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.415 [2024-10-13 01:46:38.892744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.415 qpair failed and we were unable to recover it. 00:35:53.415 [2024-10-13 01:46:38.892912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.415 [2024-10-13 01:46:38.892942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.415 qpair failed and we were unable to recover it. 00:35:53.415 [2024-10-13 01:46:38.893127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.415 [2024-10-13 01:46:38.893181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.415 qpair failed and we were unable to recover it. 00:35:53.415 [2024-10-13 01:46:38.893337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.415 [2024-10-13 01:46:38.893367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.415 qpair failed and we were unable to recover it. 00:35:53.415 [2024-10-13 01:46:38.893477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.415 [2024-10-13 01:46:38.893507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.415 qpair failed and we were unable to recover it. 00:35:53.415 [2024-10-13 01:46:38.893625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.415 [2024-10-13 01:46:38.893652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.415 qpair failed and we were unable to recover it. 00:35:53.415 [2024-10-13 01:46:38.893770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.415 [2024-10-13 01:46:38.893798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.415 qpair failed and we were unable to recover it. 
00:35:53.415 [2024-10-13 01:46:38.893937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.415 [2024-10-13 01:46:38.893994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.415 qpair failed and we were unable to recover it. 00:35:53.415 [2024-10-13 01:46:38.894122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.415 [2024-10-13 01:46:38.894167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.415 qpair failed and we were unable to recover it. 00:35:53.415 [2024-10-13 01:46:38.894445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.415 [2024-10-13 01:46:38.894554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.415 qpair failed and we were unable to recover it. 00:35:53.415 [2024-10-13 01:46:38.894653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.415 [2024-10-13 01:46:38.894682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.415 qpair failed and we were unable to recover it. 00:35:53.415 [2024-10-13 01:46:38.894784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.415 [2024-10-13 01:46:38.894811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.415 qpair failed and we were unable to recover it. 00:35:53.415 [2024-10-13 01:46:38.894939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.415 [2024-10-13 01:46:38.894974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.415 qpair failed and we were unable to recover it. 00:35:53.415 [2024-10-13 01:46:38.895092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.415 [2024-10-13 01:46:38.895154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.415 qpair failed and we were unable to recover it. 00:35:53.415 [2024-10-13 01:46:38.895338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.415 [2024-10-13 01:46:38.895363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.415 qpair failed and we were unable to recover it. 00:35:53.415 [2024-10-13 01:46:38.895493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.415 [2024-10-13 01:46:38.895520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.415 qpair failed and we were unable to recover it. 00:35:53.415 [2024-10-13 01:46:38.895650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.415 [2024-10-13 01:46:38.895679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.415 qpair failed and we were unable to recover it. 
00:35:53.415 [2024-10-13 01:46:38.895768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.415 [2024-10-13 01:46:38.895796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.415 qpair failed and we were unable to recover it. 00:35:53.415 [2024-10-13 01:46:38.895959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.415 [2024-10-13 01:46:38.895990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.415 qpair failed and we were unable to recover it. 00:35:53.415 [2024-10-13 01:46:38.896178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.415 [2024-10-13 01:46:38.896234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.415 qpair failed and we were unable to recover it. 00:35:53.415 [2024-10-13 01:46:38.896363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.415 [2024-10-13 01:46:38.896393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.415 qpair failed and we were unable to recover it. 00:35:53.415 [2024-10-13 01:46:38.896524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.415 [2024-10-13 01:46:38.896551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.415 qpair failed and we were unable to recover it. 00:35:53.415 [2024-10-13 01:46:38.896669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.415 [2024-10-13 01:46:38.896696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.415 qpair failed and we were unable to recover it. 00:35:53.415 [2024-10-13 01:46:38.896815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.415 [2024-10-13 01:46:38.896847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.415 qpair failed and we were unable to recover it. 00:35:53.415 [2024-10-13 01:46:38.896989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.415 [2024-10-13 01:46:38.897039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.415 qpair failed and we were unable to recover it. 00:35:53.415 [2024-10-13 01:46:38.897177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.415 [2024-10-13 01:46:38.897210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.415 qpair failed and we were unable to recover it. 00:35:53.415 [2024-10-13 01:46:38.897327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.415 [2024-10-13 01:46:38.897356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.415 qpair failed and we were unable to recover it. 
00:35:53.415 [2024-10-13 01:46:38.897481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.415 [2024-10-13 01:46:38.897526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.415 qpair failed and we were unable to recover it. 00:35:53.415 [2024-10-13 01:46:38.897669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.415 [2024-10-13 01:46:38.897696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.415 qpair failed and we were unable to recover it. 00:35:53.415 [2024-10-13 01:46:38.897829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.415 [2024-10-13 01:46:38.897860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.415 qpair failed and we were unable to recover it. 00:35:53.415 [2024-10-13 01:46:38.897976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.415 [2024-10-13 01:46:38.898005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.415 qpair failed and we were unable to recover it. 00:35:53.415 [2024-10-13 01:46:38.898123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.415 [2024-10-13 01:46:38.898152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.415 qpair failed and we were unable to recover it. 00:35:53.415 [2024-10-13 01:46:38.898304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.415 [2024-10-13 01:46:38.898336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.416 qpair failed and we were unable to recover it. 00:35:53.416 [2024-10-13 01:46:38.898475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.416 [2024-10-13 01:46:38.898521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.416 qpair failed and we were unable to recover it. 00:35:53.416 [2024-10-13 01:46:38.898615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.416 [2024-10-13 01:46:38.898642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.416 qpair failed and we were unable to recover it. 00:35:53.416 [2024-10-13 01:46:38.898793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.416 [2024-10-13 01:46:38.898820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.416 qpair failed and we were unable to recover it. 00:35:53.416 [2024-10-13 01:46:38.898931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.416 [2024-10-13 01:46:38.898958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.416 qpair failed and we were unable to recover it. 
00:35:53.416 [2024-10-13 01:46:38.899106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.416 [2024-10-13 01:46:38.899154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.416 qpair failed and we were unable to recover it. 00:35:53.416 [2024-10-13 01:46:38.899333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.416 [2024-10-13 01:46:38.899372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.416 qpair failed and we were unable to recover it. 00:35:53.416 [2024-10-13 01:46:38.899505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.416 [2024-10-13 01:46:38.899537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.416 qpair failed and we were unable to recover it. 00:35:53.416 [2024-10-13 01:46:38.899646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.416 [2024-10-13 01:46:38.899672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.416 qpair failed and we were unable to recover it. 00:35:53.416 [2024-10-13 01:46:38.899877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.416 [2024-10-13 01:46:38.899936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.416 qpair failed and we were unable to recover it. 00:35:53.416 [2024-10-13 01:46:38.900038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.416 [2024-10-13 01:46:38.900068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.416 qpair failed and we were unable to recover it. 00:35:53.416 [2024-10-13 01:46:38.900197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.416 [2024-10-13 01:46:38.900228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.416 qpair failed and we were unable to recover it. 00:35:53.416 [2024-10-13 01:46:38.900373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.416 [2024-10-13 01:46:38.900400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.416 qpair failed and we were unable to recover it. 00:35:53.416 [2024-10-13 01:46:38.900510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.416 [2024-10-13 01:46:38.900537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.416 qpair failed and we were unable to recover it. 00:35:53.416 [2024-10-13 01:46:38.900678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.416 [2024-10-13 01:46:38.900706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.416 qpair failed and we were unable to recover it. 
00:35:53.416 [2024-10-13 01:46:38.900849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.416 [2024-10-13 01:46:38.900880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.416 qpair failed and we were unable to recover it. 00:35:53.416 [2024-10-13 01:46:38.901003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.416 [2024-10-13 01:46:38.901032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.416 qpair failed and we were unable to recover it. 00:35:53.416 [2024-10-13 01:46:38.901184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.416 [2024-10-13 01:46:38.901215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.416 qpair failed and we were unable to recover it. 00:35:53.416 [2024-10-13 01:46:38.901324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.416 [2024-10-13 01:46:38.901353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.416 qpair failed and we were unable to recover it. 00:35:53.416 [2024-10-13 01:46:38.901490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.416 [2024-10-13 01:46:38.901530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.416 qpair failed and we were unable to recover it. 00:35:53.416 [2024-10-13 01:46:38.901654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.416 [2024-10-13 01:46:38.901682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.416 qpair failed and we were unable to recover it. 00:35:53.416 [2024-10-13 01:46:38.901802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.416 [2024-10-13 01:46:38.901831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.416 qpair failed and we were unable to recover it. 00:35:53.416 [2024-10-13 01:46:38.901991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.416 [2024-10-13 01:46:38.902022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.416 qpair failed and we were unable to recover it. 00:35:53.416 [2024-10-13 01:46:38.902113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.416 [2024-10-13 01:46:38.902143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.416 qpair failed and we were unable to recover it. 00:35:53.416 [2024-10-13 01:46:38.902245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.416 [2024-10-13 01:46:38.902275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.416 qpair failed and we were unable to recover it. 
00:35:53.416 [2024-10-13 01:46:38.902454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.416 [2024-10-13 01:46:38.902508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.416 qpair failed and we were unable to recover it. 00:35:53.416 [2024-10-13 01:46:38.902631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.416 [2024-10-13 01:46:38.902660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.416 qpair failed and we were unable to recover it. 00:35:53.416 [2024-10-13 01:46:38.902796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.416 [2024-10-13 01:46:38.902841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.416 qpair failed and we were unable to recover it. 00:35:53.416 [2024-10-13 01:46:38.903005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.416 [2024-10-13 01:46:38.903051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.416 qpair failed and we were unable to recover it. 00:35:53.416 [2024-10-13 01:46:38.903211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.416 [2024-10-13 01:46:38.903265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.416 qpair failed and we were unable to recover it. 00:35:53.416 [2024-10-13 01:46:38.903384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.416 [2024-10-13 01:46:38.903414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.416 qpair failed and we were unable to recover it. 00:35:53.416 [2024-10-13 01:46:38.903505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.416 [2024-10-13 01:46:38.903533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.416 qpair failed and we were unable to recover it. 00:35:53.416 [2024-10-13 01:46:38.903696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.416 [2024-10-13 01:46:38.903728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.416 qpair failed and we were unable to recover it. 00:35:53.416 [2024-10-13 01:46:38.903879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.416 [2024-10-13 01:46:38.903942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.416 qpair failed and we were unable to recover it. 00:35:53.416 [2024-10-13 01:46:38.904156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.416 [2024-10-13 01:46:38.904208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.416 qpair failed and we were unable to recover it. 
00:35:53.416 [2024-10-13 01:46:38.904333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.416 [2024-10-13 01:46:38.904362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.416 qpair failed and we were unable to recover it. 00:35:53.416 [2024-10-13 01:46:38.904495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.416 [2024-10-13 01:46:38.904524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.416 qpair failed and we were unable to recover it. 00:35:53.416 [2024-10-13 01:46:38.904636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.416 [2024-10-13 01:46:38.904667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.416 qpair failed and we were unable to recover it. 00:35:53.417 [2024-10-13 01:46:38.904819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.417 [2024-10-13 01:46:38.904849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.417 qpair failed and we were unable to recover it. 00:35:53.417 [2024-10-13 01:46:38.905036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.417 [2024-10-13 01:46:38.905065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.417 qpair failed and we were unable to recover it. 00:35:53.417 [2024-10-13 01:46:38.905199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.417 [2024-10-13 01:46:38.905226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.417 qpair failed and we were unable to recover it. 00:35:53.417 [2024-10-13 01:46:38.905312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.417 [2024-10-13 01:46:38.905339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.417 qpair failed and we were unable to recover it. 00:35:53.417 [2024-10-13 01:46:38.905452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.417 [2024-10-13 01:46:38.905492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.417 qpair failed and we were unable to recover it. 00:35:53.417 [2024-10-13 01:46:38.905609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.417 [2024-10-13 01:46:38.905636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.417 qpair failed and we were unable to recover it. 00:35:53.417 [2024-10-13 01:46:38.905722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.417 [2024-10-13 01:46:38.905749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.417 qpair failed and we were unable to recover it. 
00:35:53.417 [2024-10-13 01:46:38.905873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.417 [2024-10-13 01:46:38.905899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.417 qpair failed and we were unable to recover it. 00:35:53.417 [2024-10-13 01:46:38.905988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.417 [2024-10-13 01:46:38.906016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.417 qpair failed and we were unable to recover it. 00:35:53.417 [2024-10-13 01:46:38.906157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.417 [2024-10-13 01:46:38.906189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.417 qpair failed and we were unable to recover it. 00:35:53.417 [2024-10-13 01:46:38.906300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.417 [2024-10-13 01:46:38.906328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.417 qpair failed and we were unable to recover it. 00:35:53.417 [2024-10-13 01:46:38.906418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.417 [2024-10-13 01:46:38.906444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.417 qpair failed and we were unable to recover it. 00:35:53.417 [2024-10-13 01:46:38.906604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.417 [2024-10-13 01:46:38.906648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.417 qpair failed and we were unable to recover it. 00:35:53.417 [2024-10-13 01:46:38.906787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.417 [2024-10-13 01:46:38.906820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.417 qpair failed and we were unable to recover it. 00:35:53.417 [2024-10-13 01:46:38.906917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.417 [2024-10-13 01:46:38.906947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.417 qpair failed and we were unable to recover it. 00:35:53.417 [2024-10-13 01:46:38.907070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.417 [2024-10-13 01:46:38.907133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.417 qpair failed and we were unable to recover it. 00:35:53.417 [2024-10-13 01:46:38.907292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.417 [2024-10-13 01:46:38.907321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.417 qpair failed and we were unable to recover it. 
00:35:53.417 [2024-10-13 01:46:38.907410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.417 [2024-10-13 01:46:38.907436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.417 qpair failed and we were unable to recover it. 00:35:53.417 [2024-10-13 01:46:38.907543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.417 [2024-10-13 01:46:38.907571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.417 qpair failed and we were unable to recover it. 00:35:53.417 [2024-10-13 01:46:38.907681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.417 [2024-10-13 01:46:38.907710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.417 qpair failed and we were unable to recover it. 00:35:53.417 [2024-10-13 01:46:38.907807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.417 [2024-10-13 01:46:38.907837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.417 qpair failed and we were unable to recover it. 00:35:53.417 [2024-10-13 01:46:38.907933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.417 [2024-10-13 01:46:38.907963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.417 qpair failed and we were unable to recover it. 00:35:53.417 [2024-10-13 01:46:38.908142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.417 [2024-10-13 01:46:38.908189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.417 qpair failed and we were unable to recover it. 00:35:53.417 [2024-10-13 01:46:38.908286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.417 [2024-10-13 01:46:38.908313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.417 qpair failed and we were unable to recover it. 00:35:53.417 [2024-10-13 01:46:38.908431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.417 [2024-10-13 01:46:38.908459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.417 qpair failed and we were unable to recover it. 00:35:53.417 [2024-10-13 01:46:38.908553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.417 [2024-10-13 01:46:38.908579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.417 qpair failed and we were unable to recover it. 00:35:53.417 [2024-10-13 01:46:38.908679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.417 [2024-10-13 01:46:38.908708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.417 qpair failed and we were unable to recover it. 
00:35:53.417 [2024-10-13 01:46:38.908849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.417 [2024-10-13 01:46:38.908898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.417 qpair failed and we were unable to recover it. 00:35:53.417 [2024-10-13 01:46:38.909029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.417 [2024-10-13 01:46:38.909055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.417 qpair failed and we were unable to recover it. 00:35:53.417 [2024-10-13 01:46:38.909148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.417 [2024-10-13 01:46:38.909174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.417 qpair failed and we were unable to recover it. 00:35:53.417 [2024-10-13 01:46:38.909264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.417 [2024-10-13 01:46:38.909291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.417 qpair failed and we were unable to recover it. 00:35:53.417 [2024-10-13 01:46:38.909434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.417 [2024-10-13 01:46:38.909461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.417 qpair failed and we were unable to recover it. 00:35:53.417 [2024-10-13 01:46:38.909583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.417 [2024-10-13 01:46:38.909623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.417 qpair failed and we were unable to recover it. 00:35:53.417 [2024-10-13 01:46:38.909738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.417 [2024-10-13 01:46:38.909765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.417 qpair failed and we were unable to recover it. 00:35:53.417 [2024-10-13 01:46:38.909893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.417 [2024-10-13 01:46:38.909920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.417 qpair failed and we were unable to recover it. 00:35:53.417 [2024-10-13 01:46:38.910018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.417 [2024-10-13 01:46:38.910044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.417 qpair failed and we were unable to recover it. 00:35:53.417 [2024-10-13 01:46:38.910181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.417 [2024-10-13 01:46:38.910212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.417 qpair failed and we were unable to recover it. 
00:35:53.417 [2024-10-13 01:46:38.910326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.418 [2024-10-13 01:46:38.910353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.418 qpair failed and we were unable to recover it. 00:35:53.418 [2024-10-13 01:46:38.910475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.418 [2024-10-13 01:46:38.910502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.418 qpair failed and we were unable to recover it. 00:35:53.418 [2024-10-13 01:46:38.910584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.418 [2024-10-13 01:46:38.910611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.418 qpair failed and we were unable to recover it. 00:35:53.418 [2024-10-13 01:46:38.910713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.418 [2024-10-13 01:46:38.910742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.418 qpair failed and we were unable to recover it. 00:35:53.418 [2024-10-13 01:46:38.910873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.418 [2024-10-13 01:46:38.910919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.418 qpair failed and we were unable to recover it. 00:35:53.418 [2024-10-13 01:46:38.911030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.418 [2024-10-13 01:46:38.911075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.418 qpair failed and we were unable to recover it. 00:35:53.418 [2024-10-13 01:46:38.911190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.418 [2024-10-13 01:46:38.911216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.418 qpair failed and we were unable to recover it. 00:35:53.418 [2024-10-13 01:46:38.911343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.418 [2024-10-13 01:46:38.911383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.418 qpair failed and we were unable to recover it. 00:35:53.418 [2024-10-13 01:46:38.911524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.418 [2024-10-13 01:46:38.911553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.418 qpair failed and we were unable to recover it. 00:35:53.418 [2024-10-13 01:46:38.911699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.418 [2024-10-13 01:46:38.911726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.418 qpair failed and we were unable to recover it. 
00:35:53.418 [2024-10-13 01:46:38.911898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.418 [2024-10-13 01:46:38.911954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.418 qpair failed and we were unable to recover it. 00:35:53.418 [2024-10-13 01:46:38.912119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.418 [2024-10-13 01:46:38.912170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.418 qpair failed and we were unable to recover it. 00:35:53.418 [2024-10-13 01:46:38.912264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.418 [2024-10-13 01:46:38.912293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.418 qpair failed and we were unable to recover it. 00:35:53.418 [2024-10-13 01:46:38.912443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.418 [2024-10-13 01:46:38.912477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.418 qpair failed and we were unable to recover it. 00:35:53.418 [2024-10-13 01:46:38.912558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.418 [2024-10-13 01:46:38.912585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.418 qpair failed and we were unable to recover it. 00:35:53.418 [2024-10-13 01:46:38.912714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.418 [2024-10-13 01:46:38.912771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.418 qpair failed and we were unable to recover it. 00:35:53.418 [2024-10-13 01:46:38.912931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.418 [2024-10-13 01:46:38.912979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.418 qpair failed and we were unable to recover it. 00:35:53.418 [2024-10-13 01:46:38.913136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.418 [2024-10-13 01:46:38.913192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.418 qpair failed and we were unable to recover it. 00:35:53.418 [2024-10-13 01:46:38.913333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.418 [2024-10-13 01:46:38.913363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.418 qpair failed and we were unable to recover it. 00:35:53.418 [2024-10-13 01:46:38.913486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.418 [2024-10-13 01:46:38.913515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.418 qpair failed and we were unable to recover it. 
00:35:53.418 [2024-10-13 01:46:38.913624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.418 [2024-10-13 01:46:38.913683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.418 qpair failed and we were unable to recover it. 00:35:53.418 [2024-10-13 01:46:38.913823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.418 [2024-10-13 01:46:38.913854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.418 qpair failed and we were unable to recover it. 00:35:53.418 [2024-10-13 01:46:38.913967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.418 [2024-10-13 01:46:38.913994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.418 qpair failed and we were unable to recover it. 00:35:53.418 [2024-10-13 01:46:38.914072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.418 [2024-10-13 01:46:38.914098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.418 qpair failed and we were unable to recover it. 00:35:53.418 [2024-10-13 01:46:38.914235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.418 [2024-10-13 01:46:38.914264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.418 qpair failed and we were unable to recover it. 00:35:53.418 [2024-10-13 01:46:38.914396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.418 [2024-10-13 01:46:38.914426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.418 qpair failed and we were unable to recover it. 00:35:53.418 [2024-10-13 01:46:38.914623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.418 [2024-10-13 01:46:38.914657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.418 qpair failed and we were unable to recover it. 00:35:53.418 [2024-10-13 01:46:38.914793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.418 [2024-10-13 01:46:38.914822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.418 qpair failed and we were unable to recover it. 00:35:53.418 [2024-10-13 01:46:38.914948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.418 [2024-10-13 01:46:38.914977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.418 qpair failed and we were unable to recover it. 00:35:53.418 [2024-10-13 01:46:38.915100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.418 [2024-10-13 01:46:38.915152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.418 qpair failed and we were unable to recover it. 
00:35:53.418 [2024-10-13 01:46:38.915287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.418 [2024-10-13 01:46:38.915316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.418 qpair failed and we were unable to recover it. 00:35:53.418 [2024-10-13 01:46:38.915422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.418 [2024-10-13 01:46:38.915463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.418 qpair failed and we were unable to recover it. 00:35:53.418 [2024-10-13 01:46:38.915577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.418 [2024-10-13 01:46:38.915606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.418 qpair failed and we were unable to recover it. 00:35:53.418 [2024-10-13 01:46:38.915767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.418 [2024-10-13 01:46:38.915797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.418 qpair failed and we were unable to recover it. 00:35:53.418 [2024-10-13 01:46:38.915966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.418 [2024-10-13 01:46:38.916018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.418 qpair failed and we were unable to recover it. 00:35:53.418 [2024-10-13 01:46:38.916149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.418 [2024-10-13 01:46:38.916200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.418 qpair failed and we were unable to recover it. 00:35:53.418 [2024-10-13 01:46:38.916350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.418 [2024-10-13 01:46:38.916378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.419 qpair failed and we were unable to recover it. 00:35:53.419 [2024-10-13 01:46:38.916465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.419 [2024-10-13 01:46:38.916500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.419 qpair failed and we were unable to recover it. 00:35:53.419 [2024-10-13 01:46:38.916584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.419 [2024-10-13 01:46:38.916613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.419 qpair failed and we were unable to recover it. 00:35:53.419 [2024-10-13 01:46:38.916725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.419 [2024-10-13 01:46:38.916771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.419 qpair failed and we were unable to recover it. 
00:35:53.419 [2024-10-13 01:46:38.916901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.419 [2024-10-13 01:46:38.916941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.419 qpair failed and we were unable to recover it. 00:35:53.419 [2024-10-13 01:46:38.917086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.419 [2024-10-13 01:46:38.917132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.419 qpair failed and we were unable to recover it. 00:35:53.419 [2024-10-13 01:46:38.917289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.419 [2024-10-13 01:46:38.917321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.419 qpair failed and we were unable to recover it. 00:35:53.419 [2024-10-13 01:46:38.917430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.419 [2024-10-13 01:46:38.917456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.419 qpair failed and we were unable to recover it. 00:35:53.419 [2024-10-13 01:46:38.917582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.419 [2024-10-13 01:46:38.917610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.419 qpair failed and we were unable to recover it. 00:35:53.419 [2024-10-13 01:46:38.917706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.419 [2024-10-13 01:46:38.917736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.419 qpair failed and we were unable to recover it. 00:35:53.419 [2024-10-13 01:46:38.917848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.419 [2024-10-13 01:46:38.917875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.419 qpair failed and we were unable to recover it. 00:35:53.419 [2024-10-13 01:46:38.917982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.419 [2024-10-13 01:46:38.918009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.419 qpair failed and we were unable to recover it. 00:35:53.419 [2024-10-13 01:46:38.918094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.419 [2024-10-13 01:46:38.918122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.419 qpair failed and we were unable to recover it. 00:35:53.419 [2024-10-13 01:46:38.918244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.419 [2024-10-13 01:46:38.918302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.419 qpair failed and we were unable to recover it. 
00:35:53.419 [2024-10-13 01:46:38.918439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.419 [2024-10-13 01:46:38.918467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.419 qpair failed and we were unable to recover it. 00:35:53.419 [2024-10-13 01:46:38.918629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.419 [2024-10-13 01:46:38.918656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.419 qpair failed and we were unable to recover it. 00:35:53.419 [2024-10-13 01:46:38.918811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.419 [2024-10-13 01:46:38.918868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.419 qpair failed and we were unable to recover it. 00:35:53.419 [2024-10-13 01:46:38.919050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.419 [2024-10-13 01:46:38.919110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.419 qpair failed and we were unable to recover it. 00:35:53.419 [2024-10-13 01:46:38.919254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.419 [2024-10-13 01:46:38.919302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.419 qpair failed and we were unable to recover it. 00:35:53.419 [2024-10-13 01:46:38.919454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.419 [2024-10-13 01:46:38.919497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.419 qpair failed and we were unable to recover it. 00:35:53.419 [2024-10-13 01:46:38.919609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.419 [2024-10-13 01:46:38.919635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.419 qpair failed and we were unable to recover it. 00:35:53.419 [2024-10-13 01:46:38.919785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.419 [2024-10-13 01:46:38.919828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.419 qpair failed and we were unable to recover it. 00:35:53.419 [2024-10-13 01:46:38.919965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.419 [2024-10-13 01:46:38.920007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.419 qpair failed and we were unable to recover it. 00:35:53.419 [2024-10-13 01:46:38.920227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.419 [2024-10-13 01:46:38.920255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.419 qpair failed and we were unable to recover it. 
00:35:53.419 [2024-10-13 01:46:38.920388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.419 [2024-10-13 01:46:38.920417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.419 qpair failed and we were unable to recover it. 00:35:53.419 [2024-10-13 01:46:38.920531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.419 [2024-10-13 01:46:38.920559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.419 qpair failed and we were unable to recover it. 00:35:53.419 [2024-10-13 01:46:38.920672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.419 [2024-10-13 01:46:38.920698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.419 qpair failed and we were unable to recover it. 00:35:53.419 [2024-10-13 01:46:38.920816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.419 [2024-10-13 01:46:38.920843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.419 qpair failed and we were unable to recover it. 00:35:53.419 [2024-10-13 01:46:38.920946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.419 [2024-10-13 01:46:38.920976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.419 qpair failed and we were unable to recover it. 00:35:53.419 [2024-10-13 01:46:38.921090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.419 [2024-10-13 01:46:38.921132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.419 qpair failed and we were unable to recover it. 00:35:53.419 [2024-10-13 01:46:38.921280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.419 [2024-10-13 01:46:38.921342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.419 qpair failed and we were unable to recover it. 00:35:53.419 [2024-10-13 01:46:38.921514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.419 [2024-10-13 01:46:38.921541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.419 qpair failed and we were unable to recover it. 00:35:53.419 [2024-10-13 01:46:38.921639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.419 [2024-10-13 01:46:38.921666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.419 qpair failed and we were unable to recover it. 00:35:53.419 [2024-10-13 01:46:38.921794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.419 [2024-10-13 01:46:38.921821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.419 qpair failed and we were unable to recover it. 
00:35:53.419 [2024-10-13 01:46:38.921963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.419 [2024-10-13 01:46:38.921992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.419 qpair failed and we were unable to recover it. 00:35:53.420 [2024-10-13 01:46:38.922149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.420 [2024-10-13 01:46:38.922201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.420 qpair failed and we were unable to recover it. 00:35:53.420 [2024-10-13 01:46:38.922327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.420 [2024-10-13 01:46:38.922356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.420 qpair failed and we were unable to recover it. 00:35:53.420 [2024-10-13 01:46:38.922480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.420 [2024-10-13 01:46:38.922524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.420 qpair failed and we were unable to recover it. 00:35:53.420 [2024-10-13 01:46:38.922646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.420 [2024-10-13 01:46:38.922673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.420 qpair failed and we were unable to recover it. 00:35:53.420 [2024-10-13 01:46:38.922777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.420 [2024-10-13 01:46:38.922805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.420 qpair failed and we were unable to recover it. 00:35:53.420 [2024-10-13 01:46:38.922974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.420 [2024-10-13 01:46:38.923019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.420 qpair failed and we were unable to recover it. 00:35:53.420 [2024-10-13 01:46:38.923131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.420 [2024-10-13 01:46:38.923178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.420 qpair failed and we were unable to recover it. 00:35:53.420 [2024-10-13 01:46:38.923269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.420 [2024-10-13 01:46:38.923299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.420 qpair failed and we were unable to recover it. 00:35:53.420 [2024-10-13 01:46:38.923463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.420 [2024-10-13 01:46:38.923501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.420 qpair failed and we were unable to recover it. 
00:35:53.420 [2024-10-13 01:46:38.923598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.420 [2024-10-13 01:46:38.923625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.420 qpair failed and we were unable to recover it. 00:35:53.420 [2024-10-13 01:46:38.923740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.420 [2024-10-13 01:46:38.923767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.420 qpair failed and we were unable to recover it. 00:35:53.420 [2024-10-13 01:46:38.923911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.420 [2024-10-13 01:46:38.923938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.420 qpair failed and we were unable to recover it. 00:35:53.420 [2024-10-13 01:46:38.924082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.420 [2024-10-13 01:46:38.924112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.420 qpair failed and we were unable to recover it. 00:35:53.420 [2024-10-13 01:46:38.924309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.420 [2024-10-13 01:46:38.924371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.420 qpair failed and we were unable to recover it. 00:35:53.420 [2024-10-13 01:46:38.924482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.420 [2024-10-13 01:46:38.924512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.420 qpair failed and we were unable to recover it. 00:35:53.420 [2024-10-13 01:46:38.924612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.420 [2024-10-13 01:46:38.924659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.420 qpair failed and we were unable to recover it. 00:35:53.420 [2024-10-13 01:46:38.924764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.420 [2024-10-13 01:46:38.924793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.420 qpair failed and we were unable to recover it. 00:35:53.420 [2024-10-13 01:46:38.924939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.420 [2024-10-13 01:46:38.925004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.420 qpair failed and we were unable to recover it. 00:35:53.420 [2024-10-13 01:46:38.925181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.420 [2024-10-13 01:46:38.925231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.420 qpair failed and we were unable to recover it. 
00:35:53.420 [2024-10-13 01:46:38.925360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.420 [2024-10-13 01:46:38.925391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.420 qpair failed and we were unable to recover it. 00:35:53.420 [2024-10-13 01:46:38.925541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.420 [2024-10-13 01:46:38.925569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.420 qpair failed and we were unable to recover it. 00:35:53.420 [2024-10-13 01:46:38.925671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.420 [2024-10-13 01:46:38.925712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.420 qpair failed and we were unable to recover it. 00:35:53.420 [2024-10-13 01:46:38.925803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.420 [2024-10-13 01:46:38.925837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.420 qpair failed and we were unable to recover it. 00:35:53.420 [2024-10-13 01:46:38.925985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.420 [2024-10-13 01:46:38.926036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.420 qpair failed and we were unable to recover it. 00:35:53.420 [2024-10-13 01:46:38.926154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.420 [2024-10-13 01:46:38.926207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.420 qpair failed and we were unable to recover it. 00:35:53.420 [2024-10-13 01:46:38.926315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.420 [2024-10-13 01:46:38.926342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.420 qpair failed and we were unable to recover it. 00:35:53.420 [2024-10-13 01:46:38.926453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.420 [2024-10-13 01:46:38.926486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.420 qpair failed and we were unable to recover it. 00:35:53.420 [2024-10-13 01:46:38.926631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.420 [2024-10-13 01:46:38.926662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.420 qpair failed and we were unable to recover it. 00:35:53.420 [2024-10-13 01:46:38.926839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.420 [2024-10-13 01:46:38.926871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.420 qpair failed and we were unable to recover it. 
00:35:53.420 [2024-10-13 01:46:38.926991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.420 [2024-10-13 01:46:38.927044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.420 qpair failed and we were unable to recover it. 00:35:53.420 [2024-10-13 01:46:38.927219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.420 [2024-10-13 01:46:38.927248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.420 qpair failed and we were unable to recover it. 00:35:53.420 [2024-10-13 01:46:38.927373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.420 [2024-10-13 01:46:38.927404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.420 qpair failed and we were unable to recover it. 00:35:53.420 [2024-10-13 01:46:38.927563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.420 [2024-10-13 01:46:38.927591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.420 qpair failed and we were unable to recover it. 00:35:53.420 [2024-10-13 01:46:38.927675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.420 [2024-10-13 01:46:38.927703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.420 qpair failed and we were unable to recover it. 00:35:53.420 [2024-10-13 01:46:38.927838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.420 [2024-10-13 01:46:38.927886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.420 qpair failed and we were unable to recover it. 00:35:53.420 [2024-10-13 01:46:38.928023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.420 [2024-10-13 01:46:38.928083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.420 qpair failed and we were unable to recover it. 00:35:53.420 [2024-10-13 01:46:38.928176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.420 [2024-10-13 01:46:38.928202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.420 qpair failed and we were unable to recover it. 00:35:53.420 [2024-10-13 01:46:38.928279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.420 [2024-10-13 01:46:38.928306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.420 qpair failed and we were unable to recover it. 00:35:53.420 [2024-10-13 01:46:38.928456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.420 [2024-10-13 01:46:38.928490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.420 qpair failed and we were unable to recover it. 
00:35:53.420 [2024-10-13 01:46:38.928587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.420 [2024-10-13 01:46:38.928614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.420 qpair failed and we were unable to recover it. 00:35:53.420 [2024-10-13 01:46:38.928735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.420 [2024-10-13 01:46:38.928770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.420 qpair failed and we were unable to recover it. 00:35:53.420 [2024-10-13 01:46:38.928910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.420 [2024-10-13 01:46:38.928937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.420 qpair failed and we were unable to recover it. 00:35:53.420 [2024-10-13 01:46:38.929089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.421 [2024-10-13 01:46:38.929116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.421 qpair failed and we were unable to recover it. 00:35:53.421 [2024-10-13 01:46:38.929255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.421 [2024-10-13 01:46:38.929282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.421 qpair failed and we were unable to recover it. 00:35:53.421 [2024-10-13 01:46:38.929401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.421 [2024-10-13 01:46:38.929429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.421 qpair failed and we were unable to recover it. 00:35:53.421 [2024-10-13 01:46:38.929542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.421 [2024-10-13 01:46:38.929583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.421 qpair failed and we were unable to recover it. 00:35:53.421 [2024-10-13 01:46:38.929707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.421 [2024-10-13 01:46:38.929736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.421 qpair failed and we were unable to recover it. 00:35:53.421 [2024-10-13 01:46:38.929881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.421 [2024-10-13 01:46:38.929912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.421 qpair failed and we were unable to recover it. 00:35:53.421 [2024-10-13 01:46:38.930010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.421 [2024-10-13 01:46:38.930043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.421 qpair failed and we were unable to recover it. 
00:35:53.421 [2024-10-13 01:46:38.930177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.421 [2024-10-13 01:46:38.930208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.421 qpair failed and we were unable to recover it. 00:35:53.421 [2024-10-13 01:46:38.930311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.421 [2024-10-13 01:46:38.930343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.421 qpair failed and we were unable to recover it. 00:35:53.421 [2024-10-13 01:46:38.930459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.421 [2024-10-13 01:46:38.930493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.421 qpair failed and we were unable to recover it. 00:35:53.421 [2024-10-13 01:46:38.930613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.421 [2024-10-13 01:46:38.930641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.421 qpair failed and we were unable to recover it. 00:35:53.421 [2024-10-13 01:46:38.930739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.421 [2024-10-13 01:46:38.930768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.421 qpair failed and we were unable to recover it. 00:35:53.421 [2024-10-13 01:46:38.930859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.421 [2024-10-13 01:46:38.930886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.421 qpair failed and we were unable to recover it. 00:35:53.421 [2024-10-13 01:46:38.930987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.421 [2024-10-13 01:46:38.931016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.421 qpair failed and we were unable to recover it. 00:35:53.421 [2024-10-13 01:46:38.931134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.421 [2024-10-13 01:46:38.931179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.421 qpair failed and we were unable to recover it. 00:35:53.421 [2024-10-13 01:46:38.931290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.421 [2024-10-13 01:46:38.931317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.421 qpair failed and we were unable to recover it. 00:35:53.421 [2024-10-13 01:46:38.931426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.421 [2024-10-13 01:46:38.931453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.421 qpair failed and we were unable to recover it. 
00:35:53.421 [2024-10-13 01:46:38.931596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.421 [2024-10-13 01:46:38.931622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.421 qpair failed and we were unable to recover it. 00:35:53.421 [2024-10-13 01:46:38.931729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.421 [2024-10-13 01:46:38.931764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.421 qpair failed and we were unable to recover it. 00:35:53.421 [2024-10-13 01:46:38.931882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.421 [2024-10-13 01:46:38.931909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.421 qpair failed and we were unable to recover it. 00:35:53.421 [2024-10-13 01:46:38.932018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.421 [2024-10-13 01:46:38.932052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.421 qpair failed and we were unable to recover it. 00:35:53.421 [2024-10-13 01:46:38.932143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.421 [2024-10-13 01:46:38.932170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.421 qpair failed and we were unable to recover it. 00:35:53.421 [2024-10-13 01:46:38.932279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.421 [2024-10-13 01:46:38.932305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.421 qpair failed and we were unable to recover it. 00:35:53.421 [2024-10-13 01:46:38.932395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.421 [2024-10-13 01:46:38.932423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.421 qpair failed and we were unable to recover it. 00:35:53.421 [2024-10-13 01:46:38.932536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.421 [2024-10-13 01:46:38.932566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.421 qpair failed and we were unable to recover it. 00:35:53.421 [2024-10-13 01:46:38.932655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.421 [2024-10-13 01:46:38.932682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.421 qpair failed and we were unable to recover it. 00:35:53.421 [2024-10-13 01:46:38.932798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.421 [2024-10-13 01:46:38.932825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.421 qpair failed and we were unable to recover it. 
00:35:53.421 [2024-10-13 01:46:38.932931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.421 [2024-10-13 01:46:38.932973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.421 qpair failed and we were unable to recover it. 00:35:53.421 [2024-10-13 01:46:38.933109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.421 [2024-10-13 01:46:38.933153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.421 qpair failed and we were unable to recover it. 00:35:53.421 [2024-10-13 01:46:38.933298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.421 [2024-10-13 01:46:38.933326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.421 qpair failed and we were unable to recover it. 00:35:53.421 [2024-10-13 01:46:38.933459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.421 [2024-10-13 01:46:38.933511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.421 qpair failed and we were unable to recover it. 00:35:53.421 [2024-10-13 01:46:38.933635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.421 [2024-10-13 01:46:38.933663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.421 qpair failed and we were unable to recover it. 00:35:53.421 [2024-10-13 01:46:38.933748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.421 [2024-10-13 01:46:38.933775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.421 qpair failed and we were unable to recover it. 00:35:53.421 [2024-10-13 01:46:38.933913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.421 [2024-10-13 01:46:38.933943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.421 qpair failed and we were unable to recover it. 00:35:53.421 [2024-10-13 01:46:38.934112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.421 [2024-10-13 01:46:38.934162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.421 qpair failed and we were unable to recover it. 00:35:53.421 [2024-10-13 01:46:38.934252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.421 [2024-10-13 01:46:38.934283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.421 qpair failed and we were unable to recover it. 00:35:53.421 [2024-10-13 01:46:38.934418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.421 [2024-10-13 01:46:38.934454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.421 qpair failed and we were unable to recover it. 
00:35:53.421 [2024-10-13 01:46:38.934585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.421 [2024-10-13 01:46:38.934613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.421 qpair failed and we were unable to recover it. 00:35:53.421 [2024-10-13 01:46:38.934731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.421 [2024-10-13 01:46:38.934770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.421 qpair failed and we were unable to recover it. 00:35:53.421 [2024-10-13 01:46:38.934876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.421 [2024-10-13 01:46:38.934905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.421 qpair failed and we were unable to recover it. 00:35:53.421 [2024-10-13 01:46:38.935065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.421 [2024-10-13 01:46:38.935094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.421 qpair failed and we were unable to recover it. 00:35:53.421 [2024-10-13 01:46:38.935232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.422 [2024-10-13 01:46:38.935276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.422 qpair failed and we were unable to recover it. 00:35:53.422 [2024-10-13 01:46:38.935399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.422 [2024-10-13 01:46:38.935428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.422 qpair failed and we were unable to recover it. 00:35:53.422 [2024-10-13 01:46:38.935543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.422 [2024-10-13 01:46:38.935571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.422 qpair failed and we were unable to recover it. 00:35:53.422 [2024-10-13 01:46:38.935684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.422 [2024-10-13 01:46:38.935711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.422 qpair failed and we were unable to recover it. 00:35:53.422 [2024-10-13 01:46:38.935815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.422 [2024-10-13 01:46:38.935844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.422 qpair failed and we were unable to recover it. 00:35:53.422 [2024-10-13 01:46:38.935941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.422 [2024-10-13 01:46:38.935967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.422 qpair failed and we were unable to recover it. 
00:35:53.422 [2024-10-13 01:46:38.936136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.422 [2024-10-13 01:46:38.936173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.422 qpair failed and we were unable to recover it. 00:35:53.422 [2024-10-13 01:46:38.936338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.422 [2024-10-13 01:46:38.936368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.422 qpair failed and we were unable to recover it. 00:35:53.422 [2024-10-13 01:46:38.936586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.422 [2024-10-13 01:46:38.936612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.422 qpair failed and we were unable to recover it. 00:35:53.422 [2024-10-13 01:46:38.936729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.422 [2024-10-13 01:46:38.936771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.422 qpair failed and we were unable to recover it. 00:35:53.422 [2024-10-13 01:46:38.936863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.422 [2024-10-13 01:46:38.936892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.422 qpair failed and we were unable to recover it. 00:35:53.422 [2024-10-13 01:46:38.937069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.422 [2024-10-13 01:46:38.937097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.422 qpair failed and we were unable to recover it. 00:35:53.422 [2024-10-13 01:46:38.937232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.422 [2024-10-13 01:46:38.937264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.422 qpair failed and we were unable to recover it. 00:35:53.422 [2024-10-13 01:46:38.937394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.422 [2024-10-13 01:46:38.937424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.422 qpair failed and we were unable to recover it. 00:35:53.422 [2024-10-13 01:46:38.937555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.422 [2024-10-13 01:46:38.937596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.422 qpair failed and we were unable to recover it. 00:35:53.422 [2024-10-13 01:46:38.937763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.422 [2024-10-13 01:46:38.937795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.422 qpair failed and we were unable to recover it. 
00:35:53.422 [2024-10-13 01:46:38.937974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.422 [2024-10-13 01:46:38.938030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.422 qpair failed and we were unable to recover it. 00:35:53.422 [2024-10-13 01:46:38.938149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.422 [2024-10-13 01:46:38.938203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.422 qpair failed and we were unable to recover it. 00:35:53.422 [2024-10-13 01:46:38.938302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.422 [2024-10-13 01:46:38.938329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.422 qpair failed and we were unable to recover it. 00:35:53.422 [2024-10-13 01:46:38.938413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.422 [2024-10-13 01:46:38.938440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.422 qpair failed and we were unable to recover it. 00:35:53.422 [2024-10-13 01:46:38.938555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.422 [2024-10-13 01:46:38.938583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.422 qpair failed and we were unable to recover it. 00:35:53.422 [2024-10-13 01:46:38.938677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.422 [2024-10-13 01:46:38.938704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.422 qpair failed and we were unable to recover it. 00:35:53.422 [2024-10-13 01:46:38.938831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.422 [2024-10-13 01:46:38.938858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.422 qpair failed and we were unable to recover it. 00:35:53.422 [2024-10-13 01:46:38.938980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.422 [2024-10-13 01:46:38.939006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.422 qpair failed and we were unable to recover it. 00:35:53.422 [2024-10-13 01:46:38.939093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.422 [2024-10-13 01:46:38.939121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.422 qpair failed and we were unable to recover it. 00:35:53.422 [2024-10-13 01:46:38.939265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.422 [2024-10-13 01:46:38.939293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.422 qpair failed and we were unable to recover it. 
00:35:53.422 [2024-10-13 01:46:38.939381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.422 [2024-10-13 01:46:38.939408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.422 qpair failed and we were unable to recover it. 00:35:53.422 [2024-10-13 01:46:38.939496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.422 [2024-10-13 01:46:38.939523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.422 qpair failed and we were unable to recover it. 00:35:53.422 [2024-10-13 01:46:38.939619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.422 [2024-10-13 01:46:38.939646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.422 qpair failed and we were unable to recover it. 00:35:53.422 [2024-10-13 01:46:38.939738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.422 [2024-10-13 01:46:38.939771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.422 qpair failed and we were unable to recover it. 00:35:53.422 [2024-10-13 01:46:38.939882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.422 [2024-10-13 01:46:38.939909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.422 qpair failed and we were unable to recover it. 00:35:53.422 [2024-10-13 01:46:38.940036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.422 [2024-10-13 01:46:38.940062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.422 qpair failed and we were unable to recover it. 00:35:53.422 [2024-10-13 01:46:38.940156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.422 [2024-10-13 01:46:38.940183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.422 qpair failed and we were unable to recover it. 00:35:53.422 [2024-10-13 01:46:38.940277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.422 [2024-10-13 01:46:38.940304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.422 qpair failed and we were unable to recover it. 00:35:53.422 [2024-10-13 01:46:38.940392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.422 [2024-10-13 01:46:38.940419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.422 qpair failed and we were unable to recover it. 00:35:53.422 [2024-10-13 01:46:38.940516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.422 [2024-10-13 01:46:38.940543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.422 qpair failed and we were unable to recover it. 
00:35:53.422 [2024-10-13 01:46:38.940655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.422 [2024-10-13 01:46:38.940699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.422 qpair failed and we were unable to recover it. 00:35:53.422 [2024-10-13 01:46:38.940845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.422 [2024-10-13 01:46:38.940897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.422 qpair failed and we were unable to recover it. 00:35:53.422 [2024-10-13 01:46:38.941039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.422 [2024-10-13 01:46:38.941107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.422 qpair failed and we were unable to recover it. 00:35:53.422 [2024-10-13 01:46:38.941195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.422 [2024-10-13 01:46:38.941221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.422 qpair failed and we were unable to recover it. 00:35:53.422 [2024-10-13 01:46:38.941301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.422 [2024-10-13 01:46:38.941328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.422 qpair failed and we were unable to recover it. 00:35:53.422 [2024-10-13 01:46:38.941427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.422 [2024-10-13 01:46:38.941467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.422 qpair failed and we were unable to recover it. 00:35:53.422 [2024-10-13 01:46:38.941574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.423 [2024-10-13 01:46:38.941603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.423 qpair failed and we were unable to recover it. 00:35:53.423 [2024-10-13 01:46:38.941694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.423 [2024-10-13 01:46:38.941720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.423 qpair failed and we were unable to recover it. 00:35:53.423 [2024-10-13 01:46:38.941812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.423 [2024-10-13 01:46:38.941839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.423 qpair failed and we were unable to recover it. 00:35:53.423 [2024-10-13 01:46:38.941916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.423 [2024-10-13 01:46:38.941951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.423 qpair failed and we were unable to recover it. 
00:35:53.423 [2024-10-13 01:46:38.942069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.423 [2024-10-13 01:46:38.942101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.423 qpair failed and we were unable to recover it. 00:35:53.423 [2024-10-13 01:46:38.942222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.423 [2024-10-13 01:46:38.942248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.423 qpair failed and we were unable to recover it. 00:35:53.694 [2024-10-13 01:46:38.942358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.694 [2024-10-13 01:46:38.942397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.694 qpair failed and we were unable to recover it. 00:35:53.694 [2024-10-13 01:46:38.942493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.694 [2024-10-13 01:46:38.942520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.694 qpair failed and we were unable to recover it. 00:35:53.694 [2024-10-13 01:46:38.942593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.694 [2024-10-13 01:46:38.942620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.694 qpair failed and we were unable to recover it. 00:35:53.694 [2024-10-13 01:46:38.942737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.694 [2024-10-13 01:46:38.942774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.694 qpair failed and we were unable to recover it. 00:35:53.694 [2024-10-13 01:46:38.942892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.694 [2024-10-13 01:46:38.942918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.694 qpair failed and we were unable to recover it. 00:35:53.694 [2024-10-13 01:46:38.943042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.694 [2024-10-13 01:46:38.943069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.694 qpair failed and we were unable to recover it. 00:35:53.694 [2024-10-13 01:46:38.943241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.694 [2024-10-13 01:46:38.943270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.694 qpair failed and we were unable to recover it. 00:35:53.694 [2024-10-13 01:46:38.943366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.694 [2024-10-13 01:46:38.943395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.694 qpair failed and we were unable to recover it. 
00:35:53.694 [2024-10-13 01:46:38.943530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.694 [2024-10-13 01:46:38.943557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.694 qpair failed and we were unable to recover it. 00:35:53.694 [2024-10-13 01:46:38.943703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.694 [2024-10-13 01:46:38.943732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.694 qpair failed and we were unable to recover it. 00:35:53.694 [2024-10-13 01:46:38.943844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.694 [2024-10-13 01:46:38.943883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.694 qpair failed and we were unable to recover it. 00:35:53.694 [2024-10-13 01:46:38.943983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.694 [2024-10-13 01:46:38.944012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.694 qpair failed and we were unable to recover it. 00:35:53.694 [2024-10-13 01:46:38.944179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.694 [2024-10-13 01:46:38.944230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.694 qpair failed and we were unable to recover it. 00:35:53.694 [2024-10-13 01:46:38.944350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.694 [2024-10-13 01:46:38.944378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.694 qpair failed and we were unable to recover it. 00:35:53.694 [2024-10-13 01:46:38.944531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.694 [2024-10-13 01:46:38.944572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.694 qpair failed and we were unable to recover it. 00:35:53.694 [2024-10-13 01:46:38.944708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.694 [2024-10-13 01:46:38.944740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.694 qpair failed and we were unable to recover it. 00:35:53.694 [2024-10-13 01:46:38.944854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.694 [2024-10-13 01:46:38.944908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.694 qpair failed and we were unable to recover it. 00:35:53.694 [2024-10-13 01:46:38.945016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.694 [2024-10-13 01:46:38.945044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.694 qpair failed and we were unable to recover it. 
00:35:53.694 [2024-10-13 01:46:38.945164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.694 [2024-10-13 01:46:38.945191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.694 qpair failed and we were unable to recover it. 00:35:53.695 [2024-10-13 01:46:38.945293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.695 [2024-10-13 01:46:38.945322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.695 qpair failed and we were unable to recover it. 00:35:53.695 [2024-10-13 01:46:38.945410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.695 [2024-10-13 01:46:38.945438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.695 qpair failed and we were unable to recover it. 00:35:53.695 [2024-10-13 01:46:38.945551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.695 [2024-10-13 01:46:38.945579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.695 qpair failed and we were unable to recover it. 00:35:53.695 [2024-10-13 01:46:38.945665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.695 [2024-10-13 01:46:38.945693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.695 qpair failed and we were unable to recover it. 00:35:53.695 [2024-10-13 01:46:38.945787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.695 [2024-10-13 01:46:38.945815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.695 qpair failed and we were unable to recover it. 00:35:53.695 [2024-10-13 01:46:38.945938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.695 [2024-10-13 01:46:38.945983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.695 qpair failed and we were unable to recover it. 00:35:53.695 [2024-10-13 01:46:38.946106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.695 [2024-10-13 01:46:38.946137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.695 qpair failed and we were unable to recover it. 00:35:53.695 [2024-10-13 01:46:38.946265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.695 [2024-10-13 01:46:38.946303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.695 qpair failed and we were unable to recover it. 00:35:53.695 [2024-10-13 01:46:38.946425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.695 [2024-10-13 01:46:38.946455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.695 qpair failed and we were unable to recover it. 
00:35:53.695 [2024-10-13 01:46:38.946566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.695 [2024-10-13 01:46:38.946611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.695 qpair failed and we were unable to recover it. 00:35:53.695 [2024-10-13 01:46:38.946766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.695 [2024-10-13 01:46:38.946796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.695 qpair failed and we were unable to recover it. 00:35:53.695 [2024-10-13 01:46:38.946925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.695 [2024-10-13 01:46:38.946955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.695 qpair failed and we were unable to recover it. 00:35:53.695 [2024-10-13 01:46:38.947096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.695 [2024-10-13 01:46:38.947126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.695 qpair failed and we were unable to recover it. 00:35:53.695 [2024-10-13 01:46:38.947246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.695 [2024-10-13 01:46:38.947289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.695 qpair failed and we were unable to recover it. 00:35:53.695 [2024-10-13 01:46:38.947406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.695 [2024-10-13 01:46:38.947434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.695 qpair failed and we were unable to recover it. 00:35:53.695 [2024-10-13 01:46:38.947575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.695 [2024-10-13 01:46:38.947603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.695 qpair failed and we were unable to recover it. 00:35:53.695 [2024-10-13 01:46:38.947692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.695 [2024-10-13 01:46:38.947721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.695 qpair failed and we were unable to recover it. 00:35:53.695 [2024-10-13 01:46:38.947835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.695 [2024-10-13 01:46:38.947866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.695 qpair failed and we were unable to recover it. 00:35:53.695 [2024-10-13 01:46:38.948083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.695 [2024-10-13 01:46:38.948133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.695 qpair failed and we were unable to recover it. 
00:35:53.695 [2024-10-13 01:46:38.948239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.695 [2024-10-13 01:46:38.948275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.695 qpair failed and we were unable to recover it. 00:35:53.695 [2024-10-13 01:46:38.948409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.695 [2024-10-13 01:46:38.948453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.695 qpair failed and we were unable to recover it. 00:35:53.695 [2024-10-13 01:46:38.948558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.695 [2024-10-13 01:46:38.948585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.695 qpair failed and we were unable to recover it. 00:35:53.695 [2024-10-13 01:46:38.948696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.695 [2024-10-13 01:46:38.948723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.695 qpair failed and we were unable to recover it. 00:35:53.695 [2024-10-13 01:46:38.948893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.695 [2024-10-13 01:46:38.948927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.695 qpair failed and we were unable to recover it. 00:35:53.695 [2024-10-13 01:46:38.949131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.695 [2024-10-13 01:46:38.949180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.695 qpair failed and we were unable to recover it. 00:35:53.695 [2024-10-13 01:46:38.949341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.695 [2024-10-13 01:46:38.949371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.695 qpair failed and we were unable to recover it. 00:35:53.695 [2024-10-13 01:46:38.949506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.695 [2024-10-13 01:46:38.949552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.695 qpair failed and we were unable to recover it. 00:35:53.695 [2024-10-13 01:46:38.949644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.695 [2024-10-13 01:46:38.949670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.695 qpair failed and we were unable to recover it. 00:35:53.695 [2024-10-13 01:46:38.949754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.695 [2024-10-13 01:46:38.949789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.695 qpair failed and we were unable to recover it. 
00:35:53.695 [2024-10-13 01:46:38.949924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.695 [2024-10-13 01:46:38.949953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.695 qpair failed and we were unable to recover it. 00:35:53.695 [2024-10-13 01:46:38.950177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.695 [2024-10-13 01:46:38.950227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.695 qpair failed and we were unable to recover it. 00:35:53.695 [2024-10-13 01:46:38.950353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.695 [2024-10-13 01:46:38.950383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.695 qpair failed and we were unable to recover it. 00:35:53.695 [2024-10-13 01:46:38.950516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.695 [2024-10-13 01:46:38.950560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.695 qpair failed and we were unable to recover it. 00:35:53.695 [2024-10-13 01:46:38.950697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.695 [2024-10-13 01:46:38.950738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.695 qpair failed and we were unable to recover it. 00:35:53.695 [2024-10-13 01:46:38.950836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.695 [2024-10-13 01:46:38.950869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.695 qpair failed and we were unable to recover it. 00:35:53.695 [2024-10-13 01:46:38.951018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.695 [2024-10-13 01:46:38.951063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.695 qpair failed and we were unable to recover it. 00:35:53.695 [2024-10-13 01:46:38.951178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.695 [2024-10-13 01:46:38.951206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.695 qpair failed and we were unable to recover it. 00:35:53.695 [2024-10-13 01:46:38.951334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.695 [2024-10-13 01:46:38.951375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.695 qpair failed and we were unable to recover it. 00:35:53.695 [2024-10-13 01:46:38.951491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.695 [2024-10-13 01:46:38.951532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.695 qpair failed and we were unable to recover it. 
00:35:53.695 [2024-10-13 01:46:38.951630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.695 [2024-10-13 01:46:38.951658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.695 qpair failed and we were unable to recover it. 00:35:53.695 [2024-10-13 01:46:38.951804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.695 [2024-10-13 01:46:38.951834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.695 qpair failed and we were unable to recover it. 00:35:53.695 [2024-10-13 01:46:38.951960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.695 [2024-10-13 01:46:38.951990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.695 qpair failed and we were unable to recover it. 00:35:53.695 [2024-10-13 01:46:38.952104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.695 [2024-10-13 01:46:38.952133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.695 qpair failed and we were unable to recover it. 00:35:53.695 [2024-10-13 01:46:38.952243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.695 [2024-10-13 01:46:38.952289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.695 qpair failed and we were unable to recover it. 00:35:53.695 [2024-10-13 01:46:38.952425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.695 [2024-10-13 01:46:38.952451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.695 qpair failed and we were unable to recover it. 00:35:53.695 [2024-10-13 01:46:38.952553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.695 [2024-10-13 01:46:38.952580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.695 qpair failed and we were unable to recover it. 00:35:53.695 [2024-10-13 01:46:38.952703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.695 [2024-10-13 01:46:38.952754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.695 qpair failed and we were unable to recover it. 00:35:53.695 [2024-10-13 01:46:38.952912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.695 [2024-10-13 01:46:38.952957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.695 qpair failed and we were unable to recover it. 00:35:53.695 [2024-10-13 01:46:38.953065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.695 [2024-10-13 01:46:38.953091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.695 qpair failed and we were unable to recover it. 
00:35:53.695 [2024-10-13 01:46:38.953215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.695 [2024-10-13 01:46:38.953242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.695 qpair failed and we were unable to recover it. 00:35:53.695 [2024-10-13 01:46:38.953323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.695 [2024-10-13 01:46:38.953358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.695 qpair failed and we were unable to recover it. 00:35:53.695 [2024-10-13 01:46:38.953497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.953524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 00:35:53.696 [2024-10-13 01:46:38.953608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.953635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 00:35:53.696 [2024-10-13 01:46:38.953715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.953743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 00:35:53.696 [2024-10-13 01:46:38.953868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.953895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 00:35:53.696 [2024-10-13 01:46:38.954044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.954073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 00:35:53.696 [2024-10-13 01:46:38.954163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.954189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 00:35:53.696 [2024-10-13 01:46:38.954335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.954364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 00:35:53.696 [2024-10-13 01:46:38.954460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.954492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 
00:35:53.696 [2024-10-13 01:46:38.954602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.954629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 00:35:53.696 [2024-10-13 01:46:38.954724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.954760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 00:35:53.696 [2024-10-13 01:46:38.954861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.954889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 00:35:53.696 [2024-10-13 01:46:38.954977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.955004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 00:35:53.696 [2024-10-13 01:46:38.955101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.955129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 00:35:53.696 [2024-10-13 01:46:38.955245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.955272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 00:35:53.696 [2024-10-13 01:46:38.955360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.955390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 00:35:53.696 [2024-10-13 01:46:38.955515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.955544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 00:35:53.696 [2024-10-13 01:46:38.955658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.955684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 00:35:53.696 [2024-10-13 01:46:38.955799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.955825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 
00:35:53.696 [2024-10-13 01:46:38.955988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.956014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 00:35:53.696 [2024-10-13 01:46:38.956102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.956128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 00:35:53.696 [2024-10-13 01:46:38.956215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.956241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 00:35:53.696 [2024-10-13 01:46:38.956354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.956380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 00:35:53.696 [2024-10-13 01:46:38.956486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.956519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 00:35:53.696 [2024-10-13 01:46:38.956626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.956656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 00:35:53.696 [2024-10-13 01:46:38.956793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.956822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 00:35:53.696 [2024-10-13 01:46:38.956953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.956983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 00:35:53.696 [2024-10-13 01:46:38.957083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.957111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 00:35:53.696 [2024-10-13 01:46:38.957192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.957224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 
00:35:53.696 [2024-10-13 01:46:38.957402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.957428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 00:35:53.696 [2024-10-13 01:46:38.957552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.957579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 00:35:53.696 [2024-10-13 01:46:38.957664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.957706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 00:35:53.696 [2024-10-13 01:46:38.957842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.957871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 00:35:53.696 [2024-10-13 01:46:38.957957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.957986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 00:35:53.696 [2024-10-13 01:46:38.958115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.958144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 00:35:53.696 [2024-10-13 01:46:38.958244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.958273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 00:35:53.696 [2024-10-13 01:46:38.958401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.958428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 00:35:53.696 [2024-10-13 01:46:38.958539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.958565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 00:35:53.696 [2024-10-13 01:46:38.958654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.958680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 
00:35:53.696 [2024-10-13 01:46:38.958819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.958849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 00:35:53.696 [2024-10-13 01:46:38.958976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.959005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 00:35:53.696 [2024-10-13 01:46:38.959136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.959168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 00:35:53.696 [2024-10-13 01:46:38.959293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.959322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 00:35:53.696 [2024-10-13 01:46:38.959416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.959445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 00:35:53.696 [2024-10-13 01:46:38.959568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.959594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 00:35:53.696 [2024-10-13 01:46:38.959721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.959758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 00:35:53.696 [2024-10-13 01:46:38.959931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.959963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 00:35:53.696 [2024-10-13 01:46:38.960062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.960093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 00:35:53.696 [2024-10-13 01:46:38.960191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.960226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 
00:35:53.696 [2024-10-13 01:46:38.960361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.960390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 00:35:53.696 [2024-10-13 01:46:38.960553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.960582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 00:35:53.696 [2024-10-13 01:46:38.960724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.960758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 00:35:53.696 [2024-10-13 01:46:38.960855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.960884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 00:35:53.696 [2024-10-13 01:46:38.960984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.961017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 00:35:53.696 [2024-10-13 01:46:38.961116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.961144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 00:35:53.696 [2024-10-13 01:46:38.961314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.696 [2024-10-13 01:46:38.961344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.696 qpair failed and we were unable to recover it. 00:35:53.696 [2024-10-13 01:46:38.961437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.697 [2024-10-13 01:46:38.961466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.697 qpair failed and we were unable to recover it. 00:35:53.697 [2024-10-13 01:46:38.961574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.697 [2024-10-13 01:46:38.961599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.697 qpair failed and we were unable to recover it. 00:35:53.697 [2024-10-13 01:46:38.961691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.697 [2024-10-13 01:46:38.961717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.697 qpair failed and we were unable to recover it. 
00:35:53.697 [2024-10-13 01:46:38.961834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.697 [2024-10-13 01:46:38.961878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.697 qpair failed and we were unable to recover it. 00:35:53.697 [2024-10-13 01:46:38.962075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.697 [2024-10-13 01:46:38.962104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.697 qpair failed and we were unable to recover it. 00:35:53.697 [2024-10-13 01:46:38.962227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.697 [2024-10-13 01:46:38.962256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.697 qpair failed and we were unable to recover it. 00:35:53.697 [2024-10-13 01:46:38.962407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.697 [2024-10-13 01:46:38.962439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.697 qpair failed and we were unable to recover it. 00:35:53.697 [2024-10-13 01:46:38.962593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.697 [2024-10-13 01:46:38.962619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.697 qpair failed and we were unable to recover it. 00:35:53.697 [2024-10-13 01:46:38.962719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.697 [2024-10-13 01:46:38.962744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.697 qpair failed and we were unable to recover it. 00:35:53.697 [2024-10-13 01:46:38.962866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.697 [2024-10-13 01:46:38.962891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.697 qpair failed and we were unable to recover it. 00:35:53.697 [2024-10-13 01:46:38.962992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.697 [2024-10-13 01:46:38.963020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.697 qpair failed and we were unable to recover it. 00:35:53.697 [2024-10-13 01:46:38.963146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.697 [2024-10-13 01:46:38.963174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.697 qpair failed and we were unable to recover it. 00:35:53.697 [2024-10-13 01:46:38.963310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.697 [2024-10-13 01:46:38.963339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.697 qpair failed and we were unable to recover it. 
00:35:53.697 [2024-10-13 01:46:38.963469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.697 [2024-10-13 01:46:38.963500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.697 qpair failed and we were unable to recover it. 00:35:53.697 [2024-10-13 01:46:38.963616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.697 [2024-10-13 01:46:38.963661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.697 qpair failed and we were unable to recover it. 00:35:53.697 [2024-10-13 01:46:38.963786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.697 [2024-10-13 01:46:38.963815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.697 qpair failed and we were unable to recover it. 00:35:53.697 [2024-10-13 01:46:38.963944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.697 [2024-10-13 01:46:38.963982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.697 qpair failed and we were unable to recover it. 00:35:53.697 [2024-10-13 01:46:38.964108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.697 [2024-10-13 01:46:38.964138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.697 qpair failed and we were unable to recover it. 00:35:53.697 [2024-10-13 01:46:38.964260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.697 [2024-10-13 01:46:38.964303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.697 qpair failed and we were unable to recover it. 00:35:53.697 [2024-10-13 01:46:38.964435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.697 [2024-10-13 01:46:38.964490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.697 qpair failed and we were unable to recover it. 00:35:53.697 [2024-10-13 01:46:38.964621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.697 [2024-10-13 01:46:38.964649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.697 qpair failed and we were unable to recover it. 00:35:53.697 [2024-10-13 01:46:38.964728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.697 [2024-10-13 01:46:38.964761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.697 qpair failed and we were unable to recover it. 00:35:53.697 [2024-10-13 01:46:38.964863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.697 [2024-10-13 01:46:38.964890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.697 qpair failed and we were unable to recover it. 
00:35:53.697 [2024-10-13 01:46:38.965038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.697 [2024-10-13 01:46:38.965067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.697 qpair failed and we were unable to recover it. 00:35:53.697 [2024-10-13 01:46:38.965175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.697 [2024-10-13 01:46:38.965203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.697 qpair failed and we were unable to recover it. 00:35:53.697 [2024-10-13 01:46:38.965293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.697 [2024-10-13 01:46:38.965320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.697 qpair failed and we were unable to recover it. 00:35:53.697 [2024-10-13 01:46:38.965424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.697 [2024-10-13 01:46:38.965451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.697 qpair failed and we were unable to recover it. 00:35:53.697 [2024-10-13 01:46:38.965551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.697 [2024-10-13 01:46:38.965577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.697 qpair failed and we were unable to recover it. 00:35:53.697 [2024-10-13 01:46:38.965660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.697 [2024-10-13 01:46:38.965687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.697 qpair failed and we were unable to recover it. 00:35:53.697 [2024-10-13 01:46:38.965775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.697 [2024-10-13 01:46:38.965801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.697 qpair failed and we were unable to recover it. 00:35:53.697 [2024-10-13 01:46:38.965965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.697 [2024-10-13 01:46:38.965991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.697 qpair failed and we were unable to recover it. 00:35:53.697 [2024-10-13 01:46:38.966108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.697 [2024-10-13 01:46:38.966134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.697 qpair failed and we were unable to recover it. 00:35:53.697 [2024-10-13 01:46:38.966242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.697 [2024-10-13 01:46:38.966267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.697 qpair failed and we were unable to recover it. 
00:35:53.697 [2024-10-13 01:46:38.966378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.697 [2024-10-13 01:46:38.966404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.697 qpair failed and we were unable to recover it. 00:35:53.697 [2024-10-13 01:46:38.966544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.697 [2024-10-13 01:46:38.966572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.697 qpair failed and we were unable to recover it. 00:35:53.697 [2024-10-13 01:46:38.966669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.697 [2024-10-13 01:46:38.966695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.697 qpair failed and we were unable to recover it. 00:35:53.697 [2024-10-13 01:46:38.966785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.697 [2024-10-13 01:46:38.966811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.697 qpair failed and we were unable to recover it. 00:35:53.697 [2024-10-13 01:46:38.966900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.697 [2024-10-13 01:46:38.966928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.697 qpair failed and we were unable to recover it. 00:35:53.697 [2024-10-13 01:46:38.967073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.697 [2024-10-13 01:46:38.967099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.697 qpair failed and we were unable to recover it. 00:35:53.697 [2024-10-13 01:46:38.967228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.697 [2024-10-13 01:46:38.967256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.697 qpair failed and we were unable to recover it. 00:35:53.697 [2024-10-13 01:46:38.967386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.697 [2024-10-13 01:46:38.967421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.697 qpair failed and we were unable to recover it. 00:35:53.697 [2024-10-13 01:46:38.967530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.697 [2024-10-13 01:46:38.967559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.697 qpair failed and we were unable to recover it. 00:35:53.697 [2024-10-13 01:46:38.967679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.697 [2024-10-13 01:46:38.967724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.697 qpair failed and we were unable to recover it. 
00:35:53.697 [2024-10-13 01:46:38.967884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.697 [2024-10-13 01:46:38.967914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.697 qpair failed and we were unable to recover it. 00:35:53.697 [2024-10-13 01:46:38.968017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.697 [2024-10-13 01:46:38.968045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.697 qpair failed and we were unable to recover it. 00:35:53.697 [2024-10-13 01:46:38.968141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.697 [2024-10-13 01:46:38.968172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.697 qpair failed and we were unable to recover it. 00:35:53.697 [2024-10-13 01:46:38.968273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.698 [2024-10-13 01:46:38.968300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.698 qpair failed and we were unable to recover it. 00:35:53.698 [2024-10-13 01:46:38.968417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.698 [2024-10-13 01:46:38.968444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.698 qpair failed and we were unable to recover it. 00:35:53.698 [2024-10-13 01:46:38.968566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.698 [2024-10-13 01:46:38.968606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.698 qpair failed and we were unable to recover it. 00:35:53.698 [2024-10-13 01:46:38.968760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.698 [2024-10-13 01:46:38.968789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.698 qpair failed and we were unable to recover it. 00:35:53.698 [2024-10-13 01:46:38.968942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.698 [2024-10-13 01:46:38.968981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.698 qpair failed and we were unable to recover it. 00:35:53.698 [2024-10-13 01:46:38.969093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.698 [2024-10-13 01:46:38.969121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.698 qpair failed and we were unable to recover it. 00:35:53.698 [2024-10-13 01:46:38.969214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.698 [2024-10-13 01:46:38.969240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.698 qpair failed and we were unable to recover it. 
00:35:53.698 [2024-10-13 01:46:38.969335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.698 [2024-10-13 01:46:38.969361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.698 qpair failed and we were unable to recover it. 00:35:53.698 [2024-10-13 01:46:38.969460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.698 [2024-10-13 01:46:38.969499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.698 qpair failed and we were unable to recover it. 00:35:53.698 [2024-10-13 01:46:38.969620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.698 [2024-10-13 01:46:38.969646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.698 qpair failed and we were unable to recover it. 00:35:53.698 [2024-10-13 01:46:38.969740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.698 [2024-10-13 01:46:38.969776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.698 qpair failed and we were unable to recover it. 00:35:53.698 [2024-10-13 01:46:38.969910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.698 [2024-10-13 01:46:38.969939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.698 qpair failed and we were unable to recover it. 00:35:53.698 [2024-10-13 01:46:38.970058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.698 [2024-10-13 01:46:38.970086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.698 qpair failed and we were unable to recover it. 00:35:53.698 [2024-10-13 01:46:38.970188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.698 [2024-10-13 01:46:38.970216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.698 qpair failed and we were unable to recover it. 00:35:53.698 [2024-10-13 01:46:38.970360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.698 [2024-10-13 01:46:38.970387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.698 qpair failed and we were unable to recover it. 00:35:53.698 [2024-10-13 01:46:38.970504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.698 [2024-10-13 01:46:38.970554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.698 qpair failed and we were unable to recover it. 00:35:53.698 [2024-10-13 01:46:38.970655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.698 [2024-10-13 01:46:38.970682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.698 qpair failed and we were unable to recover it. 
00:35:53.698 [2024-10-13 01:46:38.970775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.698 [2024-10-13 01:46:38.970801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.698 qpair failed and we were unable to recover it. 00:35:53.698 [2024-10-13 01:46:38.970925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.698 [2024-10-13 01:46:38.970953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.698 qpair failed and we were unable to recover it. 00:35:53.698 [2024-10-13 01:46:38.971086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.698 [2024-10-13 01:46:38.971111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.698 qpair failed and we were unable to recover it. 00:35:53.698 [2024-10-13 01:46:38.971266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.698 [2024-10-13 01:46:38.971292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.698 qpair failed and we were unable to recover it. 00:35:53.698 [2024-10-13 01:46:38.971389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.698 [2024-10-13 01:46:38.971429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.698 qpair failed and we were unable to recover it. 00:35:53.698 [2024-10-13 01:46:38.971554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.698 [2024-10-13 01:46:38.971584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.698 qpair failed and we were unable to recover it. 00:35:53.698 [2024-10-13 01:46:38.971748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.698 [2024-10-13 01:46:38.971773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.698 qpair failed and we were unable to recover it. 00:35:53.698 [2024-10-13 01:46:38.971920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.698 [2024-10-13 01:46:38.971948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.698 qpair failed and we were unable to recover it. 00:35:53.698 [2024-10-13 01:46:38.972054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.698 [2024-10-13 01:46:38.972080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.698 qpair failed and we were unable to recover it. 00:35:53.698 [2024-10-13 01:46:38.972193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.698 [2024-10-13 01:46:38.972221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.698 qpair failed and we were unable to recover it. 
00:35:53.698 [2024-10-13 01:46:38.972344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.698 [2024-10-13 01:46:38.972371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.698 qpair failed and we were unable to recover it. 00:35:53.698 [2024-10-13 01:46:38.972455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.698 [2024-10-13 01:46:38.972493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.698 qpair failed and we were unable to recover it. 00:35:53.698 [2024-10-13 01:46:38.972593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.698 [2024-10-13 01:46:38.972635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.698 qpair failed and we were unable to recover it. 00:35:53.698 [2024-10-13 01:46:38.972743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.698 [2024-10-13 01:46:38.972802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.698 qpair failed and we were unable to recover it. 00:35:53.698 [2024-10-13 01:46:38.972947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.698 [2024-10-13 01:46:38.972975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.698 qpair failed and we were unable to recover it. 00:35:53.698 [2024-10-13 01:46:38.973100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.698 [2024-10-13 01:46:38.973126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.698 qpair failed and we were unable to recover it. 00:35:53.698 [2024-10-13 01:46:38.973207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.698 [2024-10-13 01:46:38.973233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.698 qpair failed and we were unable to recover it. 00:35:53.698 [2024-10-13 01:46:38.973340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.698 [2024-10-13 01:46:38.973366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.698 qpair failed and we were unable to recover it. 00:35:53.698 [2024-10-13 01:46:38.973454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.698 [2024-10-13 01:46:38.973497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.698 qpair failed and we were unable to recover it. 00:35:53.698 [2024-10-13 01:46:38.973590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.698 [2024-10-13 01:46:38.973616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.698 qpair failed and we were unable to recover it. 
00:35:53.698 [2024-10-13 01:46:38.973704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.698 [2024-10-13 01:46:38.973730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.698 qpair failed and we were unable to recover it. 00:35:53.698 [2024-10-13 01:46:38.973828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.698 [2024-10-13 01:46:38.973867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.698 qpair failed and we were unable to recover it. 00:35:53.698 [2024-10-13 01:46:38.973980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.698 [2024-10-13 01:46:38.974005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.698 qpair failed and we were unable to recover it. 00:35:53.698 [2024-10-13 01:46:38.974107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.698 [2024-10-13 01:46:38.974148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.698 qpair failed and we were unable to recover it. 00:35:53.698 [2024-10-13 01:46:38.974245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.698 [2024-10-13 01:46:38.974273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.698 qpair failed and we were unable to recover it. 00:35:53.698 [2024-10-13 01:46:38.974378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.698 [2024-10-13 01:46:38.974423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.698 qpair failed and we were unable to recover it. 00:35:53.698 [2024-10-13 01:46:38.974529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.698 [2024-10-13 01:46:38.974558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.698 qpair failed and we were unable to recover it. 00:35:53.698 [2024-10-13 01:46:38.974656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.698 [2024-10-13 01:46:38.974683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.698 qpair failed and we were unable to recover it. 00:35:53.698 [2024-10-13 01:46:38.974779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.698 [2024-10-13 01:46:38.974805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.698 qpair failed and we were unable to recover it. 00:35:53.698 [2024-10-13 01:46:38.974896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.698 [2024-10-13 01:46:38.974924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.698 qpair failed and we were unable to recover it. 
00:35:53.698 [2024-10-13 01:46:38.975040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.698 [2024-10-13 01:46:38.975067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.698 qpair failed and we were unable to recover it. 00:35:53.698 [2024-10-13 01:46:38.975192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.698 [2024-10-13 01:46:38.975219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.698 qpair failed and we were unable to recover it. 00:35:53.698 [2024-10-13 01:46:38.975303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.698 [2024-10-13 01:46:38.975330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.698 qpair failed and we were unable to recover it. 00:35:53.698 [2024-10-13 01:46:38.975416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.698 [2024-10-13 01:46:38.975442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.698 qpair failed and we were unable to recover it. 00:35:53.699 [2024-10-13 01:46:38.975548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.699 [2024-10-13 01:46:38.975575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.699 qpair failed and we were unable to recover it. 00:35:53.699 [2024-10-13 01:46:38.975691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.699 [2024-10-13 01:46:38.975718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.699 qpair failed and we were unable to recover it. 00:35:53.699 [2024-10-13 01:46:38.975808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.699 [2024-10-13 01:46:38.975834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.699 qpair failed and we were unable to recover it. 00:35:53.699 [2024-10-13 01:46:38.975916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.699 [2024-10-13 01:46:38.975943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.699 qpair failed and we were unable to recover it. 00:35:53.699 [2024-10-13 01:46:38.976025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.699 [2024-10-13 01:46:38.976051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.699 qpair failed and we were unable to recover it. 00:35:53.699 [2024-10-13 01:46:38.976173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.699 [2024-10-13 01:46:38.976202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.699 qpair failed and we were unable to recover it. 
00:35:53.699 [2024-10-13 01:46:38.976322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.699 [2024-10-13 01:46:38.976348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.699 qpair failed and we were unable to recover it. 00:35:53.699 [2024-10-13 01:46:38.976484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.699 [2024-10-13 01:46:38.976512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.699 qpair failed and we were unable to recover it. 00:35:53.699 [2024-10-13 01:46:38.976598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.699 [2024-10-13 01:46:38.976625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.699 qpair failed and we were unable to recover it. 00:35:53.699 [2024-10-13 01:46:38.976739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.699 [2024-10-13 01:46:38.976765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.699 qpair failed and we were unable to recover it. 00:35:53.699 [2024-10-13 01:46:38.976851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.699 [2024-10-13 01:46:38.976878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.699 qpair failed and we were unable to recover it. 00:35:53.699 [2024-10-13 01:46:38.977003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.699 [2024-10-13 01:46:38.977038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.699 qpair failed and we were unable to recover it. 00:35:53.699 [2024-10-13 01:46:38.977155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.699 [2024-10-13 01:46:38.977181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.699 qpair failed and we were unable to recover it. 00:35:53.699 [2024-10-13 01:46:38.977276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.699 [2024-10-13 01:46:38.977304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.699 qpair failed and we were unable to recover it. 00:35:53.699 [2024-10-13 01:46:38.977408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.699 [2024-10-13 01:46:38.977434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.699 qpair failed and we were unable to recover it. 00:35:53.699 [2024-10-13 01:46:38.977560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.699 [2024-10-13 01:46:38.977586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.699 qpair failed and we were unable to recover it. 
00:35:53.699 [2024-10-13 01:46:38.977672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.699 [2024-10-13 01:46:38.977698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.699 qpair failed and we were unable to recover it. 00:35:53.699 [2024-10-13 01:46:38.977806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.699 [2024-10-13 01:46:38.977834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.699 qpair failed and we were unable to recover it. 00:35:53.699 [2024-10-13 01:46:38.977966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.699 [2024-10-13 01:46:38.977997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.699 qpair failed and we were unable to recover it. 00:35:53.699 [2024-10-13 01:46:38.978096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.699 [2024-10-13 01:46:38.978127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.699 qpair failed and we were unable to recover it. 00:35:53.699 [2024-10-13 01:46:38.978229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.699 [2024-10-13 01:46:38.978258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.699 qpair failed and we were unable to recover it. 00:35:53.699 [2024-10-13 01:46:38.978347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.699 [2024-10-13 01:46:38.978391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.699 qpair failed and we were unable to recover it. 00:35:53.699 [2024-10-13 01:46:38.978538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.699 [2024-10-13 01:46:38.978565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.699 qpair failed and we were unable to recover it. 00:35:53.699 [2024-10-13 01:46:38.978659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.699 [2024-10-13 01:46:38.978704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.699 qpair failed and we were unable to recover it. 00:35:53.699 [2024-10-13 01:46:38.978812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.699 [2024-10-13 01:46:38.978841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.699 qpair failed and we were unable to recover it. 00:35:53.699 [2024-10-13 01:46:38.978942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.699 [2024-10-13 01:46:38.978972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.699 qpair failed and we were unable to recover it. 
00:35:53.699 [2024-10-13 01:46:38.979104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.699 [2024-10-13 01:46:38.979134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.699 qpair failed and we were unable to recover it. 00:35:53.699 [2024-10-13 01:46:38.979278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.699 [2024-10-13 01:46:38.979303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.699 qpair failed and we were unable to recover it. 00:35:53.699 [2024-10-13 01:46:38.979397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.699 [2024-10-13 01:46:38.979424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.699 qpair failed and we were unable to recover it. 00:35:53.699 [2024-10-13 01:46:38.979591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.699 [2024-10-13 01:46:38.979619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.699 qpair failed and we were unable to recover it. 00:35:53.699 [2024-10-13 01:46:38.979698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.699 [2024-10-13 01:46:38.979725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.699 qpair failed and we were unable to recover it. 00:35:53.699 [2024-10-13 01:46:38.979812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.699 [2024-10-13 01:46:38.979854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.699 qpair failed and we were unable to recover it. 00:35:53.699 [2024-10-13 01:46:38.979990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.699 [2024-10-13 01:46:38.980045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.699 qpair failed and we were unable to recover it. 00:35:53.699 [2024-10-13 01:46:38.980164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.699 [2024-10-13 01:46:38.980210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.699 qpair failed and we were unable to recover it. 00:35:53.699 [2024-10-13 01:46:38.980368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.699 [2024-10-13 01:46:38.980395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.699 qpair failed and we were unable to recover it. 00:35:53.699 [2024-10-13 01:46:38.980496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.699 [2024-10-13 01:46:38.980547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.699 qpair failed and we were unable to recover it. 
00:35:53.699 [2024-10-13 01:46:38.980673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.699 [2024-10-13 01:46:38.980717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.699 qpair failed and we were unable to recover it. 00:35:53.699 [2024-10-13 01:46:38.980880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.699 [2024-10-13 01:46:38.980931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.699 qpair failed and we were unable to recover it. 00:35:53.699 [2024-10-13 01:46:38.981068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.699 [2024-10-13 01:46:38.981116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.699 qpair failed and we were unable to recover it. 00:35:53.699 [2024-10-13 01:46:38.981230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.699 [2024-10-13 01:46:38.981263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.699 qpair failed and we were unable to recover it. 00:35:53.699 [2024-10-13 01:46:38.981376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.699 [2024-10-13 01:46:38.981431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.699 qpair failed and we were unable to recover it. 00:35:53.699 [2024-10-13 01:46:38.981560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.699 [2024-10-13 01:46:38.981589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.699 qpair failed and we were unable to recover it. 00:35:53.699 [2024-10-13 01:46:38.981722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.699 [2024-10-13 01:46:38.981772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.699 qpair failed and we were unable to recover it. 00:35:53.699 [2024-10-13 01:46:38.981935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.699 [2024-10-13 01:46:38.981979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.699 qpair failed and we were unable to recover it. 00:35:53.699 [2024-10-13 01:46:38.982117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.699 [2024-10-13 01:46:38.982163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.699 qpair failed and we were unable to recover it. 00:35:53.699 [2024-10-13 01:46:38.982287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.699 [2024-10-13 01:46:38.982313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.699 qpair failed and we were unable to recover it. 
00:35:53.699 [2024-10-13 01:46:38.982401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.699 [2024-10-13 01:46:38.982427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.699 qpair failed and we were unable to recover it. 00:35:53.699 [2024-10-13 01:46:38.982546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.699 [2024-10-13 01:46:38.982577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.699 qpair failed and we were unable to recover it. 00:35:53.699 [2024-10-13 01:46:38.982712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.699 [2024-10-13 01:46:38.982737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.699 qpair failed and we were unable to recover it. 00:35:53.699 [2024-10-13 01:46:38.982839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.699 [2024-10-13 01:46:38.982865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.699 qpair failed and we were unable to recover it. 00:35:53.699 [2024-10-13 01:46:38.983009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.700 [2024-10-13 01:46:38.983035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-10-13 01:46:38.983117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.700 [2024-10-13 01:46:38.983142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-10-13 01:46:38.983221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.700 [2024-10-13 01:46:38.983247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-10-13 01:46:38.983339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.700 [2024-10-13 01:46:38.983365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-10-13 01:46:38.983502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.700 [2024-10-13 01:46:38.983529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-10-13 01:46:38.983624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.700 [2024-10-13 01:46:38.983650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.700 qpair failed and we were unable to recover it. 
00:35:53.700 [2024-10-13 01:46:38.983731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.700 [2024-10-13 01:46:38.983757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-10-13 01:46:38.983850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.700 [2024-10-13 01:46:38.983876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-10-13 01:46:38.983980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.700 [2024-10-13 01:46:38.984026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-10-13 01:46:38.984141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.700 [2024-10-13 01:46:38.984168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-10-13 01:46:38.984262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.700 [2024-10-13 01:46:38.984287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-10-13 01:46:38.984385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.700 [2024-10-13 01:46:38.984411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-10-13 01:46:38.984547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.700 [2024-10-13 01:46:38.984573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-10-13 01:46:38.984682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.700 [2024-10-13 01:46:38.984712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-10-13 01:46:38.984813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.700 [2024-10-13 01:46:38.984841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-10-13 01:46:38.984943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.700 [2024-10-13 01:46:38.984975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.700 qpair failed and we were unable to recover it. 
00:35:53.700 [2024-10-13 01:46:38.985115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.700 [2024-10-13 01:46:38.985143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-10-13 01:46:38.985251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.700 [2024-10-13 01:46:38.985284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-10-13 01:46:38.985407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.700 [2024-10-13 01:46:38.985435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-10-13 01:46:38.985613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.700 [2024-10-13 01:46:38.985639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-10-13 01:46:38.985746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.700 [2024-10-13 01:46:38.985774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-10-13 01:46:38.985898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.700 [2024-10-13 01:46:38.985926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-10-13 01:46:38.986056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.700 [2024-10-13 01:46:38.986089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-10-13 01:46:38.986222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.700 [2024-10-13 01:46:38.986252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-10-13 01:46:38.986370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.700 [2024-10-13 01:46:38.986399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-10-13 01:46:38.986540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.700 [2024-10-13 01:46:38.986566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.700 qpair failed and we were unable to recover it. 
00:35:53.700 [2024-10-13 01:46:38.986694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.700 [2024-10-13 01:46:38.986723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-10-13 01:46:38.986829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.700 [2024-10-13 01:46:38.986857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-10-13 01:46:38.986989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.700 [2024-10-13 01:46:38.987018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-10-13 01:46:38.987184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.700 [2024-10-13 01:46:38.987213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-10-13 01:46:38.987316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.700 [2024-10-13 01:46:38.987357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-10-13 01:46:38.987446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.700 [2024-10-13 01:46:38.987482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-10-13 01:46:38.987570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.700 [2024-10-13 01:46:38.987595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-10-13 01:46:38.987689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.700 [2024-10-13 01:46:38.987733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-10-13 01:46:38.987812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.700 [2024-10-13 01:46:38.987841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-10-13 01:46:38.987948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.700 [2024-10-13 01:46:38.987982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.700 qpair failed and we were unable to recover it. 
00:35:53.700 [2024-10-13 01:46:38.988072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.700 [2024-10-13 01:46:38.988100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-10-13 01:46:38.988188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.700 [2024-10-13 01:46:38.988216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-10-13 01:46:38.988373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.700 [2024-10-13 01:46:38.988401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-10-13 01:46:38.988543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.700 [2024-10-13 01:46:38.988570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-10-13 01:46:38.988654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.700 [2024-10-13 01:46:38.988680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-10-13 01:46:38.988785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.700 [2024-10-13 01:46:38.988822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-10-13 01:46:38.988952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.700 [2024-10-13 01:46:38.988981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-10-13 01:46:38.989102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.700 [2024-10-13 01:46:38.989136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-10-13 01:46:38.989244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.700 [2024-10-13 01:46:38.989268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-10-13 01:46:38.989395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.700 [2024-10-13 01:46:38.989422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.700 qpair failed and we were unable to recover it. 
00:35:53.700 [2024-10-13 01:46:38.989509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.700 [2024-10-13 01:46:38.989535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-10-13 01:46:38.989613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.700 [2024-10-13 01:46:38.989655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-10-13 01:46:38.989825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.700 [2024-10-13 01:46:38.989854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-10-13 01:46:38.990010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.700 [2024-10-13 01:46:38.990039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.700 [2024-10-13 01:46:38.990134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.700 [2024-10-13 01:46:38.990177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.700 qpair failed and we were unable to recover it. 00:35:53.701 [2024-10-13 01:46:38.990302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.701 [2024-10-13 01:46:38.990328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.701 qpair failed and we were unable to recover it. 00:35:53.701 [2024-10-13 01:46:38.990421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.701 [2024-10-13 01:46:38.990447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.701 qpair failed and we were unable to recover it. 00:35:53.701 [2024-10-13 01:46:38.990544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.701 [2024-10-13 01:46:38.990570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.701 qpair failed and we were unable to recover it. 00:35:53.701 [2024-10-13 01:46:38.990658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.701 [2024-10-13 01:46:38.990683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.701 qpair failed and we were unable to recover it. 00:35:53.701 [2024-10-13 01:46:38.990770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.701 [2024-10-13 01:46:38.990795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.701 qpair failed and we were unable to recover it. 
00:35:53.701 [2024-10-13 01:46:38.990915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.701 [2024-10-13 01:46:38.990940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.701 qpair failed and we were unable to recover it. 00:35:53.701 [2024-10-13 01:46:38.991055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.701 [2024-10-13 01:46:38.991081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.701 qpair failed and we were unable to recover it. 00:35:53.701 [2024-10-13 01:46:38.991227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.701 [2024-10-13 01:46:38.991254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.701 qpair failed and we were unable to recover it. 00:35:53.701 [2024-10-13 01:46:38.991334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.701 [2024-10-13 01:46:38.991359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.701 qpair failed and we were unable to recover it. 00:35:53.701 [2024-10-13 01:46:38.991449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.701 [2024-10-13 01:46:38.991504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.701 qpair failed and we were unable to recover it. 00:35:53.701 [2024-10-13 01:46:38.991603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.701 [2024-10-13 01:46:38.991632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.701 qpair failed and we were unable to recover it. 00:35:53.701 [2024-10-13 01:46:38.991715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.701 [2024-10-13 01:46:38.991747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.701 qpair failed and we were unable to recover it. 00:35:53.701 [2024-10-13 01:46:38.991868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.701 [2024-10-13 01:46:38.991895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.701 qpair failed and we were unable to recover it. 00:35:53.701 [2024-10-13 01:46:38.991985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.701 [2024-10-13 01:46:38.992012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.701 qpair failed and we were unable to recover it. 00:35:53.701 [2024-10-13 01:46:38.992160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.701 [2024-10-13 01:46:38.992186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.701 qpair failed and we were unable to recover it. 
00:35:53.701 [2024-10-13 01:46:38.992300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.701 [2024-10-13 01:46:38.992326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.701 qpair failed and we were unable to recover it. 00:35:53.701 [2024-10-13 01:46:38.992437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.701 [2024-10-13 01:46:38.992463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.701 qpair failed and we were unable to recover it. 00:35:53.701 [2024-10-13 01:46:38.992569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.701 [2024-10-13 01:46:38.992597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.701 qpair failed and we were unable to recover it. 00:35:53.701 [2024-10-13 01:46:38.992678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.701 [2024-10-13 01:46:38.992704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.701 qpair failed and we were unable to recover it. 00:35:53.701 [2024-10-13 01:46:38.992831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.701 [2024-10-13 01:46:38.992857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.701 qpair failed and we were unable to recover it. 00:35:53.701 [2024-10-13 01:46:38.992951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.701 [2024-10-13 01:46:38.992977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.701 qpair failed and we were unable to recover it. 00:35:53.701 [2024-10-13 01:46:38.993096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.701 [2024-10-13 01:46:38.993122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.701 qpair failed and we were unable to recover it. 00:35:53.701 [2024-10-13 01:46:38.993222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.701 [2024-10-13 01:46:38.993249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.701 qpair failed and we were unable to recover it. 00:35:53.701 [2024-10-13 01:46:38.993332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.701 [2024-10-13 01:46:38.993360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.701 qpair failed and we were unable to recover it. 00:35:53.701 [2024-10-13 01:46:38.993491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.701 [2024-10-13 01:46:38.993519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.701 qpair failed and we were unable to recover it. 
00:35:53.701 [2024-10-13 01:46:38.993640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.701 [2024-10-13 01:46:38.993667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.701 qpair failed and we were unable to recover it. 00:35:53.701 [2024-10-13 01:46:38.993757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.701 [2024-10-13 01:46:38.993785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.701 qpair failed and we were unable to recover it. 00:35:53.701 [2024-10-13 01:46:38.993934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.701 [2024-10-13 01:46:38.993960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.701 qpair failed and we were unable to recover it. 00:35:53.701 [2024-10-13 01:46:38.994061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.701 [2024-10-13 01:46:38.994088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.701 qpair failed and we were unable to recover it. 00:35:53.701 [2024-10-13 01:46:38.994213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.701 [2024-10-13 01:46:38.994250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.701 qpair failed and we were unable to recover it. 00:35:53.701 [2024-10-13 01:46:38.994368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.701 [2024-10-13 01:46:38.994395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.701 qpair failed and we were unable to recover it. 00:35:53.701 [2024-10-13 01:46:38.994532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.701 [2024-10-13 01:46:38.994561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.701 qpair failed and we were unable to recover it. 00:35:53.701 [2024-10-13 01:46:38.994669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.701 [2024-10-13 01:46:38.994714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.701 qpair failed and we were unable to recover it. 00:35:53.701 [2024-10-13 01:46:38.994868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.701 [2024-10-13 01:46:38.994926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.701 qpair failed and we were unable to recover it. 00:35:53.701 [2024-10-13 01:46:38.995067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.701 [2024-10-13 01:46:38.995094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.701 qpair failed and we were unable to recover it. 
00:35:53.701 [2024-10-13 01:46:38.995234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.701 [2024-10-13 01:46:38.995260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.701 qpair failed and we were unable to recover it. 00:35:53.701 [2024-10-13 01:46:38.995396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.701 [2024-10-13 01:46:38.995425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.701 qpair failed and we were unable to recover it. 00:35:53.701 [2024-10-13 01:46:38.995567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.701 [2024-10-13 01:46:38.995599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.701 qpair failed and we were unable to recover it. 00:35:53.701 [2024-10-13 01:46:38.995703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.701 [2024-10-13 01:46:38.995754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.701 qpair failed and we were unable to recover it. 00:35:53.701 [2024-10-13 01:46:38.995857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.701 [2024-10-13 01:46:38.995899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.701 qpair failed and we were unable to recover it. 00:35:53.701 [2024-10-13 01:46:38.996041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.701 [2024-10-13 01:46:38.996067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.701 qpair failed and we were unable to recover it. 00:35:53.701 [2024-10-13 01:46:38.996183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.701 [2024-10-13 01:46:38.996209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.701 qpair failed and we were unable to recover it. 00:35:53.701 [2024-10-13 01:46:38.996321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.701 [2024-10-13 01:46:38.996346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.701 qpair failed and we were unable to recover it. 00:35:53.701 [2024-10-13 01:46:38.996491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.701 [2024-10-13 01:46:38.996522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.701 qpair failed and we were unable to recover it. 00:35:53.701 [2024-10-13 01:46:38.996633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.701 [2024-10-13 01:46:38.996659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.701 qpair failed and we were unable to recover it. 
00:35:53.701 [2024-10-13 01:46:38.996747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.701 [2024-10-13 01:46:38.996775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.701 qpair failed and we were unable to recover it. 00:35:53.701 [2024-10-13 01:46:38.996873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:38.996899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.702 qpair failed and we were unable to recover it. 00:35:53.702 [2024-10-13 01:46:38.997016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:38.997051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.702 qpair failed and we were unable to recover it. 00:35:53.702 [2024-10-13 01:46:38.997156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:38.997180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.702 qpair failed and we were unable to recover it. 00:35:53.702 [2024-10-13 01:46:38.997298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:38.997324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.702 qpair failed and we were unable to recover it. 00:35:53.702 [2024-10-13 01:46:38.997432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:38.997457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.702 qpair failed and we were unable to recover it. 00:35:53.702 [2024-10-13 01:46:38.997562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:38.997589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.702 qpair failed and we were unable to recover it. 00:35:53.702 [2024-10-13 01:46:38.997679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:38.997705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.702 qpair failed and we were unable to recover it. 00:35:53.702 [2024-10-13 01:46:38.997792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:38.997818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.702 qpair failed and we were unable to recover it. 00:35:53.702 [2024-10-13 01:46:38.997903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:38.997929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.702 qpair failed and we were unable to recover it. 
00:35:53.702 [2024-10-13 01:46:38.998075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:38.998110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.702 qpair failed and we were unable to recover it. 00:35:53.702 [2024-10-13 01:46:38.998231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:38.998256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.702 qpair failed and we were unable to recover it. 00:35:53.702 [2024-10-13 01:46:38.998345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:38.998370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.702 qpair failed and we were unable to recover it. 00:35:53.702 [2024-10-13 01:46:38.998462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:38.998495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.702 qpair failed and we were unable to recover it. 00:35:53.702 [2024-10-13 01:46:38.998602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:38.998627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.702 qpair failed and we were unable to recover it. 00:35:53.702 [2024-10-13 01:46:38.998715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:38.998740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.702 qpair failed and we were unable to recover it. 00:35:53.702 [2024-10-13 01:46:38.998824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:38.998849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.702 qpair failed and we were unable to recover it. 00:35:53.702 [2024-10-13 01:46:38.998935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:38.998961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.702 qpair failed and we were unable to recover it. 00:35:53.702 [2024-10-13 01:46:38.999034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:38.999077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.702 qpair failed and we were unable to recover it. 00:35:53.702 [2024-10-13 01:46:38.999232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:38.999257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.702 qpair failed and we were unable to recover it. 
00:35:53.702 [2024-10-13 01:46:38.999402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:38.999427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.702 qpair failed and we were unable to recover it. 00:35:53.702 [2024-10-13 01:46:38.999612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:38.999637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.702 qpair failed and we were unable to recover it. 00:35:53.702 [2024-10-13 01:46:38.999752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:38.999787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.702 qpair failed and we were unable to recover it. 00:35:53.702 [2024-10-13 01:46:38.999872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:38.999898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.702 qpair failed and we were unable to recover it. 00:35:53.702 [2024-10-13 01:46:39.000008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:39.000041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.702 qpair failed and we were unable to recover it. 00:35:53.702 [2024-10-13 01:46:39.000157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:39.000183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.702 qpair failed and we were unable to recover it. 00:35:53.702 [2024-10-13 01:46:39.000270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:39.000297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.702 qpair failed and we were unable to recover it. 00:35:53.702 [2024-10-13 01:46:39.000497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:39.000524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.702 qpair failed and we were unable to recover it. 00:35:53.702 [2024-10-13 01:46:39.000629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:39.000655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.702 qpair failed and we were unable to recover it. 00:35:53.702 [2024-10-13 01:46:39.000844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:39.000877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.702 qpair failed and we were unable to recover it. 
00:35:53.702 [2024-10-13 01:46:39.000982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:39.001012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.702 qpair failed and we were unable to recover it. 00:35:53.702 [2024-10-13 01:46:39.001193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:39.001222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.702 qpair failed and we were unable to recover it. 00:35:53.702 [2024-10-13 01:46:39.001334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:39.001362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.702 qpair failed and we were unable to recover it. 00:35:53.702 [2024-10-13 01:46:39.001458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:39.001511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.702 qpair failed and we were unable to recover it. 00:35:53.702 [2024-10-13 01:46:39.001627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:39.001653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.702 qpair failed and we were unable to recover it. 00:35:53.702 [2024-10-13 01:46:39.001799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:39.001829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.702 qpair failed and we were unable to recover it. 00:35:53.702 [2024-10-13 01:46:39.001964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:39.001994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.702 qpair failed and we were unable to recover it. 00:35:53.702 [2024-10-13 01:46:39.002149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:39.002177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.702 qpair failed and we were unable to recover it. 00:35:53.702 [2024-10-13 01:46:39.002285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:39.002310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.702 qpair failed and we were unable to recover it. 00:35:53.702 [2024-10-13 01:46:39.002442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:39.002487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.702 qpair failed and we were unable to recover it. 
00:35:53.702 [2024-10-13 01:46:39.002608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:39.002634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.702 qpair failed and we were unable to recover it. 00:35:53.702 [2024-10-13 01:46:39.002739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:39.002765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.702 qpair failed and we were unable to recover it. 00:35:53.702 [2024-10-13 01:46:39.002894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:39.002919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.702 qpair failed and we were unable to recover it. 00:35:53.702 [2024-10-13 01:46:39.003055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:39.003083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.702 qpair failed and we were unable to recover it. 00:35:53.702 [2024-10-13 01:46:39.003198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:39.003223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.702 qpair failed and we were unable to recover it. 00:35:53.702 [2024-10-13 01:46:39.003318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:39.003344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.702 qpair failed and we were unable to recover it. 00:35:53.702 [2024-10-13 01:46:39.003480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:39.003517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.702 qpair failed and we were unable to recover it. 00:35:53.702 [2024-10-13 01:46:39.003664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:39.003705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.702 qpair failed and we were unable to recover it. 00:35:53.702 [2024-10-13 01:46:39.003855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:39.003884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.702 qpair failed and we were unable to recover it. 00:35:53.702 [2024-10-13 01:46:39.004003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:39.004031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.702 qpair failed and we were unable to recover it. 
00:35:53.702 [2024-10-13 01:46:39.004169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:39.004196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.702 qpair failed and we were unable to recover it. 00:35:53.702 [2024-10-13 01:46:39.004292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:39.004319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.702 qpair failed and we were unable to recover it. 00:35:53.702 [2024-10-13 01:46:39.004417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:39.004444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.702 qpair failed and we were unable to recover it. 00:35:53.702 [2024-10-13 01:46:39.004568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.702 [2024-10-13 01:46:39.004595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.703 qpair failed and we were unable to recover it. 00:35:53.703 [2024-10-13 01:46:39.004713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.703 [2024-10-13 01:46:39.004739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.703 qpair failed and we were unable to recover it. 00:35:53.703 [2024-10-13 01:46:39.004828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.703 [2024-10-13 01:46:39.004855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.703 qpair failed and we were unable to recover it. 00:35:53.703 [2024-10-13 01:46:39.004964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.703 [2024-10-13 01:46:39.004991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.703 qpair failed and we were unable to recover it. 00:35:53.703 [2024-10-13 01:46:39.005085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.703 [2024-10-13 01:46:39.005115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.703 qpair failed and we were unable to recover it. 00:35:53.703 [2024-10-13 01:46:39.005241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.703 [2024-10-13 01:46:39.005267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.703 qpair failed and we were unable to recover it. 00:35:53.703 [2024-10-13 01:46:39.005395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.703 [2024-10-13 01:46:39.005421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.703 qpair failed and we were unable to recover it. 
00:35:53.703 [2024-10-13 01:46:39.005579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.703 [2024-10-13 01:46:39.005610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.703 qpair failed and we were unable to recover it. 00:35:53.703 [2024-10-13 01:46:39.005727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.703 [2024-10-13 01:46:39.005753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.703 qpair failed and we were unable to recover it. 00:35:53.703 [2024-10-13 01:46:39.005841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.703 [2024-10-13 01:46:39.005867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.703 qpair failed and we were unable to recover it. 00:35:53.703 [2024-10-13 01:46:39.005973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.703 [2024-10-13 01:46:39.006001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.703 qpair failed and we were unable to recover it. 00:35:53.703 [2024-10-13 01:46:39.006165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.703 [2024-10-13 01:46:39.006193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.703 qpair failed and we were unable to recover it. 00:35:53.703 [2024-10-13 01:46:39.006294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.703 [2024-10-13 01:46:39.006322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.703 qpair failed and we were unable to recover it. 00:35:53.703 [2024-10-13 01:46:39.006461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.703 [2024-10-13 01:46:39.006498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.703 qpair failed and we were unable to recover it. 00:35:53.703 [2024-10-13 01:46:39.006595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.703 [2024-10-13 01:46:39.006621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.703 qpair failed and we were unable to recover it. 00:35:53.703 [2024-10-13 01:46:39.006712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.703 [2024-10-13 01:46:39.006740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.703 qpair failed and we were unable to recover it. 00:35:53.703 [2024-10-13 01:46:39.006930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.703 [2024-10-13 01:46:39.006973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.703 qpair failed and we were unable to recover it. 
00:35:53.703 [2024-10-13 01:46:39.007103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.703 [2024-10-13 01:46:39.007132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.703 qpair failed and we were unable to recover it. 00:35:53.703 [2024-10-13 01:46:39.007220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.703 [2024-10-13 01:46:39.007248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.703 qpair failed and we were unable to recover it. 00:35:53.703 [2024-10-13 01:46:39.007416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.703 [2024-10-13 01:46:39.007444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.703 qpair failed and we were unable to recover it. 00:35:53.703 [2024-10-13 01:46:39.007576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.703 [2024-10-13 01:46:39.007604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.703 qpair failed and we were unable to recover it. 00:35:53.703 [2024-10-13 01:46:39.007726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.703 [2024-10-13 01:46:39.007753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.703 qpair failed and we were unable to recover it. 00:35:53.703 [2024-10-13 01:46:39.007926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.703 [2024-10-13 01:46:39.007970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.703 qpair failed and we were unable to recover it. 00:35:53.703 [2024-10-13 01:46:39.008110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.703 [2024-10-13 01:46:39.008155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.703 qpair failed and we were unable to recover it. 00:35:53.703 [2024-10-13 01:46:39.008251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.703 [2024-10-13 01:46:39.008278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.703 qpair failed and we were unable to recover it. 00:35:53.703 [2024-10-13 01:46:39.008405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.703 [2024-10-13 01:46:39.008433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.703 qpair failed and we were unable to recover it. 00:35:53.703 [2024-10-13 01:46:39.008583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.703 [2024-10-13 01:46:39.008610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.703 qpair failed and we were unable to recover it. 
00:35:53.703 [2024-10-13 01:46:39.008723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.703 [2024-10-13 01:46:39.008748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.703 qpair failed and we were unable to recover it. 00:35:53.703 [2024-10-13 01:46:39.008841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.703 [2024-10-13 01:46:39.008866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.703 qpair failed and we were unable to recover it. 00:35:53.703 [2024-10-13 01:46:39.008965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.703 [2024-10-13 01:46:39.009010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.703 qpair failed and we were unable to recover it. 00:35:53.703 [2024-10-13 01:46:39.009158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.703 [2024-10-13 01:46:39.009190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.703 qpair failed and we were unable to recover it. 00:35:53.703 [2024-10-13 01:46:39.009351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.703 [2024-10-13 01:46:39.009377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.703 qpair failed and we were unable to recover it. 00:35:53.703 [2024-10-13 01:46:39.009482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.703 [2024-10-13 01:46:39.009520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.703 qpair failed and we were unable to recover it. 00:35:53.703 [2024-10-13 01:46:39.009617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.703 [2024-10-13 01:46:39.009646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.703 qpair failed and we were unable to recover it. 00:35:53.703 [2024-10-13 01:46:39.009796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.703 [2024-10-13 01:46:39.009828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.703 qpair failed and we were unable to recover it. 00:35:53.703 [2024-10-13 01:46:39.009916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.703 [2024-10-13 01:46:39.009945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.703 qpair failed and we were unable to recover it. 00:35:53.703 [2024-10-13 01:46:39.010037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.703 [2024-10-13 01:46:39.010065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.703 qpair failed and we were unable to recover it. 
00:35:53.703 [2024-10-13 01:46:39.010193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.703 [2024-10-13 01:46:39.010219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.703 qpair failed and we were unable to recover it. 00:35:53.703 [2024-10-13 01:46:39.010345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.703 [2024-10-13 01:46:39.010384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.703 qpair failed and we were unable to recover it. 00:35:53.703 [2024-10-13 01:46:39.010494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.703 [2024-10-13 01:46:39.010533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.703 qpair failed and we were unable to recover it. 00:35:53.703 [2024-10-13 01:46:39.010626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.703 [2024-10-13 01:46:39.010652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.703 qpair failed and we were unable to recover it. 00:35:53.703 [2024-10-13 01:46:39.010733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.703 [2024-10-13 01:46:39.010760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.703 qpair failed and we were unable to recover it. 00:35:53.703 [2024-10-13 01:46:39.010872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.703 [2024-10-13 01:46:39.010898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.703 qpair failed and we were unable to recover it. 00:35:53.703 [2024-10-13 01:46:39.011025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.703 [2024-10-13 01:46:39.011052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.703 qpair failed and we were unable to recover it. 00:35:53.703 [2024-10-13 01:46:39.011140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.703 [2024-10-13 01:46:39.011168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.703 qpair failed and we were unable to recover it. 00:35:53.703 [2024-10-13 01:46:39.011311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.703 [2024-10-13 01:46:39.011338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.703 qpair failed and we were unable to recover it. 00:35:53.703 [2024-10-13 01:46:39.011422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.703 [2024-10-13 01:46:39.011448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.703 qpair failed and we were unable to recover it. 
00:35:53.703 [2024-10-13 01:46:39.011551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.703 [2024-10-13 01:46:39.011580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.703 qpair failed and we were unable to recover it. 00:35:53.703 [2024-10-13 01:46:39.011706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.703 [2024-10-13 01:46:39.011732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.704 qpair failed and we were unable to recover it. 00:35:53.704 [2024-10-13 01:46:39.011883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.704 [2024-10-13 01:46:39.011909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.704 qpair failed and we were unable to recover it. 00:35:53.704 [2024-10-13 01:46:39.011996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.704 [2024-10-13 01:46:39.012022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.704 qpair failed and we were unable to recover it. 00:35:53.704 [2024-10-13 01:46:39.012112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.704 [2024-10-13 01:46:39.012138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.704 qpair failed and we were unable to recover it. 00:35:53.704 [2024-10-13 01:46:39.012231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.704 [2024-10-13 01:46:39.012259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.704 qpair failed and we were unable to recover it. 00:35:53.704 [2024-10-13 01:46:39.012416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.704 [2024-10-13 01:46:39.012443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.704 qpair failed and we were unable to recover it. 00:35:53.704 [2024-10-13 01:46:39.012526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.704 [2024-10-13 01:46:39.012552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.704 qpair failed and we were unable to recover it. 00:35:53.704 [2024-10-13 01:46:39.012662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.704 [2024-10-13 01:46:39.012687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.704 qpair failed and we were unable to recover it. 00:35:53.704 [2024-10-13 01:46:39.012779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.704 [2024-10-13 01:46:39.012803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.704 qpair failed and we were unable to recover it. 
00:35:53.704 [2024-10-13 01:46:39.012885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.704 [2024-10-13 01:46:39.012912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.704 qpair failed and we were unable to recover it. 00:35:53.704 [2024-10-13 01:46:39.012998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.704 [2024-10-13 01:46:39.013027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.704 qpair failed and we were unable to recover it. 00:35:53.704 [2024-10-13 01:46:39.013152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.704 [2024-10-13 01:46:39.013178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.704 qpair failed and we were unable to recover it. 00:35:53.704 [2024-10-13 01:46:39.013271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.704 [2024-10-13 01:46:39.013297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.704 qpair failed and we were unable to recover it. 00:35:53.704 [2024-10-13 01:46:39.013427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.704 [2024-10-13 01:46:39.013454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.704 qpair failed and we were unable to recover it. 00:35:53.704 [2024-10-13 01:46:39.013579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.704 [2024-10-13 01:46:39.013608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.704 qpair failed and we were unable to recover it. 00:35:53.704 [2024-10-13 01:46:39.013735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.704 [2024-10-13 01:46:39.013760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.704 qpair failed and we were unable to recover it. 00:35:53.704 [2024-10-13 01:46:39.013848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.704 [2024-10-13 01:46:39.013874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.704 qpair failed and we were unable to recover it. 00:35:53.704 [2024-10-13 01:46:39.013997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.704 [2024-10-13 01:46:39.014021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.704 qpair failed and we were unable to recover it. 00:35:53.704 [2024-10-13 01:46:39.014106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.704 [2024-10-13 01:46:39.014132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.704 qpair failed and we were unable to recover it. 
00:35:53.704 [2024-10-13 01:46:39.014257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.704 [2024-10-13 01:46:39.014282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.704 qpair failed and we were unable to recover it. 00:35:53.704 [2024-10-13 01:46:39.014375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.704 [2024-10-13 01:46:39.014401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.704 qpair failed and we were unable to recover it. 00:35:53.704 [2024-10-13 01:46:39.014506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.704 [2024-10-13 01:46:39.014532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.704 qpair failed and we were unable to recover it. 00:35:53.704 [2024-10-13 01:46:39.014625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.704 [2024-10-13 01:46:39.014650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.704 qpair failed and we were unable to recover it. 00:35:53.704 [2024-10-13 01:46:39.014754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.704 [2024-10-13 01:46:39.014779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.704 qpair failed and we were unable to recover it. 00:35:53.704 [2024-10-13 01:46:39.014858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.704 [2024-10-13 01:46:39.014883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.704 qpair failed and we were unable to recover it. 00:35:53.704 [2024-10-13 01:46:39.014978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.704 [2024-10-13 01:46:39.015017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.704 qpair failed and we were unable to recover it. 00:35:53.704 [2024-10-13 01:46:39.015118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.704 [2024-10-13 01:46:39.015148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.704 qpair failed and we were unable to recover it. 00:35:53.704 [2024-10-13 01:46:39.015290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.704 [2024-10-13 01:46:39.015329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.704 qpair failed and we were unable to recover it. 00:35:53.704 [2024-10-13 01:46:39.015482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.704 [2024-10-13 01:46:39.015510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.704 qpair failed and we were unable to recover it. 
00:35:53.704 [2024-10-13 01:46:39.015605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.704 [2024-10-13 01:46:39.015631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.704 qpair failed and we were unable to recover it. 00:35:53.704 [2024-10-13 01:46:39.015753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.704 [2024-10-13 01:46:39.015787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.704 qpair failed and we were unable to recover it. 00:35:53.704 [2024-10-13 01:46:39.015910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.704 [2024-10-13 01:46:39.015937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.704 qpair failed and we were unable to recover it. 00:35:53.704 [2024-10-13 01:46:39.016058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.704 [2024-10-13 01:46:39.016083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.704 qpair failed and we were unable to recover it. 00:35:53.704 [2024-10-13 01:46:39.016172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.704 [2024-10-13 01:46:39.016198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.704 qpair failed and we were unable to recover it. 00:35:53.704 [2024-10-13 01:46:39.016321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.704 [2024-10-13 01:46:39.016348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.704 qpair failed and we were unable to recover it. 00:35:53.704 [2024-10-13 01:46:39.016465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.704 [2024-10-13 01:46:39.016495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.704 qpair failed and we were unable to recover it. 00:35:53.704 [2024-10-13 01:46:39.016583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.704 [2024-10-13 01:46:39.016609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.704 qpair failed and we were unable to recover it. 00:35:53.704 [2024-10-13 01:46:39.016701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.704 [2024-10-13 01:46:39.016726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.704 qpair failed and we were unable to recover it. 00:35:53.704 [2024-10-13 01:46:39.016818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.704 [2024-10-13 01:46:39.016844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.704 qpair failed and we were unable to recover it. 
00:35:53.704 [2024-10-13 01:46:39.016944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.704 [2024-10-13 01:46:39.016970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.704 qpair failed and we were unable to recover it. 00:35:53.704 [2024-10-13 01:46:39.017119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.704 [2024-10-13 01:46:39.017147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.704 qpair failed and we were unable to recover it. 00:35:53.704 [2024-10-13 01:46:39.017272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.704 [2024-10-13 01:46:39.017298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.704 qpair failed and we were unable to recover it. 00:35:53.704 [2024-10-13 01:46:39.017416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.704 [2024-10-13 01:46:39.017442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.704 qpair failed and we were unable to recover it. 00:35:53.704 [2024-10-13 01:46:39.017551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.704 [2024-10-13 01:46:39.017578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.704 qpair failed and we were unable to recover it. 00:35:53.704 [2024-10-13 01:46:39.017700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.705 [2024-10-13 01:46:39.017727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.705 qpair failed and we were unable to recover it. 00:35:53.705 [2024-10-13 01:46:39.017828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.705 [2024-10-13 01:46:39.017854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.705 qpair failed and we were unable to recover it. 00:35:53.705 [2024-10-13 01:46:39.017978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.705 [2024-10-13 01:46:39.018004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.705 qpair failed and we were unable to recover it. 00:35:53.705 [2024-10-13 01:46:39.018145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.705 [2024-10-13 01:46:39.018171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.705 qpair failed and we were unable to recover it. 00:35:53.705 [2024-10-13 01:46:39.018268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.705 [2024-10-13 01:46:39.018293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.705 qpair failed and we were unable to recover it. 
00:35:53.705 [2024-10-13 01:46:39.018378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.705 [2024-10-13 01:46:39.018404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.705 qpair failed and we were unable to recover it. 00:35:53.705 [2024-10-13 01:46:39.018560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.705 [2024-10-13 01:46:39.018586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.705 qpair failed and we were unable to recover it. 00:35:53.705 [2024-10-13 01:46:39.018674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.705 [2024-10-13 01:46:39.018700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.705 qpair failed and we were unable to recover it. 00:35:53.705 [2024-10-13 01:46:39.018825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.705 [2024-10-13 01:46:39.018851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.705 qpair failed and we were unable to recover it. 00:35:53.705 [2024-10-13 01:46:39.018937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.705 [2024-10-13 01:46:39.018963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.705 qpair failed and we were unable to recover it. 00:35:53.705 [2024-10-13 01:46:39.019117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.705 [2024-10-13 01:46:39.019143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.705 qpair failed and we were unable to recover it. 00:35:53.705 [2024-10-13 01:46:39.019257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.705 [2024-10-13 01:46:39.019283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.705 qpair failed and we were unable to recover it. 00:35:53.705 [2024-10-13 01:46:39.019374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.705 [2024-10-13 01:46:39.019400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.705 qpair failed and we were unable to recover it. 00:35:53.705 [2024-10-13 01:46:39.019521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.705 [2024-10-13 01:46:39.019548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.705 qpair failed and we were unable to recover it. 00:35:53.705 [2024-10-13 01:46:39.019668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.705 [2024-10-13 01:46:39.019695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.705 qpair failed and we were unable to recover it. 
00:35:53.705 [2024-10-13 01:46:39.019783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.705 [2024-10-13 01:46:39.019808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.705 qpair failed and we were unable to recover it. 00:35:53.705 [2024-10-13 01:46:39.019907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.705 [2024-10-13 01:46:39.019934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.705 qpair failed and we were unable to recover it. 00:35:53.705 [2024-10-13 01:46:39.020037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.705 [2024-10-13 01:46:39.020064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.705 qpair failed and we were unable to recover it. 00:35:53.705 [2024-10-13 01:46:39.020146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.705 [2024-10-13 01:46:39.020173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.705 qpair failed and we were unable to recover it. 00:35:53.705 [2024-10-13 01:46:39.020285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.705 [2024-10-13 01:46:39.020310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.705 qpair failed and we were unable to recover it. 00:35:53.705 [2024-10-13 01:46:39.020421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.705 [2024-10-13 01:46:39.020447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.705 qpair failed and we were unable to recover it. 00:35:53.705 [2024-10-13 01:46:39.020553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.705 [2024-10-13 01:46:39.020591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.705 qpair failed and we were unable to recover it. 00:35:53.705 [2024-10-13 01:46:39.020699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.705 [2024-10-13 01:46:39.020738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.705 qpair failed and we were unable to recover it. 00:35:53.705 [2024-10-13 01:46:39.020888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.705 [2024-10-13 01:46:39.020916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.705 qpair failed and we were unable to recover it. 00:35:53.705 [2024-10-13 01:46:39.021004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.705 [2024-10-13 01:46:39.021031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.705 qpair failed and we were unable to recover it. 
00:35:53.705 [2024-10-13 01:46:39.021123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.705 [2024-10-13 01:46:39.021149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.705 qpair failed and we were unable to recover it. 00:35:53.705 [2024-10-13 01:46:39.021276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.705 [2024-10-13 01:46:39.021316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.705 qpair failed and we were unable to recover it. 00:35:53.705 [2024-10-13 01:46:39.021456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.705 [2024-10-13 01:46:39.021491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.705 qpair failed and we were unable to recover it. 00:35:53.705 [2024-10-13 01:46:39.021608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.705 [2024-10-13 01:46:39.021636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.705 qpair failed and we were unable to recover it. 00:35:53.705 [2024-10-13 01:46:39.021747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.705 [2024-10-13 01:46:39.021785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.705 qpair failed and we were unable to recover it. 00:35:53.705 [2024-10-13 01:46:39.021894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.705 [2024-10-13 01:46:39.021920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.705 qpair failed and we were unable to recover it. 00:35:53.705 [2024-10-13 01:46:39.022017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.705 [2024-10-13 01:46:39.022043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.705 qpair failed and we were unable to recover it. 00:35:53.705 [2024-10-13 01:46:39.022130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.705 [2024-10-13 01:46:39.022157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.705 qpair failed and we were unable to recover it. 00:35:53.705 [2024-10-13 01:46:39.022271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.705 [2024-10-13 01:46:39.022297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.705 qpair failed and we were unable to recover it. 00:35:53.705 [2024-10-13 01:46:39.022380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.705 [2024-10-13 01:46:39.022406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.705 qpair failed and we were unable to recover it. 
00:35:53.705 [2024-10-13 01:46:39.022504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.705 [2024-10-13 01:46:39.022551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.705 qpair failed and we were unable to recover it. 00:35:53.705 [2024-10-13 01:46:39.022666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.705 [2024-10-13 01:46:39.022698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.705 qpair failed and we were unable to recover it. 00:35:53.705 [2024-10-13 01:46:39.022790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.705 [2024-10-13 01:46:39.022816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.705 qpair failed and we were unable to recover it. 00:35:53.705 [2024-10-13 01:46:39.022958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.705 [2024-10-13 01:46:39.022985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.705 qpair failed and we were unable to recover it. 00:35:53.705 [2024-10-13 01:46:39.023080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.705 [2024-10-13 01:46:39.023107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.705 qpair failed and we were unable to recover it. 00:35:53.705 [2024-10-13 01:46:39.023220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.705 [2024-10-13 01:46:39.023247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.705 qpair failed and we were unable to recover it. 00:35:53.705 [2024-10-13 01:46:39.023345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.705 [2024-10-13 01:46:39.023372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.705 qpair failed and we were unable to recover it. 00:35:53.705 [2024-10-13 01:46:39.023507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.705 [2024-10-13 01:46:39.023536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.705 qpair failed and we were unable to recover it. 00:35:53.705 [2024-10-13 01:46:39.023654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.705 [2024-10-13 01:46:39.023680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.705 qpair failed and we were unable to recover it. 00:35:53.705 [2024-10-13 01:46:39.023771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.705 [2024-10-13 01:46:39.023797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.705 qpair failed and we were unable to recover it. 
00:35:53.705 [2024-10-13 01:46:39.023909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.705 [2024-10-13 01:46:39.023935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.705 qpair failed and we were unable to recover it. 00:35:53.705 [2024-10-13 01:46:39.024030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.705 [2024-10-13 01:46:39.024056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.705 qpair failed and we were unable to recover it. 00:35:53.705 [2024-10-13 01:46:39.024175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.705 [2024-10-13 01:46:39.024202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.705 qpair failed and we were unable to recover it. 00:35:53.705 [2024-10-13 01:46:39.024286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.705 [2024-10-13 01:46:39.024312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.705 qpair failed and we were unable to recover it. 00:35:53.706 [2024-10-13 01:46:39.024452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.706 [2024-10-13 01:46:39.024490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.706 qpair failed and we were unable to recover it. 00:35:53.706 [2024-10-13 01:46:39.024585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.706 [2024-10-13 01:46:39.024612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.706 qpair failed and we were unable to recover it. 00:35:53.706 [2024-10-13 01:46:39.024730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.706 [2024-10-13 01:46:39.024758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.706 qpair failed and we were unable to recover it. 00:35:53.706 [2024-10-13 01:46:39.024866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.706 [2024-10-13 01:46:39.024892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.706 qpair failed and we were unable to recover it. 00:35:53.706 [2024-10-13 01:46:39.025017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.706 [2024-10-13 01:46:39.025045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.706 qpair failed and we were unable to recover it. 00:35:53.706 [2024-10-13 01:46:39.025176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.706 [2024-10-13 01:46:39.025210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.706 qpair failed and we were unable to recover it. 
00:35:53.706 [2024-10-13 01:46:39.025312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.706 [2024-10-13 01:46:39.025340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.706 qpair failed and we were unable to recover it. 00:35:53.706 [2024-10-13 01:46:39.025432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.706 [2024-10-13 01:46:39.025459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.706 qpair failed and we were unable to recover it. 00:35:53.706 [2024-10-13 01:46:39.025615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.706 [2024-10-13 01:46:39.025645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.706 qpair failed and we were unable to recover it. 00:35:53.706 [2024-10-13 01:46:39.025751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.706 [2024-10-13 01:46:39.025790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.706 qpair failed and we were unable to recover it. 00:35:53.706 [2024-10-13 01:46:39.025888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.706 [2024-10-13 01:46:39.025915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.706 qpair failed and we were unable to recover it. 00:35:53.706 [2024-10-13 01:46:39.026006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.706 [2024-10-13 01:46:39.026032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.706 qpair failed and we were unable to recover it. 00:35:53.706 [2024-10-13 01:46:39.026117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.706 [2024-10-13 01:46:39.026155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.706 qpair failed and we were unable to recover it. 00:35:53.706 [2024-10-13 01:46:39.026270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.706 [2024-10-13 01:46:39.026297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.706 qpair failed and we were unable to recover it. 00:35:53.706 [2024-10-13 01:46:39.026384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.706 [2024-10-13 01:46:39.026411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.706 qpair failed and we were unable to recover it. 00:35:53.706 [2024-10-13 01:46:39.026516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.706 [2024-10-13 01:46:39.026555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.706 qpair failed and we were unable to recover it. 
00:35:53.706 [2024-10-13 01:46:39.026643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.706 [2024-10-13 01:46:39.026670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.706 qpair failed and we were unable to recover it. 00:35:53.706 [2024-10-13 01:46:39.026793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.706 [2024-10-13 01:46:39.026821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.706 qpair failed and we were unable to recover it. 00:35:53.706 [2024-10-13 01:46:39.026911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.706 [2024-10-13 01:46:39.026937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.706 qpair failed and we were unable to recover it. 00:35:53.706 [2024-10-13 01:46:39.027022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.706 [2024-10-13 01:46:39.027048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.706 qpair failed and we were unable to recover it. 00:35:53.706 [2024-10-13 01:46:39.027166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.706 [2024-10-13 01:46:39.027192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.706 qpair failed and we were unable to recover it. 00:35:53.706 [2024-10-13 01:46:39.027307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.706 [2024-10-13 01:46:39.027335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.706 qpair failed and we were unable to recover it. 00:35:53.706 [2024-10-13 01:46:39.027477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.706 [2024-10-13 01:46:39.027516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.706 qpair failed and we were unable to recover it. 00:35:53.706 [2024-10-13 01:46:39.027723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.706 [2024-10-13 01:46:39.027763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.706 qpair failed and we were unable to recover it. 00:35:53.706 [2024-10-13 01:46:39.027847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.706 [2024-10-13 01:46:39.027881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.706 qpair failed and we were unable to recover it. 00:35:53.706 [2024-10-13 01:46:39.027970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.706 [2024-10-13 01:46:39.027995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.706 qpair failed and we were unable to recover it. 
00:35:53.706 [2024-10-13 01:46:39.028105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.706 [2024-10-13 01:46:39.028130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.706 qpair failed and we were unable to recover it. 00:35:53.706 [2024-10-13 01:46:39.028355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.706 [2024-10-13 01:46:39.028386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.706 qpair failed and we were unable to recover it. 00:35:53.706 [2024-10-13 01:46:39.028498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.706 [2024-10-13 01:46:39.028538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.706 qpair failed and we were unable to recover it. 00:35:53.706 [2024-10-13 01:46:39.028636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.706 [2024-10-13 01:46:39.028664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.706 qpair failed and we were unable to recover it. 00:35:53.706 [2024-10-13 01:46:39.028780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.706 [2024-10-13 01:46:39.028807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.706 qpair failed and we were unable to recover it. 00:35:53.706 [2024-10-13 01:46:39.028890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.706 [2024-10-13 01:46:39.028935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.706 qpair failed and we were unable to recover it. 00:35:53.706 [2024-10-13 01:46:39.029042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.706 [2024-10-13 01:46:39.029091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.706 qpair failed and we were unable to recover it. 00:35:53.706 [2024-10-13 01:46:39.029209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.706 [2024-10-13 01:46:39.029237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.706 qpair failed and we were unable to recover it. 00:35:53.706 [2024-10-13 01:46:39.029381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.706 [2024-10-13 01:46:39.029408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.706 qpair failed and we were unable to recover it. 00:35:53.706 [2024-10-13 01:46:39.029530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.706 [2024-10-13 01:46:39.029557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.706 qpair failed and we were unable to recover it. 
00:35:53.706 [2024-10-13 01:46:39.029651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.706 [2024-10-13 01:46:39.029679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.706 qpair failed and we were unable to recover it. 00:35:53.706 [2024-10-13 01:46:39.029777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.706 [2024-10-13 01:46:39.029804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.706 qpair failed and we were unable to recover it. 00:35:53.706 [2024-10-13 01:46:39.029899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.706 [2024-10-13 01:46:39.029927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.706 qpair failed and we were unable to recover it. 00:35:53.706 [2024-10-13 01:46:39.030024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.706 [2024-10-13 01:46:39.030051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.706 qpair failed and we were unable to recover it. 00:35:53.706 [2024-10-13 01:46:39.030134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.706 [2024-10-13 01:46:39.030160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.706 qpair failed and we were unable to recover it. 00:35:53.706 [2024-10-13 01:46:39.030256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.707 [2024-10-13 01:46:39.030283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.707 qpair failed and we were unable to recover it. 00:35:53.707 [2024-10-13 01:46:39.030435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.707 [2024-10-13 01:46:39.030480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.707 qpair failed and we were unable to recover it. 00:35:53.707 [2024-10-13 01:46:39.030603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.707 [2024-10-13 01:46:39.030632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.707 qpair failed and we were unable to recover it. 00:35:53.707 [2024-10-13 01:46:39.030747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.707 [2024-10-13 01:46:39.030773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.707 qpair failed and we were unable to recover it. 00:35:53.707 [2024-10-13 01:46:39.030878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.707 [2024-10-13 01:46:39.030907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.707 qpair failed and we were unable to recover it. 
00:35:53.707 [2024-10-13 01:46:39.031002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.707 [2024-10-13 01:46:39.031028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.707 qpair failed and we were unable to recover it. 00:35:53.707 [2024-10-13 01:46:39.031159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.707 [2024-10-13 01:46:39.031203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.707 qpair failed and we were unable to recover it. 00:35:53.707 [2024-10-13 01:46:39.031290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.707 [2024-10-13 01:46:39.031317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.707 qpair failed and we were unable to recover it. 00:35:53.707 [2024-10-13 01:46:39.031436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.707 [2024-10-13 01:46:39.031467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.707 qpair failed and we were unable to recover it. 00:35:53.707 [2024-10-13 01:46:39.031561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.707 [2024-10-13 01:46:39.031588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.707 qpair failed and we were unable to recover it. 00:35:53.707 [2024-10-13 01:46:39.031701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.707 [2024-10-13 01:46:39.031727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.707 qpair failed and we were unable to recover it. 00:35:53.707 [2024-10-13 01:46:39.031843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.707 [2024-10-13 01:46:39.031871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.707 qpair failed and we were unable to recover it. 00:35:53.707 [2024-10-13 01:46:39.031975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.707 [2024-10-13 01:46:39.032007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.707 qpair failed and we were unable to recover it. 00:35:53.707 [2024-10-13 01:46:39.032152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.707 [2024-10-13 01:46:39.032182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.707 qpair failed and we were unable to recover it. 00:35:53.707 [2024-10-13 01:46:39.032298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.707 [2024-10-13 01:46:39.032326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.707 qpair failed and we were unable to recover it. 
00:35:53.707 [2024-10-13 01:46:39.032412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.707 [2024-10-13 01:46:39.032439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.707 qpair failed and we were unable to recover it. 00:35:53.707 [2024-10-13 01:46:39.032572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.707 [2024-10-13 01:46:39.032600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.707 qpair failed and we were unable to recover it. 00:35:53.707 [2024-10-13 01:46:39.032712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.707 [2024-10-13 01:46:39.032737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.707 qpair failed and we were unable to recover it. 00:35:53.707 [2024-10-13 01:46:39.032839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.707 [2024-10-13 01:46:39.032898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.707 qpair failed and we were unable to recover it. 00:35:53.707 [2024-10-13 01:46:39.033033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.707 [2024-10-13 01:46:39.033064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.707 qpair failed and we were unable to recover it. 00:35:53.707 [2024-10-13 01:46:39.033151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.707 [2024-10-13 01:46:39.033179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.707 qpair failed and we were unable to recover it. 00:35:53.707 [2024-10-13 01:46:39.033269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.707 [2024-10-13 01:46:39.033297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.707 qpair failed and we were unable to recover it. 00:35:53.707 [2024-10-13 01:46:39.033388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.707 [2024-10-13 01:46:39.033417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.707 qpair failed and we were unable to recover it. 00:35:53.707 [2024-10-13 01:46:39.033535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.707 [2024-10-13 01:46:39.033563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.707 qpair failed and we were unable to recover it. 00:35:53.707 [2024-10-13 01:46:39.033647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.707 [2024-10-13 01:46:39.033673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.707 qpair failed and we were unable to recover it. 
00:35:53.707 [2024-10-13 01:46:39.033800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.707 [2024-10-13 01:46:39.033828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.707 qpair failed and we were unable to recover it. 00:35:53.707 [2024-10-13 01:46:39.034008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.707 [2024-10-13 01:46:39.034058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.707 qpair failed and we were unable to recover it. 00:35:53.707 [2024-10-13 01:46:39.034192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.707 [2024-10-13 01:46:39.034220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.707 qpair failed and we were unable to recover it. 00:35:53.707 [2024-10-13 01:46:39.034347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.707 [2024-10-13 01:46:39.034373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.707 qpair failed and we were unable to recover it. 00:35:53.707 [2024-10-13 01:46:39.034466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.707 [2024-10-13 01:46:39.034500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.707 qpair failed and we were unable to recover it. 00:35:53.707 [2024-10-13 01:46:39.034586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.707 [2024-10-13 01:46:39.034612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.707 qpair failed and we were unable to recover it. 00:35:53.707 [2024-10-13 01:46:39.034728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.707 [2024-10-13 01:46:39.034755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.707 qpair failed and we were unable to recover it. 00:35:53.707 [2024-10-13 01:46:39.034864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.707 [2024-10-13 01:46:39.034889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.707 qpair failed and we were unable to recover it. 00:35:53.707 [2024-10-13 01:46:39.035004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.707 [2024-10-13 01:46:39.035034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.707 qpair failed and we were unable to recover it. 00:35:53.707 [2024-10-13 01:46:39.035801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.707 [2024-10-13 01:46:39.035832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.707 qpair failed and we were unable to recover it. 
00:35:53.707 [2024-10-13 01:46:39.035983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.707 [2024-10-13 01:46:39.036010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.707 qpair failed and we were unable to recover it. 00:35:53.707 [2024-10-13 01:46:39.036100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.707 [2024-10-13 01:46:39.036126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.707 qpair failed and we were unable to recover it. 00:35:53.707 [2024-10-13 01:46:39.036214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.707 [2024-10-13 01:46:39.036240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.707 qpair failed and we were unable to recover it. 00:35:53.707 [2024-10-13 01:46:39.036329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.707 [2024-10-13 01:46:39.036355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.707 qpair failed and we were unable to recover it. 00:35:53.707 [2024-10-13 01:46:39.036446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.707 [2024-10-13 01:46:39.036477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.707 qpair failed and we were unable to recover it. 00:35:53.707 [2024-10-13 01:46:39.036597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.707 [2024-10-13 01:46:39.036623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.707 qpair failed and we were unable to recover it. 00:35:53.707 [2024-10-13 01:46:39.036709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.707 [2024-10-13 01:46:39.036735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.707 qpair failed and we were unable to recover it. 00:35:53.707 [2024-10-13 01:46:39.036859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.707 [2024-10-13 01:46:39.036885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.707 qpair failed and we were unable to recover it. 00:35:53.707 [2024-10-13 01:46:39.036980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.707 [2024-10-13 01:46:39.037007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.707 qpair failed and we were unable to recover it. 00:35:53.707 [2024-10-13 01:46:39.037103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.707 [2024-10-13 01:46:39.037130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.707 qpair failed and we were unable to recover it. 
00:35:53.707 [2024-10-13 01:46:39.037266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.707 [2024-10-13 01:46:39.037295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.707 qpair failed and we were unable to recover it. 00:35:53.708 [2024-10-13 01:46:39.037437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.708 [2024-10-13 01:46:39.037477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.708 qpair failed and we were unable to recover it. 00:35:53.708 [2024-10-13 01:46:39.037573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.708 [2024-10-13 01:46:39.037598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.708 qpair failed and we were unable to recover it. 00:35:53.708 [2024-10-13 01:46:39.037710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.708 [2024-10-13 01:46:39.037736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.708 qpair failed and we were unable to recover it. 00:35:53.708 [2024-10-13 01:46:39.037853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.708 [2024-10-13 01:46:39.037879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.708 qpair failed and we were unable to recover it. 00:35:53.708 [2024-10-13 01:46:39.037986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.708 [2024-10-13 01:46:39.038012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.708 qpair failed and we were unable to recover it. 00:35:53.708 [2024-10-13 01:46:39.038099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.708 [2024-10-13 01:46:39.038125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.708 qpair failed and we were unable to recover it. 00:35:53.708 [2024-10-13 01:46:39.038238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.708 [2024-10-13 01:46:39.038264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.708 qpair failed and we were unable to recover it. 00:35:53.708 [2024-10-13 01:46:39.038374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.708 [2024-10-13 01:46:39.038404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.708 qpair failed and we were unable to recover it. 00:35:53.708 [2024-10-13 01:46:39.038504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.708 [2024-10-13 01:46:39.038531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.708 qpair failed and we were unable to recover it. 
00:35:53.708 [2024-10-13 01:46:39.038609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.708 [2024-10-13 01:46:39.038634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.708 qpair failed and we were unable to recover it. 00:35:53.708 [2024-10-13 01:46:39.038750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.708 [2024-10-13 01:46:39.038776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.708 qpair failed and we were unable to recover it. 00:35:53.708 [2024-10-13 01:46:39.038866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.708 [2024-10-13 01:46:39.038893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.708 qpair failed and we were unable to recover it. 00:35:53.708 [2024-10-13 01:46:39.038971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.708 [2024-10-13 01:46:39.038998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.708 qpair failed and we were unable to recover it. 00:35:53.708 [2024-10-13 01:46:39.039092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.708 [2024-10-13 01:46:39.039118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.708 qpair failed and we were unable to recover it. 00:35:53.708 [2024-10-13 01:46:39.039226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.708 [2024-10-13 01:46:39.039252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.708 qpair failed and we were unable to recover it. 00:35:53.708 [2024-10-13 01:46:39.039365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.708 [2024-10-13 01:46:39.039391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.708 qpair failed and we were unable to recover it. 00:35:53.708 [2024-10-13 01:46:39.039497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.708 [2024-10-13 01:46:39.039524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.708 qpair failed and we were unable to recover it. 00:35:53.708 [2024-10-13 01:46:39.039646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.708 [2024-10-13 01:46:39.039672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.708 qpair failed and we were unable to recover it. 00:35:53.708 [2024-10-13 01:46:39.039785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.708 [2024-10-13 01:46:39.039812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.708 qpair failed and we were unable to recover it. 
00:35:53.708 [2024-10-13 01:46:39.039936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.708 [2024-10-13 01:46:39.039963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.708 qpair failed and we were unable to recover it. 00:35:53.708 [2024-10-13 01:46:39.040047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.708 [2024-10-13 01:46:39.040073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.708 qpair failed and we were unable to recover it. 00:35:53.708 [2024-10-13 01:46:39.040203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.708 [2024-10-13 01:46:39.040242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.708 qpair failed and we were unable to recover it. 00:35:53.708 [2024-10-13 01:46:39.040373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.708 [2024-10-13 01:46:39.040400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.708 qpair failed and we were unable to recover it. 00:35:53.708 [2024-10-13 01:46:39.040507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.708 [2024-10-13 01:46:39.040534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.708 qpair failed and we were unable to recover it. 00:35:53.708 [2024-10-13 01:46:39.040624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.708 [2024-10-13 01:46:39.040650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.708 qpair failed and we were unable to recover it. 00:35:53.708 [2024-10-13 01:46:39.040757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.708 [2024-10-13 01:46:39.040784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.708 qpair failed and we were unable to recover it. 00:35:53.708 [2024-10-13 01:46:39.040870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.708 [2024-10-13 01:46:39.040898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.708 qpair failed and we were unable to recover it. 00:35:53.708 [2024-10-13 01:46:39.040983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.708 [2024-10-13 01:46:39.041010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.708 qpair failed and we were unable to recover it. 00:35:53.708 [2024-10-13 01:46:39.041137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.708 [2024-10-13 01:46:39.041174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.708 qpair failed and we were unable to recover it. 
00:35:53.708 [2024-10-13 01:46:39.041302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.708 [2024-10-13 01:46:39.041330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.708 qpair failed and we were unable to recover it. 00:35:53.708 [2024-10-13 01:46:39.041447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.708 [2024-10-13 01:46:39.041482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.708 qpair failed and we were unable to recover it. 00:35:53.708 [2024-10-13 01:46:39.041578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.708 [2024-10-13 01:46:39.041604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.708 qpair failed and we were unable to recover it. 00:35:53.708 [2024-10-13 01:46:39.041723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.708 [2024-10-13 01:46:39.041749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.708 qpair failed and we were unable to recover it. 00:35:53.708 [2024-10-13 01:46:39.041839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.708 [2024-10-13 01:46:39.041865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.708 qpair failed and we were unable to recover it. 00:35:53.708 [2024-10-13 01:46:39.041955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.708 [2024-10-13 01:46:39.041982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.708 qpair failed and we were unable to recover it. 00:35:53.708 [2024-10-13 01:46:39.042081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.708 [2024-10-13 01:46:39.042106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.708 qpair failed and we were unable to recover it. 00:35:53.708 [2024-10-13 01:46:39.042209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.708 [2024-10-13 01:46:39.042236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.708 qpair failed and we were unable to recover it. 00:35:53.708 [2024-10-13 01:46:39.042386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.708 [2024-10-13 01:46:39.042412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.709 qpair failed and we were unable to recover it. 00:35:53.709 [2024-10-13 01:46:39.042509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.709 [2024-10-13 01:46:39.042534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.709 qpair failed and we were unable to recover it. 
00:35:53.709 [2024-10-13 01:46:39.042653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.709 [2024-10-13 01:46:39.042679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.709 qpair failed and we were unable to recover it. 00:35:53.709 [2024-10-13 01:46:39.042763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.709 [2024-10-13 01:46:39.042788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.709 qpair failed and we were unable to recover it. 00:35:53.709 [2024-10-13 01:46:39.042930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.709 [2024-10-13 01:46:39.042955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.709 qpair failed and we were unable to recover it. 00:35:53.709 [2024-10-13 01:46:39.043062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.709 [2024-10-13 01:46:39.043088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.709 qpair failed and we were unable to recover it. 00:35:53.709 [2024-10-13 01:46:39.043287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.709 [2024-10-13 01:46:39.043314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.709 qpair failed and we were unable to recover it. 00:35:53.709 [2024-10-13 01:46:39.043399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.709 [2024-10-13 01:46:39.043425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.709 qpair failed and we were unable to recover it. 00:35:53.709 [2024-10-13 01:46:39.043546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.709 [2024-10-13 01:46:39.043572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.709 qpair failed and we were unable to recover it. 00:35:53.709 [2024-10-13 01:46:39.043652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.709 [2024-10-13 01:46:39.043677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.709 qpair failed and we were unable to recover it. 00:35:53.709 [2024-10-13 01:46:39.043791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.709 [2024-10-13 01:46:39.043823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.709 qpair failed and we were unable to recover it. 00:35:53.709 [2024-10-13 01:46:39.043918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.709 [2024-10-13 01:46:39.043944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.709 qpair failed and we were unable to recover it. 
00:35:53.709 [2024-10-13 01:46:39.044058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.709 [2024-10-13 01:46:39.044083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.709 qpair failed and we were unable to recover it. 00:35:53.709 [2024-10-13 01:46:39.044210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.709 [2024-10-13 01:46:39.044235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.709 qpair failed and we were unable to recover it. 00:35:53.709 [2024-10-13 01:46:39.044324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.709 [2024-10-13 01:46:39.044349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.709 qpair failed and we were unable to recover it. 00:35:53.709 [2024-10-13 01:46:39.044435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.709 [2024-10-13 01:46:39.044462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.709 qpair failed and we were unable to recover it. 00:35:53.709 [2024-10-13 01:46:39.044563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.709 [2024-10-13 01:46:39.044591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.709 qpair failed and we were unable to recover it. 00:35:53.709 [2024-10-13 01:46:39.044687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.709 [2024-10-13 01:46:39.044712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.709 qpair failed and we were unable to recover it. 00:35:53.709 [2024-10-13 01:46:39.044812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.709 [2024-10-13 01:46:39.044837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.709 qpair failed and we were unable to recover it. 00:35:53.709 [2024-10-13 01:46:39.044928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.709 [2024-10-13 01:46:39.044953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.709 qpair failed and we were unable to recover it. 00:35:53.709 [2024-10-13 01:46:39.045048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.709 [2024-10-13 01:46:39.045072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.709 qpair failed and we were unable to recover it. 00:35:53.709 [2024-10-13 01:46:39.045158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.709 [2024-10-13 01:46:39.045185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.709 qpair failed and we were unable to recover it. 
00:35:53.709 [2024-10-13 01:46:39.045321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.709 [2024-10-13 01:46:39.045354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.709 qpair failed and we were unable to recover it. 00:35:53.709 [2024-10-13 01:46:39.045477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.709 [2024-10-13 01:46:39.045517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.709 qpair failed and we were unable to recover it. 00:35:53.709 [2024-10-13 01:46:39.045630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.709 [2024-10-13 01:46:39.045669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.709 qpair failed and we were unable to recover it. 00:35:53.709 [2024-10-13 01:46:39.045774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.709 [2024-10-13 01:46:39.045800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.709 qpair failed and we were unable to recover it. 00:35:53.709 [2024-10-13 01:46:39.045924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.709 [2024-10-13 01:46:39.045950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.709 qpair failed and we were unable to recover it. 00:35:53.709 [2024-10-13 01:46:39.046075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.709 [2024-10-13 01:46:39.046101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.709 qpair failed and we were unable to recover it. 00:35:53.709 [2024-10-13 01:46:39.046214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.709 [2024-10-13 01:46:39.046241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.709 qpair failed and we were unable to recover it. 00:35:53.709 [2024-10-13 01:46:39.046329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.709 [2024-10-13 01:46:39.046355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.709 qpair failed and we were unable to recover it. 00:35:53.709 [2024-10-13 01:46:39.046487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.709 [2024-10-13 01:46:39.046517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.709 qpair failed and we were unable to recover it. 00:35:53.709 [2024-10-13 01:46:39.046641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.709 [2024-10-13 01:46:39.046667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.709 qpair failed and we were unable to recover it. 
00:35:53.709 [2024-10-13 01:46:39.046755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.709 [2024-10-13 01:46:39.046782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.709 qpair failed and we were unable to recover it. 00:35:53.709 [2024-10-13 01:46:39.046879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.709 [2024-10-13 01:46:39.046906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.709 qpair failed and we were unable to recover it. 00:35:53.709 [2024-10-13 01:46:39.046987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.709 [2024-10-13 01:46:39.047014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.709 qpair failed and we were unable to recover it. 00:35:53.709 [2024-10-13 01:46:39.047097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.709 [2024-10-13 01:46:39.047123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.709 qpair failed and we were unable to recover it. 00:35:53.709 [2024-10-13 01:46:39.047211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.709 [2024-10-13 01:46:39.047238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.709 qpair failed and we were unable to recover it. 00:35:53.709 [2024-10-13 01:46:39.047362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.709 [2024-10-13 01:46:39.047395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.709 qpair failed and we were unable to recover it. 00:35:53.709 [2024-10-13 01:46:39.047483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.709 [2024-10-13 01:46:39.047510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.709 qpair failed and we were unable to recover it. 00:35:53.709 [2024-10-13 01:46:39.047597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.709 [2024-10-13 01:46:39.047623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.709 qpair failed and we were unable to recover it. 00:35:53.709 [2024-10-13 01:46:39.047716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.710 [2024-10-13 01:46:39.047742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.710 qpair failed and we were unable to recover it. 00:35:53.710 [2024-10-13 01:46:39.047863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.710 [2024-10-13 01:46:39.047889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.710 qpair failed and we were unable to recover it. 
00:35:53.710 [2024-10-13 01:46:39.047969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.710 [2024-10-13 01:46:39.047995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.710 qpair failed and we were unable to recover it. 00:35:53.710 [2024-10-13 01:46:39.048076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.710 [2024-10-13 01:46:39.048102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.710 qpair failed and we were unable to recover it. 00:35:53.710 [2024-10-13 01:46:39.048191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.710 [2024-10-13 01:46:39.048217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.710 qpair failed and we were unable to recover it. 00:35:53.710 [2024-10-13 01:46:39.048331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.710 [2024-10-13 01:46:39.048357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.710 qpair failed and we were unable to recover it. 00:35:53.710 [2024-10-13 01:46:39.048463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.710 [2024-10-13 01:46:39.048513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.710 qpair failed and we were unable to recover it. 00:35:53.710 [2024-10-13 01:46:39.048609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.710 [2024-10-13 01:46:39.048637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.710 qpair failed and we were unable to recover it. 00:35:53.710 [2024-10-13 01:46:39.048746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.710 [2024-10-13 01:46:39.048771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.710 qpair failed and we were unable to recover it. 00:35:53.710 [2024-10-13 01:46:39.048873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.710 [2024-10-13 01:46:39.048897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.710 qpair failed and we were unable to recover it. 00:35:53.710 [2024-10-13 01:46:39.049012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.710 [2024-10-13 01:46:39.049037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.710 qpair failed and we were unable to recover it. 00:35:53.710 [2024-10-13 01:46:39.049164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.710 [2024-10-13 01:46:39.049192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.710 qpair failed and we were unable to recover it. 
00:35:53.710 [2024-10-13 01:46:39.049280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.710 [2024-10-13 01:46:39.049306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.710 qpair failed and we were unable to recover it. 00:35:53.710 [2024-10-13 01:46:39.049396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.710 [2024-10-13 01:46:39.049422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.710 qpair failed and we were unable to recover it. 00:35:53.710 [2024-10-13 01:46:39.049517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.710 [2024-10-13 01:46:39.049545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.710 qpair failed and we were unable to recover it. 00:35:53.710 [2024-10-13 01:46:39.049633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.710 [2024-10-13 01:46:39.049658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.710 qpair failed and we were unable to recover it. 00:35:53.710 [2024-10-13 01:46:39.049769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.710 [2024-10-13 01:46:39.049796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.710 qpair failed and we were unable to recover it. 00:35:53.710 [2024-10-13 01:46:39.049881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.710 [2024-10-13 01:46:39.049907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.710 qpair failed and we were unable to recover it. 00:35:53.710 [2024-10-13 01:46:39.050044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.710 [2024-10-13 01:46:39.050072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.710 qpair failed and we were unable to recover it. 00:35:53.710 [2024-10-13 01:46:39.050152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.710 [2024-10-13 01:46:39.050177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.710 qpair failed and we were unable to recover it. 00:35:53.710 [2024-10-13 01:46:39.050260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.710 [2024-10-13 01:46:39.050287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.710 qpair failed and we were unable to recover it. 00:35:53.710 [2024-10-13 01:46:39.050399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.710 [2024-10-13 01:46:39.050425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.710 qpair failed and we were unable to recover it. 
00:35:53.710 [2024-10-13 01:46:39.050538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.710 [2024-10-13 01:46:39.050567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.710 qpair failed and we were unable to recover it. 00:35:53.710 [2024-10-13 01:46:39.050779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.710 [2024-10-13 01:46:39.050807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.710 qpair failed and we were unable to recover it. 00:35:53.710 [2024-10-13 01:46:39.050923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.710 [2024-10-13 01:46:39.050950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.710 qpair failed and we were unable to recover it. 00:35:53.710 [2024-10-13 01:46:39.051079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.710 [2024-10-13 01:46:39.051106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.710 qpair failed and we were unable to recover it. 00:35:53.710 [2024-10-13 01:46:39.051200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.710 [2024-10-13 01:46:39.051227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.710 qpair failed and we were unable to recover it. 00:35:53.710 [2024-10-13 01:46:39.051320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.710 [2024-10-13 01:46:39.051346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.710 qpair failed and we were unable to recover it. 00:35:53.710 [2024-10-13 01:46:39.051430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.710 [2024-10-13 01:46:39.051457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.710 qpair failed and we were unable to recover it. 00:35:53.710 [2024-10-13 01:46:39.051555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.710 [2024-10-13 01:46:39.051581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.710 qpair failed and we were unable to recover it. 00:35:53.710 [2024-10-13 01:46:39.051666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.710 [2024-10-13 01:46:39.051692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.710 qpair failed and we were unable to recover it. 00:35:53.710 [2024-10-13 01:46:39.051814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.710 [2024-10-13 01:46:39.051841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.710 qpair failed and we were unable to recover it. 
00:35:53.710 [2024-10-13 01:46:39.051969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.710 [2024-10-13 01:46:39.051996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.710 qpair failed and we were unable to recover it. 00:35:53.710 [2024-10-13 01:46:39.052109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.710 [2024-10-13 01:46:39.052153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.710 qpair failed and we were unable to recover it. 00:35:53.710 [2024-10-13 01:46:39.052296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.710 [2024-10-13 01:46:39.052325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.710 qpair failed and we were unable to recover it. 00:35:53.710 [2024-10-13 01:46:39.052433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.710 [2024-10-13 01:46:39.052463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.710 qpair failed and we were unable to recover it. 00:35:53.710 [2024-10-13 01:46:39.052583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.710 [2024-10-13 01:46:39.052610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.710 qpair failed and we were unable to recover it. 00:35:53.710 [2024-10-13 01:46:39.052695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.710 [2024-10-13 01:46:39.052727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.710 qpair failed and we were unable to recover it. 00:35:53.710 [2024-10-13 01:46:39.052821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.710 [2024-10-13 01:46:39.052848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.710 qpair failed and we were unable to recover it. 00:35:53.710 [2024-10-13 01:46:39.052934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.710 [2024-10-13 01:46:39.052961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.710 qpair failed and we were unable to recover it. 00:35:53.710 [2024-10-13 01:46:39.053116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.710 [2024-10-13 01:46:39.053143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.710 qpair failed and we were unable to recover it. 00:35:53.710 [2024-10-13 01:46:39.053235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.710 [2024-10-13 01:46:39.053262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.710 qpair failed and we were unable to recover it. 
00:35:53.710 [2024-10-13 01:46:39.053361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.710 [2024-10-13 01:46:39.053388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.710 qpair failed and we were unable to recover it. 00:35:53.710 [2024-10-13 01:46:39.053510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.710 [2024-10-13 01:46:39.053549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.710 qpair failed and we were unable to recover it. 00:35:53.710 [2024-10-13 01:46:39.053680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.710 [2024-10-13 01:46:39.053718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.710 qpair failed and we were unable to recover it. 00:35:53.710 [2024-10-13 01:46:39.053837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.710 [2024-10-13 01:46:39.053865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.711 qpair failed and we were unable to recover it. 00:35:53.711 [2024-10-13 01:46:39.053977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.711 [2024-10-13 01:46:39.054004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.711 qpair failed and we were unable to recover it. 00:35:53.711 [2024-10-13 01:46:39.054114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.711 [2024-10-13 01:46:39.054143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.711 qpair failed and we were unable to recover it. 00:35:53.711 [2024-10-13 01:46:39.054283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.711 [2024-10-13 01:46:39.054310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.711 qpair failed and we were unable to recover it. 00:35:53.711 [2024-10-13 01:46:39.054431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.711 [2024-10-13 01:46:39.054457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.711 qpair failed and we were unable to recover it. 00:35:53.711 [2024-10-13 01:46:39.054581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.711 [2024-10-13 01:46:39.054607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.711 qpair failed and we were unable to recover it. 00:35:53.711 [2024-10-13 01:46:39.054701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.711 [2024-10-13 01:46:39.054727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.711 qpair failed and we were unable to recover it. 
00:35:53.711 [2024-10-13 01:46:39.054849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.711 [2024-10-13 01:46:39.054875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.711 qpair failed and we were unable to recover it. 00:35:53.711 [2024-10-13 01:46:39.054970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.711 [2024-10-13 01:46:39.055001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.711 qpair failed and we were unable to recover it. 00:35:53.711 [2024-10-13 01:46:39.055097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.711 [2024-10-13 01:46:39.055134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.711 qpair failed and we were unable to recover it. 00:35:53.711 [2024-10-13 01:46:39.055233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.711 [2024-10-13 01:46:39.055260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.711 qpair failed and we were unable to recover it. 00:35:53.711 [2024-10-13 01:46:39.055394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.711 [2024-10-13 01:46:39.055419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.711 qpair failed and we were unable to recover it. 00:35:53.711 [2024-10-13 01:46:39.055514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.711 [2024-10-13 01:46:39.055541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.711 qpair failed and we were unable to recover it. 00:35:53.711 [2024-10-13 01:46:39.055660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.711 [2024-10-13 01:46:39.055686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.711 qpair failed and we were unable to recover it. 00:35:53.711 [2024-10-13 01:46:39.055790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.711 [2024-10-13 01:46:39.055816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.711 qpair failed and we were unable to recover it. 00:35:53.711 [2024-10-13 01:46:39.055903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.711 [2024-10-13 01:46:39.055929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.711 qpair failed and we were unable to recover it. 00:35:53.711 [2024-10-13 01:46:39.056065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.711 [2024-10-13 01:46:39.056091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.711 qpair failed and we were unable to recover it. 
00:35:53.711 [2024-10-13 01:46:39.056178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.711 [2024-10-13 01:46:39.056222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.711 qpair failed and we were unable to recover it. 00:35:53.711 [2024-10-13 01:46:39.056330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.711 [2024-10-13 01:46:39.056360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.711 qpair failed and we were unable to recover it. 00:35:53.711 [2024-10-13 01:46:39.056448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.711 [2024-10-13 01:46:39.056484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.711 qpair failed and we were unable to recover it. 00:35:53.711 [2024-10-13 01:46:39.056626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.711 [2024-10-13 01:46:39.056652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.711 qpair failed and we were unable to recover it. 00:35:53.711 [2024-10-13 01:46:39.056798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.711 [2024-10-13 01:46:39.056824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.711 qpair failed and we were unable to recover it. 00:35:53.711 [2024-10-13 01:46:39.056949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.711 [2024-10-13 01:46:39.056981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.711 qpair failed and we were unable to recover it. 00:35:53.711 [2024-10-13 01:46:39.057092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.711 [2024-10-13 01:46:39.057118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.711 qpair failed and we were unable to recover it. 00:35:53.711 [2024-10-13 01:46:39.057228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.711 [2024-10-13 01:46:39.057254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.711 qpair failed and we were unable to recover it. 00:35:53.711 [2024-10-13 01:46:39.057380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.711 [2024-10-13 01:46:39.057406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.711 qpair failed and we were unable to recover it. 00:35:53.711 [2024-10-13 01:46:39.057523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.711 [2024-10-13 01:46:39.057550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.711 qpair failed and we were unable to recover it. 
00:35:53.711 [2024-10-13 01:46:39.057637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.711 [2024-10-13 01:46:39.057664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.711 qpair failed and we were unable to recover it. 00:35:53.711 [2024-10-13 01:46:39.057746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.711 [2024-10-13 01:46:39.057773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.711 qpair failed and we were unable to recover it. 00:35:53.711 [2024-10-13 01:46:39.057866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.711 [2024-10-13 01:46:39.057893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.711 qpair failed and we were unable to recover it. 00:35:53.711 [2024-10-13 01:46:39.058005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.711 [2024-10-13 01:46:39.058033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.711 qpair failed and we were unable to recover it. 00:35:53.711 [2024-10-13 01:46:39.058157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.711 [2024-10-13 01:46:39.058185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.711 qpair failed and we were unable to recover it. 00:35:53.711 [2024-10-13 01:46:39.058322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.711 [2024-10-13 01:46:39.058349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.711 qpair failed and we were unable to recover it. 00:35:53.711 [2024-10-13 01:46:39.058488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.711 [2024-10-13 01:46:39.058525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.711 qpair failed and we were unable to recover it. 00:35:53.711 [2024-10-13 01:46:39.058638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.711 [2024-10-13 01:46:39.058667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.711 qpair failed and we were unable to recover it. 00:35:53.711 [2024-10-13 01:46:39.058754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.711 [2024-10-13 01:46:39.058781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.711 qpair failed and we were unable to recover it. 00:35:53.711 [2024-10-13 01:46:39.058886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.711 [2024-10-13 01:46:39.058917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.711 qpair failed and we were unable to recover it. 
00:35:53.711 [2024-10-13 01:46:39.059051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.711 [2024-10-13 01:46:39.059080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.711 qpair failed and we were unable to recover it. 00:35:53.711 [2024-10-13 01:46:39.059203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.711 [2024-10-13 01:46:39.059230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.711 qpair failed and we were unable to recover it. 00:35:53.711 [2024-10-13 01:46:39.059330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.711 [2024-10-13 01:46:39.059357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.711 qpair failed and we were unable to recover it. 00:35:53.711 [2024-10-13 01:46:39.059436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.711 [2024-10-13 01:46:39.059462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.711 qpair failed and we were unable to recover it. 00:35:53.711 [2024-10-13 01:46:39.059564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.711 [2024-10-13 01:46:39.059590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.711 qpair failed and we were unable to recover it. 00:35:53.711 [2024-10-13 01:46:39.059704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.711 [2024-10-13 01:46:39.059731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.711 qpair failed and we were unable to recover it. 00:35:53.712 [2024-10-13 01:46:39.059852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.712 [2024-10-13 01:46:39.059878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.712 qpair failed and we were unable to recover it. 00:35:53.712 [2024-10-13 01:46:39.059990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.712 [2024-10-13 01:46:39.060016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.712 qpair failed and we were unable to recover it. 00:35:53.712 [2024-10-13 01:46:39.060120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.712 [2024-10-13 01:46:39.060159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.712 qpair failed and we were unable to recover it. 00:35:53.712 [2024-10-13 01:46:39.060287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.712 [2024-10-13 01:46:39.060316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.712 qpair failed and we were unable to recover it. 
00:35:53.712 [2024-10-13 01:46:39.060477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.712 [2024-10-13 01:46:39.060525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.712 qpair failed and we were unable to recover it. 00:35:53.712 [2024-10-13 01:46:39.060617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.712 [2024-10-13 01:46:39.060643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.712 qpair failed and we were unable to recover it. 00:35:53.712 [2024-10-13 01:46:39.060734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.712 [2024-10-13 01:46:39.060761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.712 qpair failed and we were unable to recover it. 00:35:53.712 [2024-10-13 01:46:39.060838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.712 [2024-10-13 01:46:39.060865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.712 qpair failed and we were unable to recover it. 00:35:53.712 [2024-10-13 01:46:39.060958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.712 [2024-10-13 01:46:39.060989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.712 qpair failed and we were unable to recover it. 00:35:53.712 [2024-10-13 01:46:39.061071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.712 [2024-10-13 01:46:39.061097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.712 qpair failed and we were unable to recover it. 00:35:53.712 [2024-10-13 01:46:39.061187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.712 [2024-10-13 01:46:39.061213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.712 qpair failed and we were unable to recover it. 00:35:53.712 [2024-10-13 01:46:39.061327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.712 [2024-10-13 01:46:39.061353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.712 qpair failed and we were unable to recover it. 00:35:53.712 [2024-10-13 01:46:39.061477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.712 [2024-10-13 01:46:39.061504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.712 qpair failed and we were unable to recover it. 00:35:53.712 [2024-10-13 01:46:39.061615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.712 [2024-10-13 01:46:39.061641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.712 qpair failed and we were unable to recover it. 
00:35:53.712 [2024-10-13 01:46:39.061730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.712 [2024-10-13 01:46:39.061756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.712 qpair failed and we were unable to recover it. 00:35:53.712 [2024-10-13 01:46:39.061852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.712 [2024-10-13 01:46:39.061878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.712 qpair failed and we were unable to recover it. 00:35:53.712 [2024-10-13 01:46:39.061991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.712 [2024-10-13 01:46:39.062022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.712 qpair failed and we were unable to recover it. 00:35:53.712 [2024-10-13 01:46:39.062129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.712 [2024-10-13 01:46:39.062160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.712 qpair failed and we were unable to recover it. 00:35:53.712 [2024-10-13 01:46:39.062267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.712 [2024-10-13 01:46:39.062293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.712 qpair failed and we were unable to recover it. 00:35:53.712 [2024-10-13 01:46:39.062389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.712 [2024-10-13 01:46:39.062414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.712 qpair failed and we were unable to recover it. 00:35:53.712 [2024-10-13 01:46:39.062500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.712 [2024-10-13 01:46:39.062526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.712 qpair failed and we were unable to recover it. 00:35:53.712 [2024-10-13 01:46:39.062612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.712 [2024-10-13 01:46:39.062637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.712 qpair failed and we were unable to recover it. 00:35:53.712 [2024-10-13 01:46:39.062722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.712 [2024-10-13 01:46:39.062749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.712 qpair failed and we were unable to recover it. 00:35:53.712 [2024-10-13 01:46:39.062903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.712 [2024-10-13 01:46:39.062929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.712 qpair failed and we were unable to recover it. 
00:35:53.712 [2024-10-13 01:46:39.063070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.712 [2024-10-13 01:46:39.063099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.712 qpair failed and we were unable to recover it. 00:35:53.712 [2024-10-13 01:46:39.063207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.712 [2024-10-13 01:46:39.063236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.712 qpair failed and we were unable to recover it. 00:35:53.712 [2024-10-13 01:46:39.063337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.712 [2024-10-13 01:46:39.063365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.712 qpair failed and we were unable to recover it. 00:35:53.712 [2024-10-13 01:46:39.063507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.712 [2024-10-13 01:46:39.063534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.712 qpair failed and we were unable to recover it. 00:35:53.712 [2024-10-13 01:46:39.063648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.712 [2024-10-13 01:46:39.063674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.712 qpair failed and we were unable to recover it. 00:35:53.712 [2024-10-13 01:46:39.063765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.712 [2024-10-13 01:46:39.063791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.712 qpair failed and we were unable to recover it. 00:35:53.712 [2024-10-13 01:46:39.063909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.712 [2024-10-13 01:46:39.063938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.712 qpair failed and we were unable to recover it. 00:35:53.712 [2024-10-13 01:46:39.064081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.712 [2024-10-13 01:46:39.064110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.712 qpair failed and we were unable to recover it. 00:35:53.712 [2024-10-13 01:46:39.064211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.712 [2024-10-13 01:46:39.064242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.712 qpair failed and we were unable to recover it. 00:35:53.712 [2024-10-13 01:46:39.065064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.712 [2024-10-13 01:46:39.065094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.712 qpair failed and we were unable to recover it. 
00:35:53.712 [2024-10-13 01:46:39.065272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.712 [2024-10-13 01:46:39.065316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.712 qpair failed and we were unable to recover it. 00:35:53.712 [2024-10-13 01:46:39.065547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.712 [2024-10-13 01:46:39.065573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.712 qpair failed and we were unable to recover it. 00:35:53.712 [2024-10-13 01:46:39.065655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.712 [2024-10-13 01:46:39.065681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.712 qpair failed and we were unable to recover it. 00:35:53.712 [2024-10-13 01:46:39.065773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.712 [2024-10-13 01:46:39.065800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.712 qpair failed and we were unable to recover it. 00:35:53.712 [2024-10-13 01:46:39.065878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.712 [2024-10-13 01:46:39.065903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.712 qpair failed and we were unable to recover it. 00:35:53.712 [2024-10-13 01:46:39.066022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.712 [2024-10-13 01:46:39.066048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.713 qpair failed and we were unable to recover it. 00:35:53.713 [2024-10-13 01:46:39.066148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.713 [2024-10-13 01:46:39.066177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.713 qpair failed and we were unable to recover it. 00:35:53.713 [2024-10-13 01:46:39.066299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.713 [2024-10-13 01:46:39.066327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.713 qpair failed and we were unable to recover it. 00:35:53.713 [2024-10-13 01:46:39.066410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.713 [2024-10-13 01:46:39.066436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.713 qpair failed and we were unable to recover it. 00:35:53.713 [2024-10-13 01:46:39.066531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.713 [2024-10-13 01:46:39.066562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.713 qpair failed and we were unable to recover it. 
00:35:53.713 [2024-10-13 01:46:39.066656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.713 [2024-10-13 01:46:39.066681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.713 qpair failed and we were unable to recover it. 00:35:53.713 [2024-10-13 01:46:39.066768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.713 [2024-10-13 01:46:39.066793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.713 qpair failed and we were unable to recover it. 00:35:53.713 [2024-10-13 01:46:39.066894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.713 [2024-10-13 01:46:39.066933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.713 qpair failed and we were unable to recover it. 00:35:53.713 [2024-10-13 01:46:39.067028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.713 [2024-10-13 01:46:39.067055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.713 qpair failed and we were unable to recover it. 00:35:53.713 [2024-10-13 01:46:39.067186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.713 [2024-10-13 01:46:39.067215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.713 qpair failed and we were unable to recover it. 00:35:53.713 [2024-10-13 01:46:39.067347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.713 [2024-10-13 01:46:39.067375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.713 qpair failed and we were unable to recover it. 00:35:53.713 [2024-10-13 01:46:39.067485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.713 [2024-10-13 01:46:39.067528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.713 qpair failed and we were unable to recover it. 00:35:53.713 [2024-10-13 01:46:39.067618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.713 [2024-10-13 01:46:39.067644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.713 qpair failed and we were unable to recover it. 00:35:53.713 [2024-10-13 01:46:39.067734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.713 [2024-10-13 01:46:39.067762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.713 qpair failed and we were unable to recover it. 00:35:53.713 [2024-10-13 01:46:39.067889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.713 [2024-10-13 01:46:39.067915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.713 qpair failed and we were unable to recover it. 
00:35:53.713 [2024-10-13 01:46:39.068006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.713 [2024-10-13 01:46:39.068031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.713 qpair failed and we were unable to recover it. 00:35:53.713 [2024-10-13 01:46:39.068155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.713 [2024-10-13 01:46:39.068183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.713 qpair failed and we were unable to recover it. 00:35:53.713 [2024-10-13 01:46:39.068279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.713 [2024-10-13 01:46:39.068307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.713 qpair failed and we were unable to recover it. 00:35:53.713 [2024-10-13 01:46:39.068402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.713 [2024-10-13 01:46:39.068430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.713 qpair failed and we were unable to recover it. 00:35:53.713 [2024-10-13 01:46:39.068583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.713 [2024-10-13 01:46:39.068613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.713 qpair failed and we were unable to recover it. 00:35:53.713 [2024-10-13 01:46:39.068701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.713 [2024-10-13 01:46:39.068728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.713 qpair failed and we were unable to recover it. 00:35:53.713 [2024-10-13 01:46:39.068845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.713 [2024-10-13 01:46:39.068873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.713 qpair failed and we were unable to recover it. 00:35:53.713 [2024-10-13 01:46:39.068968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.713 [2024-10-13 01:46:39.068994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.713 qpair failed and we were unable to recover it. 00:35:53.713 [2024-10-13 01:46:39.069099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.713 [2024-10-13 01:46:39.069128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.713 qpair failed and we were unable to recover it. 00:35:53.713 [2024-10-13 01:46:39.069266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.713 [2024-10-13 01:46:39.069293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.713 qpair failed and we were unable to recover it. 
00:35:53.713 [2024-10-13 01:46:39.069387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.713 [2024-10-13 01:46:39.069414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.713 qpair failed and we were unable to recover it. 00:35:53.713 [2024-10-13 01:46:39.069517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.713 [2024-10-13 01:46:39.069544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.713 qpair failed and we were unable to recover it. 00:35:53.713 [2024-10-13 01:46:39.069657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.713 [2024-10-13 01:46:39.069683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.713 qpair failed and we were unable to recover it. 00:35:53.713 [2024-10-13 01:46:39.069767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.713 [2024-10-13 01:46:39.069793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.713 qpair failed and we were unable to recover it. 00:35:53.713 [2024-10-13 01:46:39.069890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.713 [2024-10-13 01:46:39.069916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.713 qpair failed and we were unable to recover it. 00:35:53.713 [2024-10-13 01:46:39.070008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.713 [2024-10-13 01:46:39.070033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.713 qpair failed and we were unable to recover it. 00:35:53.713 [2024-10-13 01:46:39.070127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.713 [2024-10-13 01:46:39.070167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.713 qpair failed and we were unable to recover it. 00:35:53.713 [2024-10-13 01:46:39.070270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.713 [2024-10-13 01:46:39.070309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.713 qpair failed and we were unable to recover it. 00:35:53.713 [2024-10-13 01:46:39.070428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.713 [2024-10-13 01:46:39.070456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.713 qpair failed and we were unable to recover it. 00:35:53.713 [2024-10-13 01:46:39.070557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.713 [2024-10-13 01:46:39.070583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.713 qpair failed and we were unable to recover it. 
00:35:53.713 [2024-10-13 01:46:39.070670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.713 [2024-10-13 01:46:39.070696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.713 qpair failed and we were unable to recover it. 00:35:53.713 [2024-10-13 01:46:39.070782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.713 [2024-10-13 01:46:39.070807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.713 qpair failed and we were unable to recover it. 00:35:53.713 [2024-10-13 01:46:39.070892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.713 [2024-10-13 01:46:39.070918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.713 qpair failed and we were unable to recover it. 00:35:53.713 [2024-10-13 01:46:39.071013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.713 [2024-10-13 01:46:39.071043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.714 qpair failed and we were unable to recover it. 00:35:53.714 [2024-10-13 01:46:39.071164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.714 [2024-10-13 01:46:39.071195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.714 qpair failed and we were unable to recover it. 00:35:53.714 [2024-10-13 01:46:39.071292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.714 [2024-10-13 01:46:39.071319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.714 qpair failed and we were unable to recover it. 00:35:53.714 [2024-10-13 01:46:39.071436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.714 [2024-10-13 01:46:39.071462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.714 qpair failed and we were unable to recover it. 00:35:53.714 [2024-10-13 01:46:39.071573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.714 [2024-10-13 01:46:39.071607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.714 qpair failed and we were unable to recover it. 00:35:53.714 [2024-10-13 01:46:39.071696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.714 [2024-10-13 01:46:39.071723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.714 qpair failed and we were unable to recover it. 00:35:53.714 [2024-10-13 01:46:39.071824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.714 [2024-10-13 01:46:39.071858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.714 qpair failed and we were unable to recover it. 
00:35:53.714 [2024-10-13 01:46:39.071965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.714 [2024-10-13 01:46:39.071993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.714 qpair failed and we were unable to recover it. 00:35:53.714 [2024-10-13 01:46:39.072110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.714 [2024-10-13 01:46:39.072137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.714 qpair failed and we were unable to recover it. 00:35:53.714 [2024-10-13 01:46:39.072256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.714 [2024-10-13 01:46:39.072283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.714 qpair failed and we were unable to recover it. 00:35:53.714 [2024-10-13 01:46:39.072401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.714 [2024-10-13 01:46:39.072428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.714 qpair failed and we were unable to recover it. 00:35:53.714 [2024-10-13 01:46:39.072521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.714 [2024-10-13 01:46:39.072551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.714 qpair failed and we were unable to recover it. 00:35:53.714 [2024-10-13 01:46:39.072644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.714 [2024-10-13 01:46:39.072671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.714 qpair failed and we were unable to recover it. 00:35:53.714 [2024-10-13 01:46:39.072765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.714 [2024-10-13 01:46:39.072791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.714 qpair failed and we were unable to recover it. 00:35:53.714 [2024-10-13 01:46:39.072887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.714 [2024-10-13 01:46:39.072913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.714 qpair failed and we were unable to recover it. 00:35:53.714 [2024-10-13 01:46:39.073024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.714 [2024-10-13 01:46:39.073050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.714 qpair failed and we were unable to recover it. 00:35:53.714 [2024-10-13 01:46:39.073134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.714 [2024-10-13 01:46:39.073159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.714 qpair failed and we were unable to recover it. 
00:35:53.714 [2024-10-13 01:46:39.073292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.714 [2024-10-13 01:46:39.073320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.714 qpair failed and we were unable to recover it. 00:35:53.714 [2024-10-13 01:46:39.073406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.714 [2024-10-13 01:46:39.073444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.714 qpair failed and we were unable to recover it. 00:35:53.714 [2024-10-13 01:46:39.073555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.714 [2024-10-13 01:46:39.073590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.714 qpair failed and we were unable to recover it. 00:35:53.714 [2024-10-13 01:46:39.073723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.714 [2024-10-13 01:46:39.073750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.714 qpair failed and we were unable to recover it. 00:35:53.714 [2024-10-13 01:46:39.073840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.714 [2024-10-13 01:46:39.073865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.714 qpair failed and we were unable to recover it. 00:35:53.714 [2024-10-13 01:46:39.073950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.714 [2024-10-13 01:46:39.073975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.714 qpair failed and we were unable to recover it. 00:35:53.714 [2024-10-13 01:46:39.074057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.714 [2024-10-13 01:46:39.074092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.714 qpair failed and we were unable to recover it. 00:35:53.714 [2024-10-13 01:46:39.074188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.714 [2024-10-13 01:46:39.074213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.714 qpair failed and we were unable to recover it. 00:35:53.714 [2024-10-13 01:46:39.074308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.714 [2024-10-13 01:46:39.074333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.714 qpair failed and we were unable to recover it. 00:35:53.714 [2024-10-13 01:46:39.074443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.714 [2024-10-13 01:46:39.074468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.714 qpair failed and we were unable to recover it. 
00:35:53.714 [2024-10-13 01:46:39.074591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.714 [2024-10-13 01:46:39.074620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.714 qpair failed and we were unable to recover it. 00:35:53.714 [2024-10-13 01:46:39.074713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.714 [2024-10-13 01:46:39.074740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.714 qpair failed and we were unable to recover it. 00:35:53.714 [2024-10-13 01:46:39.074847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.714 [2024-10-13 01:46:39.074875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.714 qpair failed and we were unable to recover it. 00:35:53.714 [2024-10-13 01:46:39.074966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.714 [2024-10-13 01:46:39.074992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.714 qpair failed and we were unable to recover it. 00:35:53.714 [2024-10-13 01:46:39.075132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.714 [2024-10-13 01:46:39.075157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.714 qpair failed and we were unable to recover it. 00:35:53.714 [2024-10-13 01:46:39.075250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.714 [2024-10-13 01:46:39.075278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.714 qpair failed and we were unable to recover it. 00:35:53.714 [2024-10-13 01:46:39.075369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.714 [2024-10-13 01:46:39.075396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.714 qpair failed and we were unable to recover it. 00:35:53.714 [2024-10-13 01:46:39.075499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.714 [2024-10-13 01:46:39.075525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.714 qpair failed and we were unable to recover it. 00:35:53.714 [2024-10-13 01:46:39.075638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.714 [2024-10-13 01:46:39.075663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.714 qpair failed and we were unable to recover it. 00:35:53.714 [2024-10-13 01:46:39.075784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.714 [2024-10-13 01:46:39.075809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.714 qpair failed and we were unable to recover it. 
00:35:53.714 [2024-10-13 01:46:39.075898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.714 [2024-10-13 01:46:39.075923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.714 qpair failed and we were unable to recover it. 00:35:53.714 [2024-10-13 01:46:39.076070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.714 [2024-10-13 01:46:39.076095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.715 qpair failed and we were unable to recover it. 00:35:53.715 [2024-10-13 01:46:39.076247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.715 [2024-10-13 01:46:39.076285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.715 qpair failed and we were unable to recover it. 00:35:53.715 [2024-10-13 01:46:39.076403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.715 [2024-10-13 01:46:39.076430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.715 qpair failed and we were unable to recover it. 00:35:53.715 [2024-10-13 01:46:39.076567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.715 [2024-10-13 01:46:39.076594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.715 qpair failed and we were unable to recover it. 00:35:53.715 [2024-10-13 01:46:39.076703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.715 [2024-10-13 01:46:39.076728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.715 qpair failed and we were unable to recover it. 00:35:53.715 [2024-10-13 01:46:39.076850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.715 [2024-10-13 01:46:39.076875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.715 qpair failed and we were unable to recover it. 00:35:53.715 [2024-10-13 01:46:39.076962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.715 [2024-10-13 01:46:39.077006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.715 qpair failed and we were unable to recover it. 00:35:53.715 [2024-10-13 01:46:39.077141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.715 [2024-10-13 01:46:39.077168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.715 qpair failed and we were unable to recover it. 00:35:53.715 [2024-10-13 01:46:39.077285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.715 [2024-10-13 01:46:39.077311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.715 qpair failed and we were unable to recover it. 
00:35:53.715 [2024-10-13 01:46:39.077437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.715 [2024-10-13 01:46:39.077466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.715 qpair failed and we were unable to recover it. 00:35:53.715 [2024-10-13 01:46:39.077587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.715 [2024-10-13 01:46:39.077614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.715 qpair failed and we were unable to recover it. 00:35:53.715 [2024-10-13 01:46:39.077693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.715 [2024-10-13 01:46:39.077723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.715 qpair failed and we were unable to recover it. 00:35:53.715 [2024-10-13 01:46:39.077825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.715 [2024-10-13 01:46:39.077852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.715 qpair failed and we were unable to recover it. 00:35:53.715 [2024-10-13 01:46:39.077968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.715 [2024-10-13 01:46:39.077993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.715 qpair failed and we were unable to recover it. 00:35:53.715 [2024-10-13 01:46:39.078130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.715 [2024-10-13 01:46:39.078156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.715 qpair failed and we were unable to recover it. 00:35:53.715 [2024-10-13 01:46:39.078243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.715 [2024-10-13 01:46:39.078268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.715 qpair failed and we were unable to recover it. 00:35:53.715 [2024-10-13 01:46:39.078349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.715 [2024-10-13 01:46:39.078376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.715 qpair failed and we were unable to recover it. 00:35:53.715 [2024-10-13 01:46:39.078466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.715 [2024-10-13 01:46:39.078497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.715 qpair failed and we were unable to recover it. 00:35:53.715 [2024-10-13 01:46:39.078580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.715 [2024-10-13 01:46:39.078606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.715 qpair failed and we were unable to recover it. 
00:35:53.715 [2024-10-13 01:46:39.078702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.715 [2024-10-13 01:46:39.078727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.715 qpair failed and we were unable to recover it. 00:35:53.715 [2024-10-13 01:46:39.078869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.715 [2024-10-13 01:46:39.078895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.715 qpair failed and we were unable to recover it. 00:35:53.715 [2024-10-13 01:46:39.078975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.715 [2024-10-13 01:46:39.079001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.715 qpair failed and we were unable to recover it. 00:35:53.715 [2024-10-13 01:46:39.079090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.715 [2024-10-13 01:46:39.079120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.715 qpair failed and we were unable to recover it. 00:35:53.715 [2024-10-13 01:46:39.079205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.715 [2024-10-13 01:46:39.079232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.715 qpair failed and we were unable to recover it. 00:35:53.715 [2024-10-13 01:46:39.079343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.715 [2024-10-13 01:46:39.079370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.715 qpair failed and we were unable to recover it. 00:35:53.715 [2024-10-13 01:46:39.079503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.715 [2024-10-13 01:46:39.079529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.715 qpair failed and we were unable to recover it. 00:35:53.715 [2024-10-13 01:46:39.079621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.715 [2024-10-13 01:46:39.079646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.715 qpair failed and we were unable to recover it. 00:35:53.715 [2024-10-13 01:46:39.079732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.715 [2024-10-13 01:46:39.079758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.715 qpair failed and we were unable to recover it. 00:35:53.715 [2024-10-13 01:46:39.079873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.715 [2024-10-13 01:46:39.079899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.715 qpair failed and we were unable to recover it. 
00:35:53.715 [2024-10-13 01:46:39.079991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.715 [2024-10-13 01:46:39.080018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.715 qpair failed and we were unable to recover it. 00:35:53.715 [2024-10-13 01:46:39.080132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.715 [2024-10-13 01:46:39.080157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.715 qpair failed and we were unable to recover it. 00:35:53.715 [2024-10-13 01:46:39.080244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.715 [2024-10-13 01:46:39.080269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.715 qpair failed and we were unable to recover it. 00:35:53.715 [2024-10-13 01:46:39.080386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.715 [2024-10-13 01:46:39.080414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.715 qpair failed and we were unable to recover it. 00:35:53.715 [2024-10-13 01:46:39.080549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.715 [2024-10-13 01:46:39.080579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.715 qpair failed and we were unable to recover it. 00:35:53.715 [2024-10-13 01:46:39.080671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.715 [2024-10-13 01:46:39.080698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.715 qpair failed and we were unable to recover it. 00:35:53.715 [2024-10-13 01:46:39.080784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.715 [2024-10-13 01:46:39.080809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.715 qpair failed and we were unable to recover it. 00:35:53.715 [2024-10-13 01:46:39.080928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.715 [2024-10-13 01:46:39.080953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.715 qpair failed and we were unable to recover it. 00:35:53.715 [2024-10-13 01:46:39.081059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.715 [2024-10-13 01:46:39.081085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.715 qpair failed and we were unable to recover it. 00:35:53.715 [2024-10-13 01:46:39.081170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.715 [2024-10-13 01:46:39.081196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.715 qpair failed and we were unable to recover it. 
00:35:53.715 [2024-10-13 01:46:39.081283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.715 [2024-10-13 01:46:39.081309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.715 qpair failed and we were unable to recover it. 00:35:53.715 [2024-10-13 01:46:39.081406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.715 [2024-10-13 01:46:39.081434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.715 qpair failed and we were unable to recover it. 00:35:53.715 [2024-10-13 01:46:39.081532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.715 [2024-10-13 01:46:39.081559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.715 qpair failed and we were unable to recover it. 00:35:53.715 [2024-10-13 01:46:39.081655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.716 [2024-10-13 01:46:39.081684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.716 qpair failed and we were unable to recover it. 00:35:53.716 [2024-10-13 01:46:39.081816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.716 [2024-10-13 01:46:39.081842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.716 qpair failed and we were unable to recover it. 00:35:53.716 [2024-10-13 01:46:39.081955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.716 [2024-10-13 01:46:39.081984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.716 qpair failed and we were unable to recover it. 00:35:53.716 [2024-10-13 01:46:39.082107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.716 [2024-10-13 01:46:39.082134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.716 qpair failed and we were unable to recover it. 00:35:53.716 [2024-10-13 01:46:39.082215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.716 [2024-10-13 01:46:39.082242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.716 qpair failed and we were unable to recover it. 00:35:53.716 [2024-10-13 01:46:39.082327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.716 [2024-10-13 01:46:39.082355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.716 qpair failed and we were unable to recover it. 00:35:53.716 [2024-10-13 01:46:39.082440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.716 [2024-10-13 01:46:39.082465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.716 qpair failed and we were unable to recover it. 
00:35:53.716 [2024-10-13 01:46:39.082566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.716 [2024-10-13 01:46:39.082598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.716 qpair failed and we were unable to recover it. 00:35:53.716 [2024-10-13 01:46:39.082684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.716 [2024-10-13 01:46:39.082713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.716 qpair failed and we were unable to recover it. 00:35:53.716 [2024-10-13 01:46:39.082822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.716 [2024-10-13 01:46:39.082848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.716 qpair failed and we were unable to recover it. 00:35:53.716 [2024-10-13 01:46:39.082936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.716 [2024-10-13 01:46:39.082961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.716 qpair failed and we were unable to recover it. 00:35:53.716 [2024-10-13 01:46:39.083099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.716 [2024-10-13 01:46:39.083124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.716 qpair failed and we were unable to recover it. 00:35:53.716 [2024-10-13 01:46:39.083214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.716 [2024-10-13 01:46:39.083240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.716 qpair failed and we were unable to recover it. 00:35:53.716 [2024-10-13 01:46:39.083324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.716 [2024-10-13 01:46:39.083349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.716 qpair failed and we were unable to recover it. 00:35:53.716 [2024-10-13 01:46:39.083494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.716 [2024-10-13 01:46:39.083520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.716 qpair failed and we were unable to recover it. 00:35:53.716 [2024-10-13 01:46:39.083637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.716 [2024-10-13 01:46:39.083662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.716 qpair failed and we were unable to recover it. 00:35:53.716 [2024-10-13 01:46:39.083778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.716 [2024-10-13 01:46:39.083804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.716 qpair failed and we were unable to recover it. 
00:35:53.716 [2024-10-13 01:46:39.083887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.716 [2024-10-13 01:46:39.083913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.716 qpair failed and we were unable to recover it. 00:35:53.716 [2024-10-13 01:46:39.084004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.716 [2024-10-13 01:46:39.084029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.716 qpair failed and we were unable to recover it. 00:35:53.716 [2024-10-13 01:46:39.084160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.716 [2024-10-13 01:46:39.084199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.716 qpair failed and we were unable to recover it. 00:35:53.716 [2024-10-13 01:46:39.084327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.716 [2024-10-13 01:46:39.084355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.716 qpair failed and we were unable to recover it. 00:35:53.716 [2024-10-13 01:46:39.084440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.716 [2024-10-13 01:46:39.084466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.716 qpair failed and we were unable to recover it. 00:35:53.716 [2024-10-13 01:46:39.084561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.716 [2024-10-13 01:46:39.084587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.716 qpair failed and we were unable to recover it. 00:35:53.716 [2024-10-13 01:46:39.084673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.716 [2024-10-13 01:46:39.084699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.716 qpair failed and we were unable to recover it. 00:35:53.716 [2024-10-13 01:46:39.084827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.716 [2024-10-13 01:46:39.084853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.716 qpair failed and we were unable to recover it. 00:35:53.716 [2024-10-13 01:46:39.084978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.716 [2024-10-13 01:46:39.085003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.716 qpair failed and we were unable to recover it. 00:35:53.716 [2024-10-13 01:46:39.085098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.716 [2024-10-13 01:46:39.085124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.716 qpair failed and we were unable to recover it. 
00:35:53.716 [2024-10-13 01:46:39.085229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.716 [2024-10-13 01:46:39.085255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.716 qpair failed and we were unable to recover it. 00:35:53.716 [2024-10-13 01:46:39.085374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.716 [2024-10-13 01:46:39.085401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.716 qpair failed and we were unable to recover it. 00:35:53.716 [2024-10-13 01:46:39.085507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.716 [2024-10-13 01:46:39.085534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.716 qpair failed and we were unable to recover it. 00:35:53.716 [2024-10-13 01:46:39.085629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.716 [2024-10-13 01:46:39.085655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.716 qpair failed and we were unable to recover it. 00:35:53.716 [2024-10-13 01:46:39.085739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.716 [2024-10-13 01:46:39.085765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.716 qpair failed and we were unable to recover it. 00:35:53.716 [2024-10-13 01:46:39.085860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.716 [2024-10-13 01:46:39.085886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.716 qpair failed and we were unable to recover it. 00:35:53.716 [2024-10-13 01:46:39.085991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.716 [2024-10-13 01:46:39.086016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.716 qpair failed and we were unable to recover it. 00:35:53.716 [2024-10-13 01:46:39.086132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.716 [2024-10-13 01:46:39.086163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.716 qpair failed and we were unable to recover it. 00:35:53.716 [2024-10-13 01:46:39.086270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.716 [2024-10-13 01:46:39.086297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.716 qpair failed and we were unable to recover it. 00:35:53.716 [2024-10-13 01:46:39.086416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.716 [2024-10-13 01:46:39.086445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.716 qpair failed and we were unable to recover it. 
00:35:53.716 [2024-10-13 01:46:39.086580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.716 [2024-10-13 01:46:39.086608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420
00:35:53.716 qpair failed and we were unable to recover it.
00:35:53.717 [2024-10-13 01:46:39.087034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.717 [2024-10-13 01:46:39.087060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420
00:35:53.717 qpair failed and we were unable to recover it.
00:35:53.717 [2024-10-13 01:46:39.088549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.717 [2024-10-13 01:46:39.088579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420
00:35:53.717 qpair failed and we were unable to recover it.
00:35:53.720 [2024-10-13 01:46:39.104245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.720 [2024-10-13 01:46:39.104289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420
00:35:53.720 qpair failed and we were unable to recover it.
00:35:53.722 [2024-10-13 01:46:39.116384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.722 [2024-10-13 01:46:39.116411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.722 qpair failed and we were unable to recover it. 00:35:53.722 [2024-10-13 01:46:39.116491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.722 [2024-10-13 01:46:39.116517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.722 qpair failed and we were unable to recover it. 00:35:53.722 [2024-10-13 01:46:39.116628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.722 [2024-10-13 01:46:39.116675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.722 qpair failed and we were unable to recover it. 00:35:53.722 [2024-10-13 01:46:39.116774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.722 [2024-10-13 01:46:39.116803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.722 qpair failed and we were unable to recover it. 00:35:53.722 [2024-10-13 01:46:39.116950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.722 [2024-10-13 01:46:39.116997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.722 qpair failed and we were unable to recover it. 00:35:53.722 [2024-10-13 01:46:39.117138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.722 [2024-10-13 01:46:39.117182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.722 qpair failed and we were unable to recover it. 00:35:53.722 [2024-10-13 01:46:39.117295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.722 [2024-10-13 01:46:39.117324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.722 qpair failed and we were unable to recover it. 00:35:53.722 [2024-10-13 01:46:39.117480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.722 [2024-10-13 01:46:39.117507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.722 qpair failed and we were unable to recover it. 00:35:53.722 [2024-10-13 01:46:39.117603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.722 [2024-10-13 01:46:39.117628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.722 qpair failed and we were unable to recover it. 00:35:53.722 [2024-10-13 01:46:39.117706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.722 [2024-10-13 01:46:39.117733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.722 qpair failed and we were unable to recover it. 
00:35:53.722 [2024-10-13 01:46:39.117861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.722 [2024-10-13 01:46:39.117887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.722 qpair failed and we were unable to recover it. 00:35:53.722 [2024-10-13 01:46:39.118004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.722 [2024-10-13 01:46:39.118035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.722 qpair failed and we were unable to recover it. 00:35:53.722 [2024-10-13 01:46:39.118136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.722 [2024-10-13 01:46:39.118163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.722 qpair failed and we were unable to recover it. 00:35:53.722 [2024-10-13 01:46:39.118280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.722 [2024-10-13 01:46:39.118307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.722 qpair failed and we were unable to recover it. 00:35:53.722 [2024-10-13 01:46:39.118385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.722 [2024-10-13 01:46:39.118411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.722 qpair failed and we were unable to recover it. 00:35:53.722 [2024-10-13 01:46:39.118526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.722 [2024-10-13 01:46:39.118559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.722 qpair failed and we were unable to recover it. 00:35:53.722 [2024-10-13 01:46:39.118686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.722 [2024-10-13 01:46:39.118717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.722 qpair failed and we were unable to recover it. 00:35:53.722 [2024-10-13 01:46:39.118841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.723 [2024-10-13 01:46:39.118869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.723 qpair failed and we were unable to recover it. 00:35:53.723 [2024-10-13 01:46:39.118962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.723 [2024-10-13 01:46:39.118988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.723 qpair failed and we were unable to recover it. 00:35:53.723 [2024-10-13 01:46:39.119102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.723 [2024-10-13 01:46:39.119127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.723 qpair failed and we were unable to recover it. 
00:35:53.723 [2024-10-13 01:46:39.119217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.723 [2024-10-13 01:46:39.119242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.723 qpair failed and we were unable to recover it. 00:35:53.723 [2024-10-13 01:46:39.119318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.723 [2024-10-13 01:46:39.119344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.723 qpair failed and we were unable to recover it. 00:35:53.723 [2024-10-13 01:46:39.119467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.723 [2024-10-13 01:46:39.119499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.723 qpair failed and we were unable to recover it. 00:35:53.723 [2024-10-13 01:46:39.119591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.723 [2024-10-13 01:46:39.119616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.723 qpair failed and we were unable to recover it. 00:35:53.723 [2024-10-13 01:46:39.119700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.723 [2024-10-13 01:46:39.119725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.723 qpair failed and we were unable to recover it. 00:35:53.723 [2024-10-13 01:46:39.119812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.723 [2024-10-13 01:46:39.119838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.723 qpair failed and we were unable to recover it. 00:35:53.723 [2024-10-13 01:46:39.119951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.723 [2024-10-13 01:46:39.119998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.723 qpair failed and we were unable to recover it. 00:35:53.723 [2024-10-13 01:46:39.120078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.723 [2024-10-13 01:46:39.120104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.723 qpair failed and we were unable to recover it. 00:35:53.723 [2024-10-13 01:46:39.120188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.723 [2024-10-13 01:46:39.120216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.723 qpair failed and we were unable to recover it. 00:35:53.723 [2024-10-13 01:46:39.120331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.723 [2024-10-13 01:46:39.120357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.723 qpair failed and we were unable to recover it. 
00:35:53.723 [2024-10-13 01:46:39.120486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.723 [2024-10-13 01:46:39.120519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.723 qpair failed and we were unable to recover it. 00:35:53.723 [2024-10-13 01:46:39.120637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.723 [2024-10-13 01:46:39.120663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.723 qpair failed and we were unable to recover it. 00:35:53.723 [2024-10-13 01:46:39.120776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.723 [2024-10-13 01:46:39.120802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.723 qpair failed and we were unable to recover it. 00:35:53.723 [2024-10-13 01:46:39.120955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.723 [2024-10-13 01:46:39.120982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.723 qpair failed and we were unable to recover it. 00:35:53.723 [2024-10-13 01:46:39.121096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.723 [2024-10-13 01:46:39.121122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.723 qpair failed and we were unable to recover it. 00:35:53.723 [2024-10-13 01:46:39.121246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.723 [2024-10-13 01:46:39.121273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.723 qpair failed and we were unable to recover it. 00:35:53.723 [2024-10-13 01:46:39.121385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.723 [2024-10-13 01:46:39.121411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.723 qpair failed and we were unable to recover it. 00:35:53.723 [2024-10-13 01:46:39.121545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.723 [2024-10-13 01:46:39.121575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.723 qpair failed and we were unable to recover it. 00:35:53.723 [2024-10-13 01:46:39.121708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.723 [2024-10-13 01:46:39.121734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.723 qpair failed and we were unable to recover it. 00:35:53.723 [2024-10-13 01:46:39.121845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.723 [2024-10-13 01:46:39.121877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.723 qpair failed and we were unable to recover it. 
00:35:53.723 [2024-10-13 01:46:39.121968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.723 [2024-10-13 01:46:39.121994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.723 qpair failed and we were unable to recover it. 00:35:53.723 [2024-10-13 01:46:39.122077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.723 [2024-10-13 01:46:39.122104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.723 qpair failed and we were unable to recover it. 00:35:53.723 [2024-10-13 01:46:39.122243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.723 [2024-10-13 01:46:39.122269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.723 qpair failed and we were unable to recover it. 00:35:53.723 [2024-10-13 01:46:39.122389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.723 [2024-10-13 01:46:39.122414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.723 qpair failed and we were unable to recover it. 00:35:53.723 [2024-10-13 01:46:39.122506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.723 [2024-10-13 01:46:39.122532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.723 qpair failed and we were unable to recover it. 00:35:53.723 [2024-10-13 01:46:39.122647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.723 [2024-10-13 01:46:39.122672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.723 qpair failed and we were unable to recover it. 00:35:53.723 [2024-10-13 01:46:39.122779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.723 [2024-10-13 01:46:39.122805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.723 qpair failed and we were unable to recover it. 00:35:53.723 [2024-10-13 01:46:39.122943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.723 [2024-10-13 01:46:39.122999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.723 qpair failed and we were unable to recover it. 00:35:53.723 [2024-10-13 01:46:39.123093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.723 [2024-10-13 01:46:39.123120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.723 qpair failed and we were unable to recover it. 00:35:53.723 [2024-10-13 01:46:39.123216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.723 [2024-10-13 01:46:39.123243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.723 qpair failed and we were unable to recover it. 
00:35:53.723 [2024-10-13 01:46:39.123327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.723 [2024-10-13 01:46:39.123354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.723 qpair failed and we were unable to recover it. 00:35:53.723 [2024-10-13 01:46:39.123525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.723 [2024-10-13 01:46:39.123568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.723 qpair failed and we were unable to recover it. 00:35:53.723 [2024-10-13 01:46:39.123674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.723 [2024-10-13 01:46:39.123708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.723 qpair failed and we were unable to recover it. 00:35:53.724 [2024-10-13 01:46:39.123837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.724 [2024-10-13 01:46:39.123866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.724 qpair failed and we were unable to recover it. 00:35:53.724 [2024-10-13 01:46:39.124002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.724 [2024-10-13 01:46:39.124031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.724 qpair failed and we were unable to recover it. 00:35:53.724 [2024-10-13 01:46:39.124167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.724 [2024-10-13 01:46:39.124193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.724 qpair failed and we were unable to recover it. 00:35:53.724 [2024-10-13 01:46:39.124311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.724 [2024-10-13 01:46:39.124350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.724 qpair failed and we were unable to recover it. 00:35:53.724 [2024-10-13 01:46:39.124482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.724 [2024-10-13 01:46:39.124513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.724 qpair failed and we were unable to recover it. 00:35:53.724 [2024-10-13 01:46:39.124645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.724 [2024-10-13 01:46:39.124689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.724 qpair failed and we were unable to recover it. 00:35:53.724 [2024-10-13 01:46:39.124786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.724 [2024-10-13 01:46:39.124815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.724 qpair failed and we were unable to recover it. 
00:35:53.724 [2024-10-13 01:46:39.124993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.724 [2024-10-13 01:46:39.125037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.724 qpair failed and we were unable to recover it. 00:35:53.724 [2024-10-13 01:46:39.125144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.724 [2024-10-13 01:46:39.125173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.724 qpair failed and we were unable to recover it. 00:35:53.724 [2024-10-13 01:46:39.125306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.724 [2024-10-13 01:46:39.125337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.724 qpair failed and we were unable to recover it. 00:35:53.724 [2024-10-13 01:46:39.125432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.724 [2024-10-13 01:46:39.125460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.724 qpair failed and we were unable to recover it. 00:35:53.724 [2024-10-13 01:46:39.125582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.724 [2024-10-13 01:46:39.125612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.724 qpair failed and we were unable to recover it. 00:35:53.724 [2024-10-13 01:46:39.125739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.724 [2024-10-13 01:46:39.125767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.724 qpair failed and we were unable to recover it. 00:35:53.724 [2024-10-13 01:46:39.125885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.724 [2024-10-13 01:46:39.125911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.724 qpair failed and we were unable to recover it. 00:35:53.724 [2024-10-13 01:46:39.126007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.724 [2024-10-13 01:46:39.126037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.724 qpair failed and we were unable to recover it. 00:35:53.724 [2024-10-13 01:46:39.126129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.724 [2024-10-13 01:46:39.126157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.724 qpair failed and we were unable to recover it. 00:35:53.724 [2024-10-13 01:46:39.126284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.724 [2024-10-13 01:46:39.126314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.724 qpair failed and we were unable to recover it. 
00:35:53.724 [2024-10-13 01:46:39.126429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.724 [2024-10-13 01:46:39.126455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.724 qpair failed and we were unable to recover it. 00:35:53.724 [2024-10-13 01:46:39.126594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.724 [2024-10-13 01:46:39.126620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.724 qpair failed and we were unable to recover it. 00:35:53.724 [2024-10-13 01:46:39.126730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.724 [2024-10-13 01:46:39.126755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.724 qpair failed and we were unable to recover it. 00:35:53.724 [2024-10-13 01:46:39.126874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.724 [2024-10-13 01:46:39.126899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.724 qpair failed and we were unable to recover it. 00:35:53.724 [2024-10-13 01:46:39.127037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.724 [2024-10-13 01:46:39.127076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.724 qpair failed and we were unable to recover it. 00:35:53.724 [2024-10-13 01:46:39.127226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.724 [2024-10-13 01:46:39.127258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.724 qpair failed and we were unable to recover it. 00:35:53.724 [2024-10-13 01:46:39.127417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.724 [2024-10-13 01:46:39.127443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.724 qpair failed and we were unable to recover it. 00:35:53.724 [2024-10-13 01:46:39.127567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.724 [2024-10-13 01:46:39.127596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.724 qpair failed and we were unable to recover it. 00:35:53.724 [2024-10-13 01:46:39.127759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.724 [2024-10-13 01:46:39.127803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.724 qpair failed and we were unable to recover it. 00:35:53.724 [2024-10-13 01:46:39.127933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.724 [2024-10-13 01:46:39.127978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.724 qpair failed and we were unable to recover it. 
00:35:53.724 [2024-10-13 01:46:39.128070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.724 [2024-10-13 01:46:39.128098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.724 qpair failed and we were unable to recover it. 00:35:53.724 [2024-10-13 01:46:39.128235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.724 [2024-10-13 01:46:39.128261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.724 qpair failed and we were unable to recover it. 00:35:53.724 [2024-10-13 01:46:39.128351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.724 [2024-10-13 01:46:39.128383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.724 qpair failed and we were unable to recover it. 00:35:53.724 [2024-10-13 01:46:39.128541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.724 [2024-10-13 01:46:39.128568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.724 qpair failed and we were unable to recover it. 00:35:53.724 [2024-10-13 01:46:39.128690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.724 [2024-10-13 01:46:39.128720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.724 qpair failed and we were unable to recover it. 00:35:53.724 [2024-10-13 01:46:39.128844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.724 [2024-10-13 01:46:39.128870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.724 qpair failed and we were unable to recover it. 00:35:53.724 [2024-10-13 01:46:39.128973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.724 [2024-10-13 01:46:39.128999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.724 qpair failed and we were unable to recover it. 00:35:53.724 [2024-10-13 01:46:39.129086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.724 [2024-10-13 01:46:39.129111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.724 qpair failed and we were unable to recover it. 00:35:53.724 [2024-10-13 01:46:39.129232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.724 [2024-10-13 01:46:39.129258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.724 qpair failed and we were unable to recover it. 00:35:53.724 [2024-10-13 01:46:39.129352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.724 [2024-10-13 01:46:39.129380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.724 qpair failed and we were unable to recover it. 
00:35:53.725 [2024-10-13 01:46:39.129504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.725 [2024-10-13 01:46:39.129532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.725 qpair failed and we were unable to recover it. 00:35:53.725 [2024-10-13 01:46:39.129621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.725 [2024-10-13 01:46:39.129646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.725 qpair failed and we were unable to recover it. 00:35:53.725 [2024-10-13 01:46:39.129761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.725 [2024-10-13 01:46:39.129787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.725 qpair failed and we were unable to recover it. 00:35:53.725 [2024-10-13 01:46:39.129915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.725 [2024-10-13 01:46:39.129940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.725 qpair failed and we were unable to recover it. 00:35:53.725 [2024-10-13 01:46:39.130036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.725 [2024-10-13 01:46:39.130065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.725 qpair failed and we were unable to recover it. 00:35:53.725 [2024-10-13 01:46:39.130168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.725 [2024-10-13 01:46:39.130196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.725 qpair failed and we were unable to recover it. 00:35:53.725 [2024-10-13 01:46:39.130312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.725 [2024-10-13 01:46:39.130338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.725 qpair failed and we were unable to recover it. 00:35:53.725 [2024-10-13 01:46:39.130456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.725 [2024-10-13 01:46:39.130489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.725 qpair failed and we were unable to recover it. 00:35:53.725 [2024-10-13 01:46:39.130612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.725 [2024-10-13 01:46:39.130638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.725 qpair failed and we were unable to recover it. 00:35:53.725 [2024-10-13 01:46:39.130806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.725 [2024-10-13 01:46:39.130835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.725 qpair failed and we were unable to recover it. 
00:35:53.725 [2024-10-13 01:46:39.130927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.725 [2024-10-13 01:46:39.130957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.725 qpair failed and we were unable to recover it. 00:35:53.725 [2024-10-13 01:46:39.131123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.725 [2024-10-13 01:46:39.131154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.725 qpair failed and we were unable to recover it. 00:35:53.725 [2024-10-13 01:46:39.131273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.725 [2024-10-13 01:46:39.131302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.725 qpair failed and we were unable to recover it. 00:35:53.725 [2024-10-13 01:46:39.131391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.725 [2024-10-13 01:46:39.131417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.725 qpair failed and we were unable to recover it. 00:35:53.725 [2024-10-13 01:46:39.131562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.725 [2024-10-13 01:46:39.131590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.725 qpair failed and we were unable to recover it. 00:35:53.725 [2024-10-13 01:46:39.131703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.725 [2024-10-13 01:46:39.131746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.725 qpair failed and we were unable to recover it. 00:35:53.725 [2024-10-13 01:46:39.131878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.725 [2024-10-13 01:46:39.131926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.725 qpair failed and we were unable to recover it. 00:35:53.725 [2024-10-13 01:46:39.132138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.725 [2024-10-13 01:46:39.132187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.725 qpair failed and we were unable to recover it. 00:35:53.725 [2024-10-13 01:46:39.132379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.725 [2024-10-13 01:46:39.132407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.725 qpair failed and we were unable to recover it. 00:35:53.725 [2024-10-13 01:46:39.132517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.725 [2024-10-13 01:46:39.132547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.725 qpair failed and we were unable to recover it. 
00:35:53.725 [2024-10-13 01:46:39.132657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.725 [2024-10-13 01:46:39.132684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.725 qpair failed and we were unable to recover it. 00:35:53.725 [2024-10-13 01:46:39.132769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.725 [2024-10-13 01:46:39.132795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.725 qpair failed and we were unable to recover it. 00:35:53.725 [2024-10-13 01:46:39.132936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.725 [2024-10-13 01:46:39.132966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.725 qpair failed and we were unable to recover it. 00:35:53.725 [2024-10-13 01:46:39.133129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.725 [2024-10-13 01:46:39.133156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.725 qpair failed and we were unable to recover it. 00:35:53.725 [2024-10-13 01:46:39.133248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.725 [2024-10-13 01:46:39.133273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.725 qpair failed and we were unable to recover it. 00:35:53.725 [2024-10-13 01:46:39.133376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.725 [2024-10-13 01:46:39.133404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.725 qpair failed and we were unable to recover it. 00:35:53.725 [2024-10-13 01:46:39.133498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.725 [2024-10-13 01:46:39.133524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.725 qpair failed and we were unable to recover it. 00:35:53.725 [2024-10-13 01:46:39.133662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.725 [2024-10-13 01:46:39.133687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.725 qpair failed and we were unable to recover it. 00:35:53.725 [2024-10-13 01:46:39.133794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.725 [2024-10-13 01:46:39.133819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.725 qpair failed and we were unable to recover it. 00:35:53.725 [2024-10-13 01:46:39.133942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.725 [2024-10-13 01:46:39.133967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.725 qpair failed and we were unable to recover it. 
00:35:53.725 [2024-10-13 01:46:39.134077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.725 [2024-10-13 01:46:39.134121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.725 qpair failed and we were unable to recover it. 00:35:53.725 [2024-10-13 01:46:39.134231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.725 [2024-10-13 01:46:39.134257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.725 qpair failed and we were unable to recover it. 00:35:53.725 [2024-10-13 01:46:39.134331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.725 [2024-10-13 01:46:39.134361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.725 qpair failed and we were unable to recover it. 00:35:53.725 [2024-10-13 01:46:39.134480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.725 [2024-10-13 01:46:39.134506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.725 qpair failed and we were unable to recover it. 00:35:53.725 [2024-10-13 01:46:39.134593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.725 [2024-10-13 01:46:39.134618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.725 qpair failed and we were unable to recover it. 00:35:53.725 [2024-10-13 01:46:39.134726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.725 [2024-10-13 01:46:39.134755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.725 qpair failed and we were unable to recover it. 00:35:53.725 [2024-10-13 01:46:39.134888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.725 [2024-10-13 01:46:39.134917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.725 qpair failed and we were unable to recover it. 00:35:53.725 [2024-10-13 01:46:39.135047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.725 [2024-10-13 01:46:39.135076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.725 qpair failed and we were unable to recover it. 00:35:53.725 [2024-10-13 01:46:39.135227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.725 [2024-10-13 01:46:39.135260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.725 qpair failed and we were unable to recover it. 00:35:53.725 [2024-10-13 01:46:39.135376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.725 [2024-10-13 01:46:39.135404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.725 qpair failed and we were unable to recover it. 
00:35:53.726 [2024-10-13 01:46:39.135543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.726 [2024-10-13 01:46:39.135572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.726 qpair failed and we were unable to recover it. 00:35:53.726 [2024-10-13 01:46:39.135656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.726 [2024-10-13 01:46:39.135681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.726 qpair failed and we were unable to recover it. 00:35:53.726 [2024-10-13 01:46:39.135783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.726 [2024-10-13 01:46:39.135812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.726 qpair failed and we were unable to recover it. 00:35:53.726 [2024-10-13 01:46:39.135903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.726 [2024-10-13 01:46:39.135931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.726 qpair failed and we were unable to recover it. 00:35:53.726 [2024-10-13 01:46:39.136062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.726 [2024-10-13 01:46:39.136090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.726 qpair failed and we were unable to recover it. 00:35:53.726 [2024-10-13 01:46:39.136184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.726 [2024-10-13 01:46:39.136213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.726 qpair failed and we were unable to recover it. 00:35:53.726 [2024-10-13 01:46:39.136340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.726 [2024-10-13 01:46:39.136366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.726 qpair failed and we were unable to recover it. 00:35:53.726 [2024-10-13 01:46:39.136454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.726 [2024-10-13 01:46:39.136485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.726 qpair failed and we were unable to recover it. 00:35:53.726 [2024-10-13 01:46:39.136572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.726 [2024-10-13 01:46:39.136598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.726 qpair failed and we were unable to recover it. 00:35:53.726 [2024-10-13 01:46:39.136681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.726 [2024-10-13 01:46:39.136707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.726 qpair failed and we were unable to recover it. 
00:35:53.726 [2024-10-13 01:46:39.136787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.726 [2024-10-13 01:46:39.136829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.726 qpair failed and we were unable to recover it. 00:35:53.726 [2024-10-13 01:46:39.136918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.726 [2024-10-13 01:46:39.136948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.726 qpair failed and we were unable to recover it. 00:35:53.726 [2024-10-13 01:46:39.137080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.726 [2024-10-13 01:46:39.137107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.726 qpair failed and we were unable to recover it. 00:35:53.726 [2024-10-13 01:46:39.137251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.726 [2024-10-13 01:46:39.137295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.726 qpair failed and we were unable to recover it. 00:35:53.726 [2024-10-13 01:46:39.137392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.726 [2024-10-13 01:46:39.137417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.726 qpair failed and we were unable to recover it. 00:35:53.726 [2024-10-13 01:46:39.137539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.726 [2024-10-13 01:46:39.137564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.726 qpair failed and we were unable to recover it. 00:35:53.726 [2024-10-13 01:46:39.137688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.726 [2024-10-13 01:46:39.137713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.726 qpair failed and we were unable to recover it. 00:35:53.726 [2024-10-13 01:46:39.137877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.726 [2024-10-13 01:46:39.137905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.726 qpair failed and we were unable to recover it. 00:35:53.726 [2024-10-13 01:46:39.138019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.726 [2024-10-13 01:46:39.138044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.726 qpair failed and we were unable to recover it. 00:35:53.726 [2024-10-13 01:46:39.138157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.726 [2024-10-13 01:46:39.138191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.726 qpair failed and we were unable to recover it. 
00:35:53.726 [2024-10-13 01:46:39.138358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.726 [2024-10-13 01:46:39.138385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.726 qpair failed and we were unable to recover it. 00:35:53.726 [2024-10-13 01:46:39.138579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.726 [2024-10-13 01:46:39.138606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.726 qpair failed and we were unable to recover it. 00:35:53.726 [2024-10-13 01:46:39.138724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.726 [2024-10-13 01:46:39.138755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.726 qpair failed and we were unable to recover it. 00:35:53.726 [2024-10-13 01:46:39.138846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.726 [2024-10-13 01:46:39.138871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.726 qpair failed and we were unable to recover it. 00:35:53.726 [2024-10-13 01:46:39.139004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.726 [2024-10-13 01:46:39.139034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.726 qpair failed and we were unable to recover it. 00:35:53.726 [2024-10-13 01:46:39.139156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.726 [2024-10-13 01:46:39.139186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.726 qpair failed and we were unable to recover it. 00:35:53.726 [2024-10-13 01:46:39.139384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.726 [2024-10-13 01:46:39.139441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.726 qpair failed and we were unable to recover it. 00:35:53.726 [2024-10-13 01:46:39.139607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.727 [2024-10-13 01:46:39.139638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.727 qpair failed and we were unable to recover it. 00:35:53.727 [2024-10-13 01:46:39.139756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.727 [2024-10-13 01:46:39.139793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.727 qpair failed and we were unable to recover it. 00:35:53.727 [2024-10-13 01:46:39.139913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.727 [2024-10-13 01:46:39.139939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.727 qpair failed and we were unable to recover it. 
00:35:53.727 [2024-10-13 01:46:39.140099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.727 [2024-10-13 01:46:39.140155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.727 qpair failed and we were unable to recover it. 00:35:53.727 [2024-10-13 01:46:39.140272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.727 [2024-10-13 01:46:39.140299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.727 qpair failed and we were unable to recover it. 00:35:53.727 [2024-10-13 01:46:39.140444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.727 [2024-10-13 01:46:39.140478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.727 qpair failed and we were unable to recover it. 00:35:53.727 [2024-10-13 01:46:39.140572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.727 [2024-10-13 01:46:39.140626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.727 qpair failed and we were unable to recover it. 00:35:53.727 [2024-10-13 01:46:39.140760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.727 [2024-10-13 01:46:39.140796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.727 qpair failed and we were unable to recover it. 00:35:53.727 [2024-10-13 01:46:39.141029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.727 [2024-10-13 01:46:39.141057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.727 qpair failed and we were unable to recover it. 00:35:53.727 [2024-10-13 01:46:39.141152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.727 [2024-10-13 01:46:39.141182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.727 qpair failed and we were unable to recover it. 00:35:53.727 [2024-10-13 01:46:39.141337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.727 [2024-10-13 01:46:39.141367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.727 qpair failed and we were unable to recover it. 00:35:53.727 [2024-10-13 01:46:39.141492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.727 [2024-10-13 01:46:39.141532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.727 qpair failed and we were unable to recover it. 00:35:53.727 [2024-10-13 01:46:39.141699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.727 [2024-10-13 01:46:39.141746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.727 qpair failed and we were unable to recover it. 
00:35:53.727 [2024-10-13 01:46:39.141920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.727 [2024-10-13 01:46:39.141967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.727 qpair failed and we were unable to recover it. 00:35:53.727 [2024-10-13 01:46:39.142083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.727 [2024-10-13 01:46:39.142110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.727 qpair failed and we were unable to recover it. 00:35:53.727 [2024-10-13 01:46:39.142222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.727 [2024-10-13 01:46:39.142249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.727 qpair failed and we were unable to recover it. 00:35:53.727 [2024-10-13 01:46:39.142341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.727 [2024-10-13 01:46:39.142368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.727 qpair failed and we were unable to recover it. 00:35:53.727 [2024-10-13 01:46:39.142476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.727 [2024-10-13 01:46:39.142542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.727 qpair failed and we were unable to recover it. 00:35:53.727 [2024-10-13 01:46:39.142705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.727 [2024-10-13 01:46:39.142736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.727 qpair failed and we were unable to recover it. 00:35:53.727 [2024-10-13 01:46:39.142847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.727 [2024-10-13 01:46:39.142902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.727 qpair failed and we were unable to recover it. 00:35:53.727 [2024-10-13 01:46:39.143062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.727 [2024-10-13 01:46:39.143088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.727 qpair failed and we were unable to recover it. 00:35:53.727 [2024-10-13 01:46:39.143253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.727 [2024-10-13 01:46:39.143282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.727 qpair failed and we were unable to recover it. 00:35:53.727 [2024-10-13 01:46:39.143391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.727 [2024-10-13 01:46:39.143418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.727 qpair failed and we were unable to recover it. 
00:35:53.727 [2024-10-13 01:46:39.143615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.727 [2024-10-13 01:46:39.143645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.727 qpair failed and we were unable to recover it. 00:35:53.727 [2024-10-13 01:46:39.143775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.727 [2024-10-13 01:46:39.143818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.727 qpair failed and we were unable to recover it. 00:35:53.727 [2024-10-13 01:46:39.143947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.727 [2024-10-13 01:46:39.143993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.727 qpair failed and we were unable to recover it. 00:35:53.727 [2024-10-13 01:46:39.144109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.727 [2024-10-13 01:46:39.144136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.727 qpair failed and we were unable to recover it. 00:35:53.727 [2024-10-13 01:46:39.144229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.727 [2024-10-13 01:46:39.144255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.727 qpair failed and we were unable to recover it. 00:35:53.727 [2024-10-13 01:46:39.144374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.727 [2024-10-13 01:46:39.144401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.727 qpair failed and we were unable to recover it. 00:35:53.727 [2024-10-13 01:46:39.144500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.727 [2024-10-13 01:46:39.144539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.727 qpair failed and we were unable to recover it. 00:35:53.727 [2024-10-13 01:46:39.144680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.727 [2024-10-13 01:46:39.144707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.727 qpair failed and we were unable to recover it. 00:35:53.727 [2024-10-13 01:46:39.144797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.727 [2024-10-13 01:46:39.144824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.727 qpair failed and we were unable to recover it. 00:35:53.727 [2024-10-13 01:46:39.144932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.727 [2024-10-13 01:46:39.144965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.727 qpair failed and we were unable to recover it. 
00:35:53.727 [2024-10-13 01:46:39.145080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.727 [2024-10-13 01:46:39.145108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.727 qpair failed and we were unable to recover it. 00:35:53.727 [2024-10-13 01:46:39.145201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.727 [2024-10-13 01:46:39.145229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.727 qpair failed and we were unable to recover it. 00:35:53.727 [2024-10-13 01:46:39.145341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.727 [2024-10-13 01:46:39.145368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.727 qpair failed and we were unable to recover it. 00:35:53.727 [2024-10-13 01:46:39.145460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.728 [2024-10-13 01:46:39.145497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.728 qpair failed and we were unable to recover it. 00:35:53.728 [2024-10-13 01:46:39.145620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.728 [2024-10-13 01:46:39.145649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.728 qpair failed and we were unable to recover it. 00:35:53.728 [2024-10-13 01:46:39.145783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.728 [2024-10-13 01:46:39.145810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.728 qpair failed and we were unable to recover it. 00:35:53.728 [2024-10-13 01:46:39.145931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.728 [2024-10-13 01:46:39.145957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.728 qpair failed and we were unable to recover it. 00:35:53.728 [2024-10-13 01:46:39.146070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.728 [2024-10-13 01:46:39.146097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.728 qpair failed and we were unable to recover it. 00:35:53.728 [2024-10-13 01:46:39.146209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.728 [2024-10-13 01:46:39.146236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.728 qpair failed and we were unable to recover it. 00:35:53.728 [2024-10-13 01:46:39.146388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.728 [2024-10-13 01:46:39.146415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.728 qpair failed and we were unable to recover it. 
00:35:53.728 [2024-10-13 01:46:39.146569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.728 [2024-10-13 01:46:39.146615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.728 qpair failed and we were unable to recover it. 00:35:53.728 [2024-10-13 01:46:39.146748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.728 [2024-10-13 01:46:39.146780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.728 qpair failed and we were unable to recover it. 00:35:53.728 [2024-10-13 01:46:39.146917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.728 [2024-10-13 01:46:39.146943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.728 qpair failed and we were unable to recover it. 00:35:53.728 [2024-10-13 01:46:39.147061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.728 [2024-10-13 01:46:39.147088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.728 qpair failed and we were unable to recover it. 00:35:53.728 [2024-10-13 01:46:39.147233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.728 [2024-10-13 01:46:39.147260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.728 qpair failed and we were unable to recover it. 00:35:53.728 [2024-10-13 01:46:39.147341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.728 [2024-10-13 01:46:39.147368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.728 qpair failed and we were unable to recover it. 00:35:53.728 [2024-10-13 01:46:39.147477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.728 [2024-10-13 01:46:39.147534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.728 qpair failed and we were unable to recover it. 00:35:53.728 [2024-10-13 01:46:39.147710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.728 [2024-10-13 01:46:39.147756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.728 qpair failed and we were unable to recover it. 00:35:53.728 [2024-10-13 01:46:39.147897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.728 [2024-10-13 01:46:39.147927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.728 qpair failed and we were unable to recover it. 00:35:53.728 [2024-10-13 01:46:39.148074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.728 [2024-10-13 01:46:39.148120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.728 qpair failed and we were unable to recover it. 
00:35:53.728 [2024-10-13 01:46:39.148264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.728 [2024-10-13 01:46:39.148291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.728 qpair failed and we were unable to recover it. 00:35:53.728 [2024-10-13 01:46:39.148382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.728 [2024-10-13 01:46:39.148408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.728 qpair failed and we were unable to recover it. 00:35:53.728 [2024-10-13 01:46:39.148524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.728 [2024-10-13 01:46:39.148552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.728 qpair failed and we were unable to recover it. 00:35:53.728 [2024-10-13 01:46:39.148667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.728 [2024-10-13 01:46:39.148695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.728 qpair failed and we were unable to recover it. 00:35:53.728 [2024-10-13 01:46:39.148817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.728 [2024-10-13 01:46:39.148843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.728 qpair failed and we were unable to recover it. 00:35:53.728 [2024-10-13 01:46:39.148960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.728 [2024-10-13 01:46:39.148989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.728 qpair failed and we were unable to recover it. 00:35:53.728 [2024-10-13 01:46:39.149086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.728 [2024-10-13 01:46:39.149127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.728 qpair failed and we were unable to recover it. 00:35:53.728 [2024-10-13 01:46:39.149248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.728 [2024-10-13 01:46:39.149276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.728 qpair failed and we were unable to recover it. 00:35:53.728 [2024-10-13 01:46:39.149392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.728 [2024-10-13 01:46:39.149420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.728 qpair failed and we were unable to recover it. 00:35:53.728 [2024-10-13 01:46:39.149523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.728 [2024-10-13 01:46:39.149550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.728 qpair failed and we were unable to recover it. 
00:35:53.728 [2024-10-13 01:46:39.149645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.728 [2024-10-13 01:46:39.149672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.728 qpair failed and we were unable to recover it. 00:35:53.728 [2024-10-13 01:46:39.149802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.728 [2024-10-13 01:46:39.149831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.728 qpair failed and we were unable to recover it. 00:35:53.728 [2024-10-13 01:46:39.149930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.728 [2024-10-13 01:46:39.149959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.728 qpair failed and we were unable to recover it. 00:35:53.728 [2024-10-13 01:46:39.150093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.728 [2024-10-13 01:46:39.150119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.728 qpair failed and we were unable to recover it. 00:35:53.728 [2024-10-13 01:46:39.150265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.728 [2024-10-13 01:46:39.150291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.728 qpair failed and we were unable to recover it. 00:35:53.728 [2024-10-13 01:46:39.150383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.728 [2024-10-13 01:46:39.150412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.728 qpair failed and we were unable to recover it. 00:35:53.728 [2024-10-13 01:46:39.150524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.728 [2024-10-13 01:46:39.150551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.728 qpair failed and we were unable to recover it. 00:35:53.728 [2024-10-13 01:46:39.150627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.728 [2024-10-13 01:46:39.150670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.728 qpair failed and we were unable to recover it. 00:35:53.728 [2024-10-13 01:46:39.150801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.728 [2024-10-13 01:46:39.150829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.728 qpair failed and we were unable to recover it. 00:35:53.728 [2024-10-13 01:46:39.151027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.728 [2024-10-13 01:46:39.151056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.728 qpair failed and we were unable to recover it. 
00:35:53.729 [2024-10-13 01:46:39.151178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.729 [2024-10-13 01:46:39.151208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.729 qpair failed and we were unable to recover it. 00:35:53.729 [2024-10-13 01:46:39.151314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.729 [2024-10-13 01:46:39.151340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.729 qpair failed and we were unable to recover it. 00:35:53.729 [2024-10-13 01:46:39.151450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.729 [2024-10-13 01:46:39.151482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.729 qpair failed and we were unable to recover it. 00:35:53.729 [2024-10-13 01:46:39.151593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.729 [2024-10-13 01:46:39.151620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.729 qpair failed and we were unable to recover it. 00:35:53.729 [2024-10-13 01:46:39.151713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.729 [2024-10-13 01:46:39.151755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.729 qpair failed and we were unable to recover it. 00:35:53.729 [2024-10-13 01:46:39.151880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.729 [2024-10-13 01:46:39.151910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.729 qpair failed and we were unable to recover it. 00:35:53.729 [2024-10-13 01:46:39.152068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.729 [2024-10-13 01:46:39.152116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.729 qpair failed and we were unable to recover it. 00:35:53.729 [2024-10-13 01:46:39.152203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.729 [2024-10-13 01:46:39.152230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.729 qpair failed and we were unable to recover it. 00:35:53.729 [2024-10-13 01:46:39.152372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.729 [2024-10-13 01:46:39.152399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.729 qpair failed and we were unable to recover it. 00:35:53.729 [2024-10-13 01:46:39.152488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.729 [2024-10-13 01:46:39.152516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.729 qpair failed and we were unable to recover it. 
00:35:53.729 [2024-10-13 01:46:39.152597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.729 [2024-10-13 01:46:39.152622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.729 qpair failed and we were unable to recover it. 00:35:53.729 [2024-10-13 01:46:39.152711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.729 [2024-10-13 01:46:39.152739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.729 qpair failed and we were unable to recover it. 00:35:53.729 [2024-10-13 01:46:39.152872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.729 [2024-10-13 01:46:39.152917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.729 qpair failed and we were unable to recover it. 00:35:53.729 [2024-10-13 01:46:39.153024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.729 [2024-10-13 01:46:39.153064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.729 qpair failed and we were unable to recover it. 00:35:53.729 [2024-10-13 01:46:39.153187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.729 [2024-10-13 01:46:39.153215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.729 qpair failed and we were unable to recover it. 00:35:53.729 [2024-10-13 01:46:39.153338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.729 [2024-10-13 01:46:39.153365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.729 qpair failed and we were unable to recover it. 00:35:53.729 [2024-10-13 01:46:39.153484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.729 [2024-10-13 01:46:39.153529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.729 qpair failed and we were unable to recover it. 00:35:53.729 [2024-10-13 01:46:39.153682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.729 [2024-10-13 01:46:39.153711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.729 qpair failed and we were unable to recover it. 00:35:53.729 [2024-10-13 01:46:39.153832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.729 [2024-10-13 01:46:39.153861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.729 qpair failed and we were unable to recover it. 00:35:53.729 [2024-10-13 01:46:39.154009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.729 [2024-10-13 01:46:39.154040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.729 qpair failed and we were unable to recover it. 
00:35:53.729 [2024-10-13 01:46:39.154172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.729 [2024-10-13 01:46:39.154200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.729 qpair failed and we were unable to recover it. 00:35:53.729 [2024-10-13 01:46:39.154353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.729 [2024-10-13 01:46:39.154381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.729 qpair failed and we were unable to recover it. 00:35:53.729 [2024-10-13 01:46:39.154469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.729 [2024-10-13 01:46:39.154519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.729 qpair failed and we were unable to recover it. 00:35:53.729 [2024-10-13 01:46:39.154677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.729 [2024-10-13 01:46:39.154706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.729 qpair failed and we were unable to recover it. 00:35:53.729 [2024-10-13 01:46:39.154805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.729 [2024-10-13 01:46:39.154834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.729 qpair failed and we were unable to recover it. 00:35:53.729 [2024-10-13 01:46:39.154935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.729 [2024-10-13 01:46:39.154964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.729 qpair failed and we were unable to recover it. 00:35:53.729 [2024-10-13 01:46:39.155071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.729 [2024-10-13 01:46:39.155097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.729 qpair failed and we were unable to recover it. 00:35:53.729 [2024-10-13 01:46:39.155240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.729 [2024-10-13 01:46:39.155283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.729 qpair failed and we were unable to recover it. 00:35:53.729 [2024-10-13 01:46:39.155428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.729 [2024-10-13 01:46:39.155456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.729 qpair failed and we were unable to recover it. 00:35:53.729 [2024-10-13 01:46:39.155604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.729 [2024-10-13 01:46:39.155632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.729 qpair failed and we were unable to recover it. 
00:35:53.729 [2024-10-13 01:46:39.155773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.729 [2024-10-13 01:46:39.155802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.729 qpair failed and we were unable to recover it. 00:35:53.729 [2024-10-13 01:46:39.155929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.729 [2024-10-13 01:46:39.155960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.729 qpair failed and we were unable to recover it. 00:35:53.729 [2024-10-13 01:46:39.156078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.729 [2024-10-13 01:46:39.156108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.729 qpair failed and we were unable to recover it. 00:35:53.729 [2024-10-13 01:46:39.156280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.729 [2024-10-13 01:46:39.156327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.729 qpair failed and we were unable to recover it. 00:35:53.729 [2024-10-13 01:46:39.156413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.729 [2024-10-13 01:46:39.156441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.729 qpair failed and we were unable to recover it. 00:35:53.729 [2024-10-13 01:46:39.156585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.729 [2024-10-13 01:46:39.156615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.729 qpair failed and we were unable to recover it. 00:35:53.729 [2024-10-13 01:46:39.156714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.729 [2024-10-13 01:46:39.156743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.729 qpair failed and we were unable to recover it. 00:35:53.729 [2024-10-13 01:46:39.156870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.729 [2024-10-13 01:46:39.156900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.729 qpair failed and we were unable to recover it. 00:35:53.729 [2024-10-13 01:46:39.156992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.729 [2024-10-13 01:46:39.157021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.729 qpair failed and we were unable to recover it. 00:35:53.730 [2024-10-13 01:46:39.157146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.730 [2024-10-13 01:46:39.157177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.730 qpair failed and we were unable to recover it. 
00:35:53.730 [2024-10-13 01:46:39.157345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.730 [2024-10-13 01:46:39.157374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.730 qpair failed and we were unable to recover it. 00:35:53.730 [2024-10-13 01:46:39.157495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.730 [2024-10-13 01:46:39.157534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.730 qpair failed and we were unable to recover it. 00:35:53.730 [2024-10-13 01:46:39.157693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.730 [2024-10-13 01:46:39.157739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.730 qpair failed and we were unable to recover it. 00:35:53.730 [2024-10-13 01:46:39.157871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.730 [2024-10-13 01:46:39.157901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.730 qpair failed and we were unable to recover it. 00:35:53.730 [2024-10-13 01:46:39.158050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.730 [2024-10-13 01:46:39.158094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.730 qpair failed and we were unable to recover it. 00:35:53.730 [2024-10-13 01:46:39.158212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.730 [2024-10-13 01:46:39.158239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.730 qpair failed and we were unable to recover it. 00:35:53.730 [2024-10-13 01:46:39.158363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.730 [2024-10-13 01:46:39.158391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.730 qpair failed and we were unable to recover it. 00:35:53.730 [2024-10-13 01:46:39.158528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.730 [2024-10-13 01:46:39.158556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.730 qpair failed and we were unable to recover it. 00:35:53.730 [2024-10-13 01:46:39.158716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.730 [2024-10-13 01:46:39.158746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.730 qpair failed and we were unable to recover it. 00:35:53.730 [2024-10-13 01:46:39.158930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.730 [2024-10-13 01:46:39.158983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.730 qpair failed and we were unable to recover it. 
00:35:53.730 [2024-10-13 01:46:39.159123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.730 [2024-10-13 01:46:39.159153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.730 qpair failed and we were unable to recover it. 00:35:53.730 [2024-10-13 01:46:39.159276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.730 [2024-10-13 01:46:39.159305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.730 qpair failed and we were unable to recover it. 00:35:53.730 [2024-10-13 01:46:39.159408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.730 [2024-10-13 01:46:39.159439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.730 qpair failed and we were unable to recover it. 00:35:53.730 [2024-10-13 01:46:39.159556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.730 [2024-10-13 01:46:39.159587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.730 qpair failed and we were unable to recover it. 00:35:53.730 [2024-10-13 01:46:39.159719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.730 [2024-10-13 01:46:39.159748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.730 qpair failed and we were unable to recover it. 00:35:53.730 [2024-10-13 01:46:39.159972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.730 [2024-10-13 01:46:39.160027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.730 qpair failed and we were unable to recover it. 00:35:53.730 [2024-10-13 01:46:39.160121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.730 [2024-10-13 01:46:39.160150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.730 qpair failed and we were unable to recover it. 00:35:53.730 [2024-10-13 01:46:39.160276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.730 [2024-10-13 01:46:39.160304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.730 qpair failed and we were unable to recover it. 00:35:53.730 [2024-10-13 01:46:39.160393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.730 [2024-10-13 01:46:39.160436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.730 qpair failed and we were unable to recover it. 00:35:53.730 [2024-10-13 01:46:39.160533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.730 [2024-10-13 01:46:39.160559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.730 qpair failed and we were unable to recover it. 
00:35:53.730 [2024-10-13 01:46:39.160692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.730 [2024-10-13 01:46:39.160720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.730 qpair failed and we were unable to recover it. 00:35:53.730 [2024-10-13 01:46:39.160844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.730 [2024-10-13 01:46:39.160873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.730 qpair failed and we were unable to recover it. 00:35:53.730 [2024-10-13 01:46:39.161003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.730 [2024-10-13 01:46:39.161032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.730 qpair failed and we were unable to recover it. 00:35:53.730 [2024-10-13 01:46:39.161152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.730 [2024-10-13 01:46:39.161180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.730 qpair failed and we were unable to recover it. 00:35:53.730 [2024-10-13 01:46:39.161309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.730 [2024-10-13 01:46:39.161338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.730 qpair failed and we were unable to recover it. 00:35:53.730 [2024-10-13 01:46:39.161468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.730 [2024-10-13 01:46:39.161524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.730 qpair failed and we were unable to recover it. 00:35:53.730 [2024-10-13 01:46:39.161637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.730 [2024-10-13 01:46:39.161663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.730 qpair failed and we were unable to recover it. 00:35:53.730 [2024-10-13 01:46:39.161758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.730 [2024-10-13 01:46:39.161786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.730 qpair failed and we were unable to recover it. 00:35:53.730 [2024-10-13 01:46:39.161914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.730 [2024-10-13 01:46:39.161957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.730 qpair failed and we were unable to recover it. 00:35:53.730 [2024-10-13 01:46:39.162081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.730 [2024-10-13 01:46:39.162107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.730 qpair failed and we were unable to recover it. 
00:35:53.730 [2024-10-13 01:46:39.162242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.730 [2024-10-13 01:46:39.162270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420
00:35:53.730 qpair failed and we were unable to recover it.
00:35:53.731 [2024-10-13 01:46:39.163370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:53.731 [2024-10-13 01:46:39.163400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420
00:35:53.731 qpair failed and we were unable to recover it.
[... the same three-record failure repeats continuously from 01:46:39.162477 through 01:46:39.194151 (elapsed 00:35:53.730 to 00:35:53.736), alternating between tqpair=0x7f483c000b90 and tqpair=0x1d44b60, always with addr=10.0.0.2, port=4420 and errno = 111, and every attempt ends with "qpair failed and we were unable to recover it." ...]
00:35:53.736 [2024-10-13 01:46:39.194284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.736 [2024-10-13 01:46:39.194313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.736 qpair failed and we were unable to recover it. 00:35:53.736 [2024-10-13 01:46:39.194435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.736 [2024-10-13 01:46:39.194514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.736 qpair failed and we were unable to recover it. 00:35:53.736 [2024-10-13 01:46:39.194625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.736 [2024-10-13 01:46:39.194659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.736 qpair failed and we were unable to recover it. 00:35:53.736 [2024-10-13 01:46:39.194756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.736 [2024-10-13 01:46:39.194783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.736 qpair failed and we were unable to recover it. 00:35:53.736 [2024-10-13 01:46:39.194940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.736 [2024-10-13 01:46:39.194966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.736 qpair failed and we were unable to recover it. 00:35:53.736 [2024-10-13 01:46:39.195081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.736 [2024-10-13 01:46:39.195108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.737 qpair failed and we were unable to recover it. 00:35:53.737 [2024-10-13 01:46:39.195229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.737 [2024-10-13 01:46:39.195267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.737 qpair failed and we were unable to recover it. 00:35:53.737 [2024-10-13 01:46:39.195409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.737 [2024-10-13 01:46:39.195438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.737 qpair failed and we were unable to recover it. 00:35:53.737 [2024-10-13 01:46:39.195565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.737 [2024-10-13 01:46:39.195595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.737 qpair failed and we were unable to recover it. 00:35:53.737 [2024-10-13 01:46:39.195736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.737 [2024-10-13 01:46:39.195762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.737 qpair failed and we were unable to recover it. 
00:35:53.737 [2024-10-13 01:46:39.195883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.737 [2024-10-13 01:46:39.195908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.737 qpair failed and we were unable to recover it. 00:35:53.737 [2024-10-13 01:46:39.196002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.737 [2024-10-13 01:46:39.196028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.737 qpair failed and we were unable to recover it. 00:35:53.737 [2024-10-13 01:46:39.196167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.737 [2024-10-13 01:46:39.196193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.737 qpair failed and we were unable to recover it. 00:35:53.737 [2024-10-13 01:46:39.196353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.737 [2024-10-13 01:46:39.196382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.737 qpair failed and we were unable to recover it. 00:35:53.737 [2024-10-13 01:46:39.196539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.737 [2024-10-13 01:46:39.196567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.737 qpair failed and we were unable to recover it. 00:35:53.737 [2024-10-13 01:46:39.196654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.737 [2024-10-13 01:46:39.196682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.737 qpair failed and we were unable to recover it. 00:35:53.737 [2024-10-13 01:46:39.196846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.737 [2024-10-13 01:46:39.196875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.737 qpair failed and we were unable to recover it. 00:35:53.737 [2024-10-13 01:46:39.196988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.737 [2024-10-13 01:46:39.197014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.737 qpair failed and we were unable to recover it. 00:35:53.737 [2024-10-13 01:46:39.197153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.737 [2024-10-13 01:46:39.197179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.737 qpair failed and we were unable to recover it. 00:35:53.737 [2024-10-13 01:46:39.197349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.737 [2024-10-13 01:46:39.197379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.737 qpair failed and we were unable to recover it. 
00:35:53.737 [2024-10-13 01:46:39.197522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.737 [2024-10-13 01:46:39.197550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.737 qpair failed and we were unable to recover it. 00:35:53.737 [2024-10-13 01:46:39.197688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.737 [2024-10-13 01:46:39.197714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.737 qpair failed and we were unable to recover it. 00:35:53.737 [2024-10-13 01:46:39.197827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.737 [2024-10-13 01:46:39.197854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.737 qpair failed and we were unable to recover it. 00:35:53.737 [2024-10-13 01:46:39.197945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.737 [2024-10-13 01:46:39.197972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.737 qpair failed and we were unable to recover it. 00:35:53.737 [2024-10-13 01:46:39.198061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.737 [2024-10-13 01:46:39.198089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.737 qpair failed and we were unable to recover it. 00:35:53.737 [2024-10-13 01:46:39.198179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.737 [2024-10-13 01:46:39.198205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.737 qpair failed and we were unable to recover it. 00:35:53.737 [2024-10-13 01:46:39.198312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.737 [2024-10-13 01:46:39.198338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.737 qpair failed and we were unable to recover it. 00:35:53.737 [2024-10-13 01:46:39.198476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.737 [2024-10-13 01:46:39.198502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.737 qpair failed and we were unable to recover it. 00:35:53.737 [2024-10-13 01:46:39.198619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.737 [2024-10-13 01:46:39.198645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.737 qpair failed and we were unable to recover it. 00:35:53.737 [2024-10-13 01:46:39.198748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.737 [2024-10-13 01:46:39.198794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.737 qpair failed and we were unable to recover it. 
00:35:53.737 [2024-10-13 01:46:39.198934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.737 [2024-10-13 01:46:39.198965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.737 qpair failed and we were unable to recover it. 00:35:53.737 [2024-10-13 01:46:39.199088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.737 [2024-10-13 01:46:39.199117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.737 qpair failed and we were unable to recover it. 00:35:53.737 [2024-10-13 01:46:39.199241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.737 [2024-10-13 01:46:39.199271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.737 qpair failed and we were unable to recover it. 00:35:53.737 [2024-10-13 01:46:39.199393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.737 [2024-10-13 01:46:39.199422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.737 qpair failed and we were unable to recover it. 00:35:53.737 [2024-10-13 01:46:39.199568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.737 [2024-10-13 01:46:39.199595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.737 qpair failed and we were unable to recover it. 00:35:53.737 [2024-10-13 01:46:39.199691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.737 [2024-10-13 01:46:39.199716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.737 qpair failed and we were unable to recover it. 00:35:53.737 [2024-10-13 01:46:39.199803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.737 [2024-10-13 01:46:39.199829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.737 qpair failed and we were unable to recover it. 00:35:53.737 [2024-10-13 01:46:39.199956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.737 [2024-10-13 01:46:39.199982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.737 qpair failed and we were unable to recover it. 00:35:53.737 [2024-10-13 01:46:39.200145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.737 [2024-10-13 01:46:39.200174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.737 qpair failed and we were unable to recover it. 00:35:53.737 [2024-10-13 01:46:39.200291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.737 [2024-10-13 01:46:39.200320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.737 qpair failed and we were unable to recover it. 
00:35:53.737 [2024-10-13 01:46:39.200486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.737 [2024-10-13 01:46:39.200531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.737 qpair failed and we were unable to recover it. 00:35:53.737 [2024-10-13 01:46:39.200682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.737 [2024-10-13 01:46:39.200708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.737 qpair failed and we were unable to recover it. 00:35:53.737 [2024-10-13 01:46:39.200826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.737 [2024-10-13 01:46:39.200852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.737 qpair failed and we were unable to recover it. 00:35:53.737 [2024-10-13 01:46:39.200965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.737 [2024-10-13 01:46:39.201008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.737 qpair failed and we were unable to recover it. 00:35:53.737 [2024-10-13 01:46:39.201117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.737 [2024-10-13 01:46:39.201149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.737 qpair failed and we were unable to recover it. 00:35:53.738 [2024-10-13 01:46:39.201267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.738 [2024-10-13 01:46:39.201310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.738 qpair failed and we were unable to recover it. 00:35:53.738 [2024-10-13 01:46:39.201432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.738 [2024-10-13 01:46:39.201461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.738 qpair failed and we were unable to recover it. 00:35:53.738 [2024-10-13 01:46:39.201626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.738 [2024-10-13 01:46:39.201652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.738 qpair failed and we were unable to recover it. 00:35:53.738 [2024-10-13 01:46:39.201766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.738 [2024-10-13 01:46:39.201793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.738 qpair failed and we were unable to recover it. 00:35:53.738 [2024-10-13 01:46:39.201911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.738 [2024-10-13 01:46:39.201938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.738 qpair failed and we were unable to recover it. 
00:35:53.738 [2024-10-13 01:46:39.202105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.738 [2024-10-13 01:46:39.202134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.738 qpair failed and we were unable to recover it. 00:35:53.738 [2024-10-13 01:46:39.202268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.738 [2024-10-13 01:46:39.202311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.738 qpair failed and we were unable to recover it. 00:35:53.738 [2024-10-13 01:46:39.202446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.738 [2024-10-13 01:46:39.202481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.738 qpair failed and we were unable to recover it. 00:35:53.738 [2024-10-13 01:46:39.202619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.738 [2024-10-13 01:46:39.202646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.738 qpair failed and we were unable to recover it. 00:35:53.738 [2024-10-13 01:46:39.202758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.738 [2024-10-13 01:46:39.202784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.738 qpair failed and we were unable to recover it. 00:35:53.738 [2024-10-13 01:46:39.202903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.738 [2024-10-13 01:46:39.202929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.738 qpair failed and we were unable to recover it. 00:35:53.738 [2024-10-13 01:46:39.203064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.738 [2024-10-13 01:46:39.203098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.738 qpair failed and we were unable to recover it. 00:35:53.738 [2024-10-13 01:46:39.203248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.738 [2024-10-13 01:46:39.203278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.738 qpair failed and we were unable to recover it. 00:35:53.738 [2024-10-13 01:46:39.203431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.738 [2024-10-13 01:46:39.203460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.738 qpair failed and we were unable to recover it. 00:35:53.738 [2024-10-13 01:46:39.203570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.738 [2024-10-13 01:46:39.203595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.738 qpair failed and we were unable to recover it. 
00:35:53.738 [2024-10-13 01:46:39.203699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.738 [2024-10-13 01:46:39.203725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.738 qpair failed and we were unable to recover it. 00:35:53.738 [2024-10-13 01:46:39.203811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.738 [2024-10-13 01:46:39.203837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.738 qpair failed and we were unable to recover it. 00:35:53.738 [2024-10-13 01:46:39.203946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.738 [2024-10-13 01:46:39.203975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.738 qpair failed and we were unable to recover it. 00:35:53.738 [2024-10-13 01:46:39.204084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.738 [2024-10-13 01:46:39.204115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.738 qpair failed and we were unable to recover it. 00:35:53.738 [2024-10-13 01:46:39.204269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.738 [2024-10-13 01:46:39.204298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.738 qpair failed and we were unable to recover it. 00:35:53.738 [2024-10-13 01:46:39.204427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.738 [2024-10-13 01:46:39.204456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.738 qpair failed and we were unable to recover it. 00:35:53.738 [2024-10-13 01:46:39.204596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.738 [2024-10-13 01:46:39.204623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.738 qpair failed and we were unable to recover it. 00:35:53.738 [2024-10-13 01:46:39.204765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.738 [2024-10-13 01:46:39.204791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.738 qpair failed and we were unable to recover it. 00:35:53.738 [2024-10-13 01:46:39.204937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.738 [2024-10-13 01:46:39.205005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.738 qpair failed and we were unable to recover it. 00:35:53.738 [2024-10-13 01:46:39.205161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.738 [2024-10-13 01:46:39.205191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.738 qpair failed and we were unable to recover it. 
00:35:53.738 [2024-10-13 01:46:39.205321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.738 [2024-10-13 01:46:39.205351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.738 qpair failed and we were unable to recover it. 00:35:53.738 [2024-10-13 01:46:39.205529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.738 [2024-10-13 01:46:39.205557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.738 qpair failed and we were unable to recover it. 00:35:53.738 [2024-10-13 01:46:39.205643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.738 [2024-10-13 01:46:39.205669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.738 qpair failed and we were unable to recover it. 00:35:53.738 [2024-10-13 01:46:39.205757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.738 [2024-10-13 01:46:39.205800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.738 qpair failed and we were unable to recover it. 00:35:53.738 [2024-10-13 01:46:39.205940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.738 [2024-10-13 01:46:39.205969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.738 qpair failed and we were unable to recover it. 00:35:53.738 [2024-10-13 01:46:39.206098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.738 [2024-10-13 01:46:39.206127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.738 qpair failed and we were unable to recover it. 00:35:53.738 [2024-10-13 01:46:39.206279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.738 [2024-10-13 01:46:39.206308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.738 qpair failed and we were unable to recover it. 00:35:53.738 [2024-10-13 01:46:39.206426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.738 [2024-10-13 01:46:39.206454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.738 qpair failed and we were unable to recover it. 00:35:53.738 [2024-10-13 01:46:39.206639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.738 [2024-10-13 01:46:39.206696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.738 qpair failed and we were unable to recover it. 00:35:53.738 [2024-10-13 01:46:39.206877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.738 [2024-10-13 01:46:39.206928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.738 qpair failed and we were unable to recover it. 
00:35:53.738 [2024-10-13 01:46:39.207141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.738 [2024-10-13 01:46:39.207170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.738 qpair failed and we were unable to recover it. 00:35:53.738 [2024-10-13 01:46:39.207336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.738 [2024-10-13 01:46:39.207363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.738 qpair failed and we were unable to recover it. 00:35:53.738 [2024-10-13 01:46:39.207486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.738 [2024-10-13 01:46:39.207517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.738 qpair failed and we were unable to recover it. 00:35:53.738 [2024-10-13 01:46:39.207659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.738 [2024-10-13 01:46:39.207703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.738 qpair failed and we were unable to recover it. 00:35:53.738 [2024-10-13 01:46:39.207861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.738 [2024-10-13 01:46:39.207892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.738 qpair failed and we were unable to recover it. 00:35:53.738 [2024-10-13 01:46:39.207986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.739 [2024-10-13 01:46:39.208014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.739 qpair failed and we were unable to recover it. 00:35:53.739 [2024-10-13 01:46:39.208132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.739 [2024-10-13 01:46:39.208161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.739 qpair failed and we were unable to recover it. 00:35:53.739 [2024-10-13 01:46:39.208278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.739 [2024-10-13 01:46:39.208308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.739 qpair failed and we were unable to recover it. 00:35:53.739 [2024-10-13 01:46:39.208439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.739 [2024-10-13 01:46:39.208490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.739 qpair failed and we were unable to recover it. 00:35:53.739 [2024-10-13 01:46:39.208590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.739 [2024-10-13 01:46:39.208633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.739 qpair failed and we were unable to recover it. 
00:35:53.739 [2024-10-13 01:46:39.208805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.739 [2024-10-13 01:46:39.208834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.739 qpair failed and we were unable to recover it. 00:35:53.739 [2024-10-13 01:46:39.208971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.739 [2024-10-13 01:46:39.208999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.739 qpair failed and we were unable to recover it. 00:35:53.739 [2024-10-13 01:46:39.209155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.739 [2024-10-13 01:46:39.209184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.739 qpair failed and we were unable to recover it. 00:35:53.739 [2024-10-13 01:46:39.209336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.739 [2024-10-13 01:46:39.209365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.739 qpair failed and we were unable to recover it. 00:35:53.739 [2024-10-13 01:46:39.209494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.739 [2024-10-13 01:46:39.209547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.739 qpair failed and we were unable to recover it. 00:35:53.739 [2024-10-13 01:46:39.209687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.739 [2024-10-13 01:46:39.209718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.739 qpair failed and we were unable to recover it. 00:35:53.739 [2024-10-13 01:46:39.209884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.739 [2024-10-13 01:46:39.209915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.739 qpair failed and we were unable to recover it. 00:35:53.739 [2024-10-13 01:46:39.210011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.739 [2024-10-13 01:46:39.210040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.739 qpair failed and we were unable to recover it. 00:35:53.739 [2024-10-13 01:46:39.210194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.739 [2024-10-13 01:46:39.210223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.739 qpair failed and we were unable to recover it. 00:35:53.739 [2024-10-13 01:46:39.210351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.739 [2024-10-13 01:46:39.210378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.739 qpair failed and we were unable to recover it. 
00:35:53.739 [2024-10-13 01:46:39.210466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.739 [2024-10-13 01:46:39.210499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.739 qpair failed and we were unable to recover it. 00:35:53.739 [2024-10-13 01:46:39.210613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.739 [2024-10-13 01:46:39.210639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.739 qpair failed and we were unable to recover it. 00:35:53.739 [2024-10-13 01:46:39.210790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.739 [2024-10-13 01:46:39.210819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.739 qpair failed and we were unable to recover it. 00:35:53.739 [2024-10-13 01:46:39.210946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.739 [2024-10-13 01:46:39.210977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.739 qpair failed and we were unable to recover it. 00:35:53.739 [2024-10-13 01:46:39.211109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.739 [2024-10-13 01:46:39.211141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.739 qpair failed and we were unable to recover it. 00:35:53.739 [2024-10-13 01:46:39.211282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.739 [2024-10-13 01:46:39.211311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.739 qpair failed and we were unable to recover it. 00:35:53.739 [2024-10-13 01:46:39.211464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.739 [2024-10-13 01:46:39.211500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.739 qpair failed and we were unable to recover it. 00:35:53.739 [2024-10-13 01:46:39.211624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.739 [2024-10-13 01:46:39.211650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.739 qpair failed and we were unable to recover it. 00:35:53.739 [2024-10-13 01:46:39.211803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.739 [2024-10-13 01:46:39.211831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.739 qpair failed and we were unable to recover it. 00:35:53.739 [2024-10-13 01:46:39.211954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.739 [2024-10-13 01:46:39.211983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.739 qpair failed and we were unable to recover it. 
00:35:53.739 [2024-10-13 01:46:39.212114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.739 [2024-10-13 01:46:39.212144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.739 qpair failed and we were unable to recover it. 00:35:53.739 [2024-10-13 01:46:39.212273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.739 [2024-10-13 01:46:39.212302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.739 qpair failed and we were unable to recover it. 00:35:53.739 [2024-10-13 01:46:39.212432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.739 [2024-10-13 01:46:39.212461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.739 qpair failed and we were unable to recover it. 00:35:53.739 [2024-10-13 01:46:39.212608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.739 [2024-10-13 01:46:39.212635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.739 qpair failed and we were unable to recover it. 00:35:53.739 [2024-10-13 01:46:39.212750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.739 [2024-10-13 01:46:39.212787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.739 qpair failed and we were unable to recover it. 00:35:53.739 [2024-10-13 01:46:39.212897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.739 [2024-10-13 01:46:39.212924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.739 qpair failed and we were unable to recover it. 00:35:53.739 [2024-10-13 01:46:39.213085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.739 [2024-10-13 01:46:39.213116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.739 qpair failed and we were unable to recover it. 00:35:53.739 [2024-10-13 01:46:39.213239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.739 [2024-10-13 01:46:39.213268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.739 qpair failed and we were unable to recover it. 00:35:53.739 [2024-10-13 01:46:39.213358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.739 [2024-10-13 01:46:39.213387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.739 qpair failed and we were unable to recover it. 00:35:53.739 [2024-10-13 01:46:39.213531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.739 [2024-10-13 01:46:39.213559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.739 qpair failed and we were unable to recover it. 
00:35:53.739 [2024-10-13 01:46:39.213676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.739 [2024-10-13 01:46:39.213702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.739 qpair failed and we were unable to recover it. 00:35:53.739 [2024-10-13 01:46:39.213844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.739 [2024-10-13 01:46:39.213873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.739 qpair failed and we were unable to recover it. 00:35:53.739 [2024-10-13 01:46:39.213998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.739 [2024-10-13 01:46:39.214029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.739 qpair failed and we were unable to recover it. 00:35:53.739 [2024-10-13 01:46:39.214150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.739 [2024-10-13 01:46:39.214184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.739 qpair failed and we were unable to recover it. 00:35:53.739 [2024-10-13 01:46:39.214306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.739 [2024-10-13 01:46:39.214336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.739 qpair failed and we were unable to recover it. 00:35:53.739 [2024-10-13 01:46:39.214433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.739 [2024-10-13 01:46:39.214481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.739 qpair failed and we were unable to recover it. 00:35:53.740 [2024-10-13 01:46:39.214611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.740 [2024-10-13 01:46:39.214637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.740 qpair failed and we were unable to recover it. 00:35:53.740 [2024-10-13 01:46:39.214795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.740 [2024-10-13 01:46:39.214824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.740 qpair failed and we were unable to recover it. 00:35:53.740 [2024-10-13 01:46:39.214949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.740 [2024-10-13 01:46:39.214979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.740 qpair failed and we were unable to recover it. 00:35:53.740 [2024-10-13 01:46:39.215104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.740 [2024-10-13 01:46:39.215133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.740 qpair failed and we were unable to recover it. 
00:35:53.740 [2024-10-13 01:46:39.215223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.740 [2024-10-13 01:46:39.215253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.740 qpair failed and we were unable to recover it. 00:35:53.740 [2024-10-13 01:46:39.215409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.740 [2024-10-13 01:46:39.215440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.740 qpair failed and we were unable to recover it. 00:35:53.740 [2024-10-13 01:46:39.215616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.740 [2024-10-13 01:46:39.215657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.740 qpair failed and we were unable to recover it. 00:35:53.740 [2024-10-13 01:46:39.215835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.740 [2024-10-13 01:46:39.215882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.740 qpair failed and we were unable to recover it. 00:35:53.740 [2024-10-13 01:46:39.215985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.740 [2024-10-13 01:46:39.216053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.740 qpair failed and we were unable to recover it. 00:35:53.740 [2024-10-13 01:46:39.216261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.740 [2024-10-13 01:46:39.216290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.740 qpair failed and we were unable to recover it. 00:35:53.740 [2024-10-13 01:46:39.216404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.740 [2024-10-13 01:46:39.216431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.740 qpair failed and we were unable to recover it. 00:35:53.740 [2024-10-13 01:46:39.216623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.740 [2024-10-13 01:46:39.216654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.740 qpair failed and we were unable to recover it. 00:35:53.740 [2024-10-13 01:46:39.216841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.740 [2024-10-13 01:46:39.216886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.740 qpair failed and we were unable to recover it. 00:35:53.740 [2024-10-13 01:46:39.217025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.740 [2024-10-13 01:46:39.217056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.740 qpair failed and we were unable to recover it. 
00:35:53.740 [2024-10-13 01:46:39.217213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.740 [2024-10-13 01:46:39.217257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.740 qpair failed and we were unable to recover it. 00:35:53.740 [2024-10-13 01:46:39.217369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.740 [2024-10-13 01:46:39.217395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.740 qpair failed and we were unable to recover it. 00:35:53.740 [2024-10-13 01:46:39.217549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.740 [2024-10-13 01:46:39.217595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.740 qpair failed and we were unable to recover it. 00:35:53.740 [2024-10-13 01:46:39.217726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.740 [2024-10-13 01:46:39.217782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.740 qpair failed and we were unable to recover it. 00:35:53.740 [2024-10-13 01:46:39.217912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.740 [2024-10-13 01:46:39.217939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.740 qpair failed and we were unable to recover it. 00:35:53.740 [2024-10-13 01:46:39.218059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.740 [2024-10-13 01:46:39.218086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.740 qpair failed and we were unable to recover it. 00:35:53.740 [2024-10-13 01:46:39.218169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.740 [2024-10-13 01:46:39.218196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.740 qpair failed and we were unable to recover it. 00:35:53.740 [2024-10-13 01:46:39.218291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.740 [2024-10-13 01:46:39.218318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.740 qpair failed and we were unable to recover it. 00:35:53.740 [2024-10-13 01:46:39.218411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.740 [2024-10-13 01:46:39.218438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.740 qpair failed and we were unable to recover it. 00:35:53.740 [2024-10-13 01:46:39.218584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.740 [2024-10-13 01:46:39.218612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.740 qpair failed and we were unable to recover it. 
00:35:53.740 [2024-10-13 01:46:39.218745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.740 [2024-10-13 01:46:39.218801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.740 qpair failed and we were unable to recover it. 00:35:53.740 [2024-10-13 01:46:39.218924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.740 [2024-10-13 01:46:39.218952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.740 qpair failed and we were unable to recover it. 00:35:53.740 [2024-10-13 01:46:39.219074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.740 [2024-10-13 01:46:39.219100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.740 qpair failed and we were unable to recover it. 00:35:53.740 [2024-10-13 01:46:39.219194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.740 [2024-10-13 01:46:39.219221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.740 qpair failed and we were unable to recover it. 00:35:53.740 [2024-10-13 01:46:39.219306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.740 [2024-10-13 01:46:39.219333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.740 qpair failed and we were unable to recover it. 00:35:53.740 [2024-10-13 01:46:39.219478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.740 [2024-10-13 01:46:39.219505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.740 qpair failed and we were unable to recover it. 00:35:53.740 [2024-10-13 01:46:39.219643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.740 [2024-10-13 01:46:39.219671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.740 qpair failed and we were unable to recover it. 00:35:53.740 [2024-10-13 01:46:39.219755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.740 [2024-10-13 01:46:39.219790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.740 qpair failed and we were unable to recover it. 00:35:53.740 [2024-10-13 01:46:39.219939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.740 [2024-10-13 01:46:39.219968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.740 qpair failed and we were unable to recover it. 00:35:53.740 [2024-10-13 01:46:39.220075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.740 [2024-10-13 01:46:39.220104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.740 qpair failed and we were unable to recover it. 
00:35:53.740 [2024-10-13 01:46:39.220249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.741 [2024-10-13 01:46:39.220279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.741 qpair failed and we were unable to recover it. 00:35:53.741 [2024-10-13 01:46:39.220387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.741 [2024-10-13 01:46:39.220415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.741 qpair failed and we were unable to recover it. 00:35:53.741 [2024-10-13 01:46:39.220588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.741 [2024-10-13 01:46:39.220633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.741 qpair failed and we were unable to recover it. 00:35:53.741 [2024-10-13 01:46:39.220756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.741 [2024-10-13 01:46:39.220803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.741 qpair failed and we were unable to recover it. 00:35:53.741 [2024-10-13 01:46:39.220941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.741 [2024-10-13 01:46:39.221039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.741 qpair failed and we were unable to recover it. 00:35:53.741 [2024-10-13 01:46:39.221152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.741 [2024-10-13 01:46:39.221241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.741 qpair failed and we were unable to recover it. 00:35:53.741 [2024-10-13 01:46:39.221370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.741 [2024-10-13 01:46:39.221399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.741 qpair failed and we were unable to recover it. 00:35:53.741 [2024-10-13 01:46:39.221524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.741 [2024-10-13 01:46:39.221552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.741 qpair failed and we were unable to recover it. 00:35:53.741 [2024-10-13 01:46:39.221675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.741 [2024-10-13 01:46:39.221704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.741 qpair failed and we were unable to recover it. 00:35:53.741 [2024-10-13 01:46:39.221802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.741 [2024-10-13 01:46:39.221833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.741 qpair failed and we were unable to recover it. 
00:35:53.741 [2024-10-13 01:46:39.221961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.741 [2024-10-13 01:46:39.221990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.741 qpair failed and we were unable to recover it. 00:35:53.741 [2024-10-13 01:46:39.222143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.741 [2024-10-13 01:46:39.222174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.741 qpair failed and we were unable to recover it. 00:35:53.741 [2024-10-13 01:46:39.222303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.741 [2024-10-13 01:46:39.222331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.741 qpair failed and we were unable to recover it. 00:35:53.741 [2024-10-13 01:46:39.222460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.741 [2024-10-13 01:46:39.222493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.741 qpair failed and we were unable to recover it. 00:35:53.741 [2024-10-13 01:46:39.222618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.741 [2024-10-13 01:46:39.222645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.741 qpair failed and we were unable to recover it. 00:35:53.741 [2024-10-13 01:46:39.222739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.741 [2024-10-13 01:46:39.222774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.741 qpair failed and we were unable to recover it. 00:35:53.741 [2024-10-13 01:46:39.222891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.741 [2024-10-13 01:46:39.222918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.741 qpair failed and we were unable to recover it. 00:35:53.741 [2024-10-13 01:46:39.223005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.741 [2024-10-13 01:46:39.223032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.741 qpair failed and we were unable to recover it. 00:35:53.741 [2024-10-13 01:46:39.223175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.741 [2024-10-13 01:46:39.223201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.741 qpair failed and we were unable to recover it. 00:35:53.741 [2024-10-13 01:46:39.223345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.741 [2024-10-13 01:46:39.223371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.741 qpair failed and we were unable to recover it. 
00:35:53.741 [2024-10-13 01:46:39.223487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.741 [2024-10-13 01:46:39.223518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.741 qpair failed and we were unable to recover it. 00:35:53.741 [2024-10-13 01:46:39.223630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.741 [2024-10-13 01:46:39.223656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.741 qpair failed and we were unable to recover it. 00:35:53.741 [2024-10-13 01:46:39.223758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.741 [2024-10-13 01:46:39.223806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.741 qpair failed and we were unable to recover it. 00:35:53.741 [2024-10-13 01:46:39.223929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.741 [2024-10-13 01:46:39.223957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.741 qpair failed and we were unable to recover it. 00:35:53.741 [2024-10-13 01:46:39.224069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.741 [2024-10-13 01:46:39.224096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.741 qpair failed and we were unable to recover it. 00:35:53.741 [2024-10-13 01:46:39.224221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.741 [2024-10-13 01:46:39.224251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.741 qpair failed and we were unable to recover it. 00:35:53.741 [2024-10-13 01:46:39.224391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.741 [2024-10-13 01:46:39.224420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.741 qpair failed and we were unable to recover it. 00:35:53.741 [2024-10-13 01:46:39.224556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.741 [2024-10-13 01:46:39.224604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.741 qpair failed and we were unable to recover it. 00:35:53.741 [2024-10-13 01:46:39.224715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.741 [2024-10-13 01:46:39.224761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.741 qpair failed and we were unable to recover it. 00:35:53.741 [2024-10-13 01:46:39.224930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.741 [2024-10-13 01:46:39.224984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.741 qpair failed and we were unable to recover it. 
00:35:53.741 [2024-10-13 01:46:39.225101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.741 [2024-10-13 01:46:39.225130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.741 qpair failed and we were unable to recover it. 00:35:53.741 [2024-10-13 01:46:39.225254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.741 [2024-10-13 01:46:39.225282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.741 qpair failed and we were unable to recover it. 00:35:53.741 [2024-10-13 01:46:39.225402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.741 [2024-10-13 01:46:39.225430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.741 qpair failed and we were unable to recover it. 00:35:53.741 [2024-10-13 01:46:39.225583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.741 [2024-10-13 01:46:39.225614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.741 qpair failed and we were unable to recover it. 00:35:53.742 [2024-10-13 01:46:39.225767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.742 [2024-10-13 01:46:39.225797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.742 qpair failed and we were unable to recover it. 00:35:53.742 [2024-10-13 01:46:39.225994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.742 [2024-10-13 01:46:39.226045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.742 qpair failed and we were unable to recover it. 00:35:53.742 [2024-10-13 01:46:39.226148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.742 [2024-10-13 01:46:39.226174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.742 qpair failed and we were unable to recover it. 00:35:53.742 [2024-10-13 01:46:39.226286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.742 [2024-10-13 01:46:39.226312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.742 qpair failed and we were unable to recover it. 00:35:53.742 [2024-10-13 01:46:39.226432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.742 [2024-10-13 01:46:39.226459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.742 qpair failed and we were unable to recover it. 00:35:53.742 [2024-10-13 01:46:39.226589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.742 [2024-10-13 01:46:39.226615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.742 qpair failed and we were unable to recover it. 
00:35:53.742 [2024-10-13 01:46:39.226716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.742 [2024-10-13 01:46:39.226747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.742 qpair failed and we were unable to recover it. 00:35:53.742 [2024-10-13 01:46:39.226902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.742 [2024-10-13 01:46:39.226931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.742 qpair failed and we were unable to recover it. 00:35:53.742 [2024-10-13 01:46:39.227056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.742 [2024-10-13 01:46:39.227086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.742 qpair failed and we were unable to recover it. 00:35:53.742 [2024-10-13 01:46:39.227247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.742 [2024-10-13 01:46:39.227294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.742 qpair failed and we were unable to recover it. 00:35:53.742 [2024-10-13 01:46:39.227391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.742 [2024-10-13 01:46:39.227419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.742 qpair failed and we were unable to recover it. 00:35:53.742 [2024-10-13 01:46:39.227561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.742 [2024-10-13 01:46:39.227607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.742 qpair failed and we were unable to recover it. 00:35:53.742 [2024-10-13 01:46:39.227723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.742 [2024-10-13 01:46:39.227767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.742 qpair failed and we were unable to recover it. 00:35:53.742 [2024-10-13 01:46:39.227899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.742 [2024-10-13 01:46:39.227944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.742 qpair failed and we were unable to recover it. 00:35:53.742 [2024-10-13 01:46:39.228080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.742 [2024-10-13 01:46:39.228125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.742 qpair failed and we were unable to recover it. 00:35:53.742 [2024-10-13 01:46:39.228245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.742 [2024-10-13 01:46:39.228272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.742 qpair failed and we were unable to recover it. 
00:35:53.742 [2024-10-13 01:46:39.228391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.742 [2024-10-13 01:46:39.228417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.742 qpair failed and we were unable to recover it. 00:35:53.742 [2024-10-13 01:46:39.228590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.742 [2024-10-13 01:46:39.228620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.742 qpair failed and we were unable to recover it. 00:35:53.742 [2024-10-13 01:46:39.228726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.742 [2024-10-13 01:46:39.228753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.742 qpair failed and we were unable to recover it. 00:35:53.742 [2024-10-13 01:46:39.228870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.742 [2024-10-13 01:46:39.228897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.742 qpair failed and we were unable to recover it. 00:35:53.742 [2024-10-13 01:46:39.229039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.742 [2024-10-13 01:46:39.229065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.742 qpair failed and we were unable to recover it. 00:35:53.742 [2024-10-13 01:46:39.229152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.742 [2024-10-13 01:46:39.229181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.742 qpair failed and we were unable to recover it. 00:35:53.742 [2024-10-13 01:46:39.229335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.742 [2024-10-13 01:46:39.229361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.742 qpair failed and we were unable to recover it. 00:35:53.742 [2024-10-13 01:46:39.229444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.742 [2024-10-13 01:46:39.229489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.742 qpair failed and we were unable to recover it. 00:35:53.742 [2024-10-13 01:46:39.229607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.742 [2024-10-13 01:46:39.229636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.742 qpair failed and we were unable to recover it. 00:35:53.742 [2024-10-13 01:46:39.229723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.742 [2024-10-13 01:46:39.229752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.742 qpair failed and we were unable to recover it. 
00:35:53.742 [2024-10-13 01:46:39.229880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.742 [2024-10-13 01:46:39.229910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.742 qpair failed and we were unable to recover it. 00:35:53.742 [2024-10-13 01:46:39.230038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.742 [2024-10-13 01:46:39.230068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.742 qpair failed and we were unable to recover it. 00:35:53.742 [2024-10-13 01:46:39.230222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.742 [2024-10-13 01:46:39.230251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.742 qpair failed and we were unable to recover it. 00:35:53.742 [2024-10-13 01:46:39.230403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.742 [2024-10-13 01:46:39.230433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.742 qpair failed and we were unable to recover it. 00:35:53.742 [2024-10-13 01:46:39.230574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.742 [2024-10-13 01:46:39.230620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.742 qpair failed and we were unable to recover it. 00:35:53.742 [2024-10-13 01:46:39.230777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.742 [2024-10-13 01:46:39.230807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.742 qpair failed and we were unable to recover it. 00:35:53.742 [2024-10-13 01:46:39.230906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.742 [2024-10-13 01:46:39.230937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.742 qpair failed and we were unable to recover it. 00:35:53.742 [2024-10-13 01:46:39.231086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.742 [2024-10-13 01:46:39.231115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.742 qpair failed and we were unable to recover it. 00:35:53.742 [2024-10-13 01:46:39.231268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.742 [2024-10-13 01:46:39.231297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.742 qpair failed and we were unable to recover it. 00:35:53.742 [2024-10-13 01:46:39.231424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.742 [2024-10-13 01:46:39.231453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.742 qpair failed and we were unable to recover it. 
00:35:53.742 [2024-10-13 01:46:39.231605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.742 [2024-10-13 01:46:39.231633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.742 qpair failed and we were unable to recover it. 00:35:53.742 [2024-10-13 01:46:39.231793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.742 [2024-10-13 01:46:39.231837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.742 qpair failed and we were unable to recover it. 00:35:53.742 [2024-10-13 01:46:39.231976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.742 [2024-10-13 01:46:39.232007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.742 qpair failed and we were unable to recover it. 00:35:53.742 [2024-10-13 01:46:39.232102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.743 [2024-10-13 01:46:39.232132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.743 qpair failed and we were unable to recover it. 00:35:53.743 [2024-10-13 01:46:39.232255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.743 [2024-10-13 01:46:39.232285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.743 qpair failed and we were unable to recover it. 00:35:53.743 [2024-10-13 01:46:39.232381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.743 [2024-10-13 01:46:39.232410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.743 qpair failed and we were unable to recover it. 00:35:53.743 [2024-10-13 01:46:39.232564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.743 [2024-10-13 01:46:39.232591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.743 qpair failed and we were unable to recover it. 00:35:53.743 [2024-10-13 01:46:39.232705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.743 [2024-10-13 01:46:39.232750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.743 qpair failed and we were unable to recover it. 00:35:53.743 [2024-10-13 01:46:39.232879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.743 [2024-10-13 01:46:39.232908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.743 qpair failed and we were unable to recover it. 00:35:53.743 [2024-10-13 01:46:39.233009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.743 [2024-10-13 01:46:39.233039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.743 qpair failed and we were unable to recover it. 
00:35:53.743 [2024-10-13 01:46:39.233165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.743 [2024-10-13 01:46:39.233195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.743 qpair failed and we were unable to recover it. 00:35:53.743 [2024-10-13 01:46:39.233318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.743 [2024-10-13 01:46:39.233347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.743 qpair failed and we were unable to recover it. 00:35:53.743 [2024-10-13 01:46:39.233513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.743 [2024-10-13 01:46:39.233540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.743 qpair failed and we were unable to recover it. 00:35:53.743 [2024-10-13 01:46:39.233625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.743 [2024-10-13 01:46:39.233652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.743 qpair failed and we were unable to recover it. 00:35:53.743 [2024-10-13 01:46:39.233782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.743 [2024-10-13 01:46:39.233822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.743 qpair failed and we were unable to recover it. 00:35:53.743 [2024-10-13 01:46:39.233909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.743 [2024-10-13 01:46:39.233955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.743 qpair failed and we were unable to recover it. 00:35:53.743 [2024-10-13 01:46:39.234094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.743 [2024-10-13 01:46:39.234124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.743 qpair failed and we were unable to recover it. 00:35:53.743 [2024-10-13 01:46:39.234246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.743 [2024-10-13 01:46:39.234275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:53.743 qpair failed and we were unable to recover it. 00:35:53.743 [2024-10-13 01:46:39.234393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.743 [2024-10-13 01:46:39.234437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.743 qpair failed and we were unable to recover it. 00:35:53.743 [2024-10-13 01:46:39.234555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.743 [2024-10-13 01:46:39.234585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.743 qpair failed and we were unable to recover it. 
00:35:53.743 [2024-10-13 01:46:39.234786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.743 [2024-10-13 01:46:39.234833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.743 qpair failed and we were unable to recover it. 00:35:53.743 [2024-10-13 01:46:39.234993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.743 [2024-10-13 01:46:39.235038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.743 qpair failed and we were unable to recover it. 00:35:53.743 [2024-10-13 01:46:39.235165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.743 [2024-10-13 01:46:39.235210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.743 qpair failed and we were unable to recover it. 00:35:53.743 [2024-10-13 01:46:39.235298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.743 [2024-10-13 01:46:39.235326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.743 qpair failed and we were unable to recover it. 00:35:53.743 [2024-10-13 01:46:39.235477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.743 [2024-10-13 01:46:39.235504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.743 qpair failed and we were unable to recover it. 00:35:53.743 [2024-10-13 01:46:39.235656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.743 [2024-10-13 01:46:39.235682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.743 qpair failed and we were unable to recover it. 00:35:53.743 [2024-10-13 01:46:39.235781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.743 [2024-10-13 01:46:39.235810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.743 qpair failed and we were unable to recover it. 00:35:53.743 [2024-10-13 01:46:39.235939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.743 [2024-10-13 01:46:39.235995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.743 qpair failed and we were unable to recover it. 00:35:53.743 [2024-10-13 01:46:39.236200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.743 [2024-10-13 01:46:39.236230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.743 qpair failed and we were unable to recover it. 00:35:53.743 [2024-10-13 01:46:39.236402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.743 [2024-10-13 01:46:39.236430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.743 qpair failed and we were unable to recover it. 
00:35:53.743 [2024-10-13 01:46:39.236560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.743 [2024-10-13 01:46:39.236592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.743 qpair failed and we were unable to recover it. 00:35:53.743 [2024-10-13 01:46:39.236760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.743 [2024-10-13 01:46:39.236805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.743 qpair failed and we were unable to recover it. 00:35:53.743 [2024-10-13 01:46:39.236937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.743 [2024-10-13 01:46:39.237028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.743 qpair failed and we were unable to recover it. 00:35:53.743 [2024-10-13 01:46:39.237153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.743 [2024-10-13 01:46:39.237204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.743 qpair failed and we were unable to recover it. 00:35:53.743 [2024-10-13 01:46:39.237346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.743 [2024-10-13 01:46:39.237373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.743 qpair failed and we were unable to recover it. 00:35:53.743 [2024-10-13 01:46:39.237455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.743 [2024-10-13 01:46:39.237490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.743 qpair failed and we were unable to recover it. 00:35:53.743 [2024-10-13 01:46:39.237636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.743 [2024-10-13 01:46:39.237666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.743 qpair failed and we were unable to recover it. 00:35:53.743 [2024-10-13 01:46:39.237839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.743 [2024-10-13 01:46:39.237883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.743 qpair failed and we were unable to recover it. 00:35:53.743 [2024-10-13 01:46:39.238055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.743 [2024-10-13 01:46:39.238085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.743 qpair failed and we were unable to recover it. 00:35:53.743 [2024-10-13 01:46:39.238239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.743 [2024-10-13 01:46:39.238266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.743 qpair failed and we were unable to recover it. 
00:35:53.743 [2024-10-13 01:46:39.238418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.743 [2024-10-13 01:46:39.238457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.743 qpair failed and we were unable to recover it. 00:35:53.744 [2024-10-13 01:46:39.238644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.744 [2024-10-13 01:46:39.238676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.744 qpair failed and we were unable to recover it. 00:35:53.744 [2024-10-13 01:46:39.238834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.744 [2024-10-13 01:46:39.238865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.744 qpair failed and we were unable to recover it. 00:35:53.744 [2024-10-13 01:46:39.238992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.744 [2024-10-13 01:46:39.239021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.744 qpair failed and we were unable to recover it. 00:35:53.744 [2024-10-13 01:46:39.239226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.744 [2024-10-13 01:46:39.239271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.744 qpair failed and we were unable to recover it. 00:35:53.744 [2024-10-13 01:46:39.239365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.744 [2024-10-13 01:46:39.239408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.744 qpair failed and we were unable to recover it. 00:35:53.744 [2024-10-13 01:46:39.239519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.744 [2024-10-13 01:46:39.239546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.744 qpair failed and we were unable to recover it. 00:35:53.744 [2024-10-13 01:46:39.239665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.744 [2024-10-13 01:46:39.239691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.744 qpair failed and we were unable to recover it. 00:35:53.744 [2024-10-13 01:46:39.239817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.744 [2024-10-13 01:46:39.239846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.744 qpair failed and we were unable to recover it. 00:35:53.744 [2024-10-13 01:46:39.239971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.744 [2024-10-13 01:46:39.240000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.744 qpair failed and we were unable to recover it. 
00:35:53.744 [2024-10-13 01:46:39.240109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.744 [2024-10-13 01:46:39.240135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.744 qpair failed and we were unable to recover it. 00:35:53.744 [2024-10-13 01:46:39.240229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.744 [2024-10-13 01:46:39.240258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.744 qpair failed and we were unable to recover it. 00:35:53.744 [2024-10-13 01:46:39.240367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.744 [2024-10-13 01:46:39.240393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.744 qpair failed and we were unable to recover it. 00:35:53.744 [2024-10-13 01:46:39.240486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.744 [2024-10-13 01:46:39.240513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.744 qpair failed and we were unable to recover it. 00:35:53.744 [2024-10-13 01:46:39.240621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.744 [2024-10-13 01:46:39.240655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.744 qpair failed and we were unable to recover it. 00:35:53.744 [2024-10-13 01:46:39.240738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.744 [2024-10-13 01:46:39.240764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.744 qpair failed and we were unable to recover it. 00:35:53.744 [2024-10-13 01:46:39.240880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.744 [2024-10-13 01:46:39.240906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.744 qpair failed and we were unable to recover it. 00:35:53.744 [2024-10-13 01:46:39.241048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.744 [2024-10-13 01:46:39.241077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.744 qpair failed and we were unable to recover it. 00:35:53.744 [2024-10-13 01:46:39.241197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.744 [2024-10-13 01:46:39.241242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.744 qpair failed and we were unable to recover it. 00:35:53.744 [2024-10-13 01:46:39.241386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.744 [2024-10-13 01:46:39.241431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.744 qpair failed and we were unable to recover it. 
00:35:53.744 [2024-10-13 01:46:39.241595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.744 [2024-10-13 01:46:39.241624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.744 qpair failed and we were unable to recover it. 00:35:53.744 [2024-10-13 01:46:39.241766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.744 [2024-10-13 01:46:39.241793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.744 qpair failed and we were unable to recover it. 00:35:53.744 [2024-10-13 01:46:39.241899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.744 [2024-10-13 01:46:39.241941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.744 qpair failed and we were unable to recover it. 00:35:53.744 [2024-10-13 01:46:39.242031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.744 [2024-10-13 01:46:39.242060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.744 qpair failed and we were unable to recover it. 00:35:53.744 [2024-10-13 01:46:39.242186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.744 [2024-10-13 01:46:39.242216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.744 qpair failed and we were unable to recover it. 00:35:53.744 [2024-10-13 01:46:39.242348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.744 [2024-10-13 01:46:39.242380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.744 qpair failed and we were unable to recover it. 00:35:53.744 [2024-10-13 01:46:39.242519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.744 [2024-10-13 01:46:39.242545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.744 qpair failed and we were unable to recover it. 00:35:53.744 [2024-10-13 01:46:39.242689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.744 [2024-10-13 01:46:39.242715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.744 qpair failed and we were unable to recover it. 00:35:53.744 [2024-10-13 01:46:39.242805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.744 [2024-10-13 01:46:39.242849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.744 qpair failed and we were unable to recover it. 00:35:53.744 [2024-10-13 01:46:39.242975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.744 [2024-10-13 01:46:39.243004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.744 qpair failed and we were unable to recover it. 
00:35:53.744 [2024-10-13 01:46:39.243100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.744 [2024-10-13 01:46:39.243130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.744 qpair failed and we were unable to recover it. 00:35:53.744 [2024-10-13 01:46:39.243255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.744 [2024-10-13 01:46:39.243284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.744 qpair failed and we were unable to recover it. 00:35:53.744 [2024-10-13 01:46:39.243402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.744 [2024-10-13 01:46:39.243444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.744 qpair failed and we were unable to recover it. 00:35:53.744 [2024-10-13 01:46:39.243598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.744 [2024-10-13 01:46:39.243625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.744 qpair failed and we were unable to recover it. 00:35:53.744 [2024-10-13 01:46:39.243712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.744 [2024-10-13 01:46:39.243739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.744 qpair failed and we were unable to recover it. 00:35:53.744 [2024-10-13 01:46:39.243907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.744 [2024-10-13 01:46:39.243936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.744 qpair failed and we were unable to recover it. 00:35:53.744 [2024-10-13 01:46:39.244087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.744 [2024-10-13 01:46:39.244116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.744 qpair failed and we were unable to recover it. 00:35:53.744 [2024-10-13 01:46:39.244253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.744 [2024-10-13 01:46:39.244282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.745 qpair failed and we were unable to recover it. 00:35:53.745 [2024-10-13 01:46:39.244407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.745 [2024-10-13 01:46:39.244436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.745 qpair failed and we were unable to recover it. 00:35:53.745 [2024-10-13 01:46:39.244601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.745 [2024-10-13 01:46:39.244641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.745 qpair failed and we were unable to recover it. 
00:35:53.745 [2024-10-13 01:46:39.244800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.745 [2024-10-13 01:46:39.244829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.745 qpair failed and we were unable to recover it. 00:35:53.745 [2024-10-13 01:46:39.244948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.745 [2024-10-13 01:46:39.244997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.745 qpair failed and we were unable to recover it. 00:35:53.745 [2024-10-13 01:46:39.245152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.745 [2024-10-13 01:46:39.245194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.745 qpair failed and we were unable to recover it. 00:35:53.745 [2024-10-13 01:46:39.245403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.745 [2024-10-13 01:46:39.245433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.745 qpair failed and we were unable to recover it. 00:35:53.745 [2024-10-13 01:46:39.245544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.745 [2024-10-13 01:46:39.245572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.745 qpair failed and we were unable to recover it. 00:35:53.745 [2024-10-13 01:46:39.245682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.745 [2024-10-13 01:46:39.245709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.745 qpair failed and we were unable to recover it. 00:35:53.745 [2024-10-13 01:46:39.245895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.745 [2024-10-13 01:46:39.245925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.745 qpair failed and we were unable to recover it. 00:35:53.745 [2024-10-13 01:46:39.246017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.745 [2024-10-13 01:46:39.246046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.745 qpair failed and we were unable to recover it. 00:35:53.745 [2024-10-13 01:46:39.246175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.745 [2024-10-13 01:46:39.246207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.745 qpair failed and we were unable to recover it. 00:35:53.745 [2024-10-13 01:46:39.246360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.745 [2024-10-13 01:46:39.246388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.745 qpair failed and we were unable to recover it. 
00:35:53.745 [2024-10-13 01:46:39.246558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.745 [2024-10-13 01:46:39.246585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.745 qpair failed and we were unable to recover it. 00:35:53.745 [2024-10-13 01:46:39.246702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.745 [2024-10-13 01:46:39.246727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.745 qpair failed and we were unable to recover it. 00:35:53.745 [2024-10-13 01:46:39.246845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.745 [2024-10-13 01:46:39.246871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.745 qpair failed and we were unable to recover it. 00:35:53.745 [2024-10-13 01:46:39.246962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.745 [2024-10-13 01:46:39.246988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.745 qpair failed and we were unable to recover it. 00:35:53.745 [2024-10-13 01:46:39.247129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.745 [2024-10-13 01:46:39.247159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.745 qpair failed and we were unable to recover it. 00:35:53.745 [2024-10-13 01:46:39.247300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.745 [2024-10-13 01:46:39.247347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.745 qpair failed and we were unable to recover it. 00:35:53.745 [2024-10-13 01:46:39.247507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.745 [2024-10-13 01:46:39.247550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.745 qpair failed and we were unable to recover it. 00:35:53.745 [2024-10-13 01:46:39.247667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.745 [2024-10-13 01:46:39.247693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.745 qpair failed and we were unable to recover it. 00:35:53.745 [2024-10-13 01:46:39.247831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.745 [2024-10-13 01:46:39.247858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.745 qpair failed and we were unable to recover it. 00:35:53.745 [2024-10-13 01:46:39.247967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.745 [2024-10-13 01:46:39.248009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.745 qpair failed and we were unable to recover it. 
00:35:53.745 [2024-10-13 01:46:39.248134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.745 [2024-10-13 01:46:39.248176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.745 qpair failed and we were unable to recover it. 00:35:53.745 [2024-10-13 01:46:39.248340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.745 [2024-10-13 01:46:39.248381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.745 qpair failed and we were unable to recover it. 00:35:53.745 [2024-10-13 01:46:39.248529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.745 [2024-10-13 01:46:39.248556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.745 qpair failed and we were unable to recover it. 00:35:53.745 [2024-10-13 01:46:39.248695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.745 [2024-10-13 01:46:39.248721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.745 qpair failed and we were unable to recover it. 00:35:53.745 [2024-10-13 01:46:39.248854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.745 [2024-10-13 01:46:39.248883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.745 qpair failed and we were unable to recover it. 00:35:53.745 [2024-10-13 01:46:39.249030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.745 [2024-10-13 01:46:39.249059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.745 qpair failed and we were unable to recover it. 00:35:53.745 [2024-10-13 01:46:39.249188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.745 [2024-10-13 01:46:39.249219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.745 qpair failed and we were unable to recover it. 00:35:53.745 [2024-10-13 01:46:39.249359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.745 [2024-10-13 01:46:39.249385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.745 qpair failed and we were unable to recover it. 00:35:53.745 [2024-10-13 01:46:39.249534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.745 [2024-10-13 01:46:39.249566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.745 qpair failed and we were unable to recover it. 00:35:53.745 [2024-10-13 01:46:39.249682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.745 [2024-10-13 01:46:39.249708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.745 qpair failed and we were unable to recover it. 
00:35:53.745 [2024-10-13 01:46:39.249843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.745 [2024-10-13 01:46:39.249872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.745 qpair failed and we were unable to recover it. 00:35:53.745 [2024-10-13 01:46:39.250000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.745 [2024-10-13 01:46:39.250029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.745 qpair failed and we were unable to recover it. 00:35:53.745 [2024-10-13 01:46:39.250152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.745 [2024-10-13 01:46:39.250181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.745 qpair failed and we were unable to recover it. 00:35:53.745 [2024-10-13 01:46:39.250358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.745 [2024-10-13 01:46:39.250399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.745 qpair failed and we were unable to recover it. 00:35:53.745 [2024-10-13 01:46:39.250528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.745 [2024-10-13 01:46:39.250558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.745 qpair failed and we were unable to recover it. 00:35:53.745 [2024-10-13 01:46:39.250693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.745 [2024-10-13 01:46:39.250738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.745 qpair failed and we were unable to recover it. 00:35:53.746 [2024-10-13 01:46:39.250944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.746 [2024-10-13 01:46:39.250988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.746 qpair failed and we were unable to recover it. 00:35:53.746 [2024-10-13 01:46:39.251157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.746 [2024-10-13 01:46:39.251199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.746 qpair failed and we were unable to recover it. 00:35:53.746 [2024-10-13 01:46:39.251293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.746 [2024-10-13 01:46:39.251320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.746 qpair failed and we were unable to recover it. 00:35:53.746 [2024-10-13 01:46:39.251420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.746 [2024-10-13 01:46:39.251447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.746 qpair failed and we were unable to recover it. 
00:35:53.746 [2024-10-13 01:46:39.251541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.746 [2024-10-13 01:46:39.251568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.746 qpair failed and we were unable to recover it. 00:35:53.746 [2024-10-13 01:46:39.251653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.746 [2024-10-13 01:46:39.251680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.746 qpair failed and we were unable to recover it. 00:35:53.746 [2024-10-13 01:46:39.251801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.746 [2024-10-13 01:46:39.251830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.746 qpair failed and we were unable to recover it. 00:35:53.746 [2024-10-13 01:46:39.251927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.746 [2024-10-13 01:46:39.251956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.746 qpair failed and we were unable to recover it. 00:35:53.746 [2024-10-13 01:46:39.252105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.746 [2024-10-13 01:46:39.252135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.746 qpair failed and we were unable to recover it. 00:35:53.746 [2024-10-13 01:46:39.252268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.746 [2024-10-13 01:46:39.252294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.746 qpair failed and we were unable to recover it. 00:35:53.746 [2024-10-13 01:46:39.252375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.746 [2024-10-13 01:46:39.252401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.746 qpair failed and we were unable to recover it. 00:35:53.746 [2024-10-13 01:46:39.252522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.746 [2024-10-13 01:46:39.252549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.746 qpair failed and we were unable to recover it. 00:35:53.746 [2024-10-13 01:46:39.252676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.746 [2024-10-13 01:46:39.252705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.746 qpair failed and we were unable to recover it. 00:35:53.746 [2024-10-13 01:46:39.252838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.746 [2024-10-13 01:46:39.252867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.746 qpair failed and we were unable to recover it. 
00:35:53.746 [2024-10-13 01:46:39.252994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.746 [2024-10-13 01:46:39.253023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.746 qpair failed and we were unable to recover it. 00:35:53.746 [2024-10-13 01:46:39.253207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.746 [2024-10-13 01:46:39.253256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.746 qpair failed and we were unable to recover it. 00:35:53.746 [2024-10-13 01:46:39.253378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.746 [2024-10-13 01:46:39.253406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.746 qpair failed and we were unable to recover it. 00:35:53.746 [2024-10-13 01:46:39.253552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.746 [2024-10-13 01:46:39.253579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.746 qpair failed and we were unable to recover it. 00:35:53.746 [2024-10-13 01:46:39.253705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.746 [2024-10-13 01:46:39.253750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.746 qpair failed and we were unable to recover it. 00:35:53.746 [2024-10-13 01:46:39.253864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.746 [2024-10-13 01:46:39.253913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.746 qpair failed and we were unable to recover it. 00:35:53.746 [2024-10-13 01:46:39.254063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.746 [2024-10-13 01:46:39.254089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.746 qpair failed and we were unable to recover it. 00:35:53.746 [2024-10-13 01:46:39.254310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.746 [2024-10-13 01:46:39.254376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.746 qpair failed and we were unable to recover it. 00:35:53.746 [2024-10-13 01:46:39.254543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.746 [2024-10-13 01:46:39.254569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.746 qpair failed and we were unable to recover it. 00:35:53.746 [2024-10-13 01:46:39.254656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.746 [2024-10-13 01:46:39.254682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.746 qpair failed and we were unable to recover it. 
00:35:53.746 [2024-10-13 01:46:39.254813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.746 [2024-10-13 01:46:39.254842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.746 qpair failed and we were unable to recover it. 00:35:53.746 [2024-10-13 01:46:39.254999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.746 [2024-10-13 01:46:39.255027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.746 qpair failed and we were unable to recover it. 00:35:53.746 [2024-10-13 01:46:39.255228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.746 [2024-10-13 01:46:39.255286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:53.746 qpair failed and we were unable to recover it. 00:35:53.746 [2024-10-13 01:46:39.255395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.746 [2024-10-13 01:46:39.255422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.746 qpair failed and we were unable to recover it. 00:35:53.746 [2024-10-13 01:46:39.255548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.746 [2024-10-13 01:46:39.255575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.746 qpair failed and we were unable to recover it. 00:35:53.746 [2024-10-13 01:46:39.255679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.746 [2024-10-13 01:46:39.255709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.746 qpair failed and we were unable to recover it. 00:35:53.746 [2024-10-13 01:46:39.255901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.746 [2024-10-13 01:46:39.255967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.746 qpair failed and we were unable to recover it. 00:35:53.746 [2024-10-13 01:46:39.256053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.746 [2024-10-13 01:46:39.256080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.746 qpair failed and we were unable to recover it. 00:35:53.746 [2024-10-13 01:46:39.256168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.746 [2024-10-13 01:46:39.256195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.746 qpair failed and we were unable to recover it. 00:35:53.746 [2024-10-13 01:46:39.256341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.746 [2024-10-13 01:46:39.256368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.746 qpair failed and we were unable to recover it. 
00:35:53.746 [2024-10-13 01:46:39.256460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.746 [2024-10-13 01:46:39.256495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.746 qpair failed and we were unable to recover it. 00:35:53.746 [2024-10-13 01:46:39.256622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.746 [2024-10-13 01:46:39.256648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.746 qpair failed and we were unable to recover it. 00:35:53.746 [2024-10-13 01:46:39.256841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.746 [2024-10-13 01:46:39.256868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.746 qpair failed and we were unable to recover it. 00:35:53.746 [2024-10-13 01:46:39.256954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.746 [2024-10-13 01:46:39.256982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.746 qpair failed and we were unable to recover it. 00:35:53.746 [2024-10-13 01:46:39.257074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.746 [2024-10-13 01:46:39.257101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.746 qpair failed and we were unable to recover it. 00:35:53.746 [2024-10-13 01:46:39.257217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.746 [2024-10-13 01:46:39.257248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.746 qpair failed and we were unable to recover it. 00:35:53.746 [2024-10-13 01:46:39.257334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.747 [2024-10-13 01:46:39.257361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.747 qpair failed and we were unable to recover it. 00:35:53.747 [2024-10-13 01:46:39.257456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.747 [2024-10-13 01:46:39.257491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.747 qpair failed and we were unable to recover it. 00:35:53.747 [2024-10-13 01:46:39.257615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.747 [2024-10-13 01:46:39.257641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.747 qpair failed and we were unable to recover it. 00:35:53.747 [2024-10-13 01:46:39.257759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.747 [2024-10-13 01:46:39.257786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:53.747 qpair failed and we were unable to recover it. 
00:35:53.747 [2024-10-13 01:46:39.257951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.747 [2024-10-13 01:46:39.257990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.747 qpair failed and we were unable to recover it. 00:35:53.747 [2024-10-13 01:46:39.258111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.747 [2024-10-13 01:46:39.258140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.747 qpair failed and we were unable to recover it. 00:35:53.747 [2024-10-13 01:46:39.258261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.747 [2024-10-13 01:46:39.258289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.747 qpair failed and we were unable to recover it. 00:35:53.747 [2024-10-13 01:46:39.258382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.747 [2024-10-13 01:46:39.258408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.747 qpair failed and we were unable to recover it. 00:35:53.747 [2024-10-13 01:46:39.258501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:53.747 [2024-10-13 01:46:39.258528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:53.747 qpair failed and we were unable to recover it. 00:35:54.023 [2024-10-13 01:46:39.258651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.023 [2024-10-13 01:46:39.258677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.023 qpair failed and we were unable to recover it. 00:35:54.023 [2024-10-13 01:46:39.258813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.023 [2024-10-13 01:46:39.258842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.023 qpair failed and we were unable to recover it. 00:35:54.023 [2024-10-13 01:46:39.258970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.023 [2024-10-13 01:46:39.258999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.023 qpair failed and we were unable to recover it. 00:35:54.023 [2024-10-13 01:46:39.259124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.023 [2024-10-13 01:46:39.259154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.023 qpair failed and we were unable to recover it. 00:35:54.023 [2024-10-13 01:46:39.259252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.023 [2024-10-13 01:46:39.259283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.023 qpair failed and we were unable to recover it. 
00:35:54.023 [2024-10-13 01:46:39.259399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.023 [2024-10-13 01:46:39.259426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.023 qpair failed and we were unable to recover it. 00:35:54.023 [2024-10-13 01:46:39.259560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.023 [2024-10-13 01:46:39.259587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.023 qpair failed and we were unable to recover it. 00:35:54.023 [2024-10-13 01:46:39.259701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.023 [2024-10-13 01:46:39.259728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.023 qpair failed and we were unable to recover it. 00:35:54.023 [2024-10-13 01:46:39.259829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.023 [2024-10-13 01:46:39.259860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.023 qpair failed and we were unable to recover it. 00:35:54.023 [2024-10-13 01:46:39.260004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.023 [2024-10-13 01:46:39.260033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.023 qpair failed and we were unable to recover it. 00:35:54.023 [2024-10-13 01:46:39.260118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.023 [2024-10-13 01:46:39.260152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.023 qpair failed and we were unable to recover it. 00:35:54.023 [2024-10-13 01:46:39.260271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.023 [2024-10-13 01:46:39.260300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.023 qpair failed and we were unable to recover it. 00:35:54.023 [2024-10-13 01:46:39.260384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.023 [2024-10-13 01:46:39.260414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.023 qpair failed and we were unable to recover it. 00:35:54.023 [2024-10-13 01:46:39.260558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.260586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.260696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.260722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 
00:35:54.024 [2024-10-13 01:46:39.260849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.260878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.261014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.261059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.261186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.261215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.261335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.261363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.261523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.261550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.261643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.261669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.261772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.261801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.261921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.261950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.262066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.262095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.262221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.262251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 
00:35:54.024 [2024-10-13 01:46:39.262352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.262382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.262478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.262522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.262611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.262639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.262719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.262746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.262856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.262883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.262987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.263017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.263124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.263150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.263315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.263357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.263514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.263548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.263659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.263686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 
00:35:54.024 [2024-10-13 01:46:39.263796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.263823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.263952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.263981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.264094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.264138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.264272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.264301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.264395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.264424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.264569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.264596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.264719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.264745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.264857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.264886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.264990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.265017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.265153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.265182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 
00:35:54.024 [2024-10-13 01:46:39.265320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.265350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.265492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.265525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.265636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.265661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.265840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.265869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.266030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.266060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.266210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.266239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.266360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.266390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.266511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.266538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.266647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.266673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.266749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.266798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 
00:35:54.024 [2024-10-13 01:46:39.266905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.266931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.267051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.267095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.267233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.267265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.267386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.267427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.267643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.267673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.267815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.267860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.268023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.268067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.268152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.268180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.268298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.268336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.268487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.268526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 
00:35:54.024 [2024-10-13 01:46:39.268628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.268659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.268769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.268803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.268943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.268972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.269094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.269137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.269275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.269304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.269403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.269430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.269579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.269618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.269741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.269788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.269978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.270008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.270139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.270170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 
00:35:54.024 [2024-10-13 01:46:39.270293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.270323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.270484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.270525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.270613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.270641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.270730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.270768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.270900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.270930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.271162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.271191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.024 [2024-10-13 01:46:39.271323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.024 [2024-10-13 01:46:39.271352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.024 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.271438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.271468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.271625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.271666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.271825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.271872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 
00:35:54.025 [2024-10-13 01:46:39.271978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.272008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.272185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.272229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.272308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.272335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.272438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.272494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.272624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.272652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.272769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.272799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.272938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.272968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.273162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.273193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.273323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.273352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.273458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.273494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 
00:35:54.025 [2024-10-13 01:46:39.273604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.273631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.273758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.273789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.273911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.273940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.274029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.274058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.274186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.274215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.274345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.274374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.274513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.274541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.274660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.274687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.274810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.274839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.274933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.274967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 
00:35:54.025 [2024-10-13 01:46:39.275090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.275119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.275238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.275267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.275393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.275422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.275572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.275599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.275701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.275741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.275938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.275983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.276089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.276120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.276250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.276280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.276385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.276411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.276518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.276545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 
00:35:54.025 [2024-10-13 01:46:39.276666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.276693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.276831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.276858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.276991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.277021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.277159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.277189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.277386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.277416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.277537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.277564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.277706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.277732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.277858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.277884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.278014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.278043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.278164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.278192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 
00:35:54.025 [2024-10-13 01:46:39.278351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.278380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.278510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.278537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.278636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.278663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.278825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.278853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.279002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.279030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.279123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.279154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.279328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.279362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.279490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.279538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.279651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.279677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.279770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.279809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 
00:35:54.025 [2024-10-13 01:46:39.279955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.280000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.280135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.280167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.280326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.280355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.280531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.280559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.280674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.280702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.280818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.280845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.280994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.281036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.281177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.281207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.281345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.281376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.281516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.281543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 
00:35:54.025 [2024-10-13 01:46:39.281669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.281695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.281793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.281819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.281937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.281966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.282116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.282145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.282266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.282296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.282428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.282477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.282597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.282623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.282714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.282741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.282908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.282938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 00:35:54.025 [2024-10-13 01:46:39.283088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.283117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.025 qpair failed and we were unable to recover it. 
00:35:54.025 [2024-10-13 01:46:39.283215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.025 [2024-10-13 01:46:39.283246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.283398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.283428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.283576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.283602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.283695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.283725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.283838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.283868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.283983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.284009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.284201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.284230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.284320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.284363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.284484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.284522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.284635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.284661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 
00:35:54.026 [2024-10-13 01:46:39.284771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.284813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.284903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.284932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.285052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.285083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.285211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.285240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.285353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.285393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.285490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.285524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.285651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.285682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.285797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.285825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.285993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.286038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.286198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.286245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 
00:35:54.026 [2024-10-13 01:46:39.286329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.286357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.286446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.286480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.286623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.286649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.286821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.286850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.287026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.287078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.287208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.287237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.287393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.287420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.287519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.287546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.287634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.287659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.287767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.287796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 
00:35:54.026 [2024-10-13 01:46:39.287956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.288012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.288143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.288172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.288343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.288390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.288541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.288569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.288699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.288744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.288991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.289043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.289258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.289310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.289450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.289483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.289641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.289669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.289751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.289790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 
00:35:54.026 [2024-10-13 01:46:39.289953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.289997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.290141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.290211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.290350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.290376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.290491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.290523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.290616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.290642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.290775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.290805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.290927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.290956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.291052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.291081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.291204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.291233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.291323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.291352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 
00:35:54.026 [2024-10-13 01:46:39.291497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.291529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.291631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.291661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.291802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.291829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.291968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.292014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.292098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.292125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.292241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.292268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.292381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.292408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.292540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.292585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.292706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.292735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.292823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.292849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 
00:35:54.026 [2024-10-13 01:46:39.292937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.292965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.293061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.293088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.293174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.293200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.293311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.293337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.293450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.293486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.026 qpair failed and we were unable to recover it. 00:35:54.026 [2024-10-13 01:46:39.293580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.026 [2024-10-13 01:46:39.293607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.293741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.293770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.293874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.293905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.294038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.294069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.294246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.294277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 
00:35:54.027 [2024-10-13 01:46:39.294409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.294436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.294573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.294602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.294684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.294711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.294827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.294873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.295012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.295056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.295179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.295207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.295304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.295332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.295482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.295521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.295634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.295661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.295793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.295838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 
00:35:54.027 [2024-10-13 01:46:39.295929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.295957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.296071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.296098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.296238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.296265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.296379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.296407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.296592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.296632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.296754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.296783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.296897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.296924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.297090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.297143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.297244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.297273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.297401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.297430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 
00:35:54.027 [2024-10-13 01:46:39.297606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.297651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.297792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.297819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.297958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.297985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.298098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.298125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.298265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.298292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.298422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.298461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.298588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.298615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.298731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.298774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.298870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.298897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.299013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.299039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 
00:35:54.027 [2024-10-13 01:46:39.299153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.299181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.299269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.299298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.299456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.299511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.299665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.299696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.299832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.299863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.299996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.300027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.300187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.300217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.300348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.300379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.300493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.300531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.300695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.300740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 
00:35:54.027 [2024-10-13 01:46:39.300826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.300853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.300993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.301047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.301211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.301257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.301401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.301429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.301578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.301619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.301716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.301759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.301958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.301987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.302100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.302165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.302293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.302335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.302424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.302451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 
00:35:54.027 [2024-10-13 01:46:39.302607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.302633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.302727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.302753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.302911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.302939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.303050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.303076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.303183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.303218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.303323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.303351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.303440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.303467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.303569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.303595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.303726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.303755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.303877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.303905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 
00:35:54.027 [2024-10-13 01:46:39.304053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.304082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.304208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.304237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.304371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.304416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.304554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.304586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.304735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.304805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.304947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.304993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.305132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.305177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.305310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.305337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.305489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.305517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.305663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.305707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 
00:35:54.027 [2024-10-13 01:46:39.305799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.027 [2024-10-13 01:46:39.305826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.027 qpair failed and we were unable to recover it. 00:35:54.027 [2024-10-13 01:46:39.305963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.306007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.306095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.306122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.306245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.306272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.306386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.306412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.306546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.306593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.306719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.306749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.306879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.306906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.307049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.307076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.307192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.307219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 
00:35:54.028 [2024-10-13 01:46:39.307324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.307350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.307475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.307503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.307601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.307631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.307769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.307810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.307937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.307965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.308097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.308124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.308266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.308293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.308409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.308439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.308597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.308626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.308750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.308779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 
00:35:54.028 [2024-10-13 01:46:39.308905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.308934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.309041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.309070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.309205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.309249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.309382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.309410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.309579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.309624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.309736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.309783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.309888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.309917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.310079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.310106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.310217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.310243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.310354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.310381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 
00:35:54.028 [2024-10-13 01:46:39.310454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.310485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.310633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.310663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.310790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.310820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.310978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.311009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.311143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.311169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.311288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.311315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.311460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.311494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.311606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.311633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.311755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.311782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.311925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.311952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 
00:35:54.028 [2024-10-13 01:46:39.312080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.312120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.312251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.312279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.312400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.312428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.312562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.312591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.312732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.312775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.312893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.312921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.313054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.313098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.313218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.313243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.313401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.313441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.313577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.313607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 
00:35:54.028 [2024-10-13 01:46:39.313773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.313803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.313974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.314009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.314203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.314232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.314340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.314370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.314461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.314499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.314629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.314655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.314763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.314790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.314889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.314919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.315071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.315100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.315197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.315227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 
00:35:54.028 [2024-10-13 01:46:39.315335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.315362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.315484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.315511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.315651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.315677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.315857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.315900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.316000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.316043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.316200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.316230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.316392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.316423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.316552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.316593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.316743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.316801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 00:35:54.028 [2024-10-13 01:46:39.316962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.316993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.028 qpair failed and we were unable to recover it. 
00:35:54.028 [2024-10-13 01:46:39.317092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.028 [2024-10-13 01:46:39.317120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.317257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.317293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.317420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.317451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.317603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.317643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.317748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.317778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.317903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.317932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.318079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.318108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.318267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.318315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.318432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.318466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.318599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.318627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 
00:35:54.029 [2024-10-13 01:46:39.318734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.318761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.318869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.318901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.319140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.319196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.319294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.319325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.319480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.319509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.319608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.319636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.319753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.319781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.319868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.319912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.320081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.320134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.320362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.320413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 
00:35:54.029 [2024-10-13 01:46:39.320553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.320581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.320683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.320713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.320896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.320949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.321130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.321183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.321312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.321342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.321446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.321487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.321633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.321660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.321774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.321804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.321909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.321936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.322071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.322117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 
00:35:54.029 [2024-10-13 01:46:39.322225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.322253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.322372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.322400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.322496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.322523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.322613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.322639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.322775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.322802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.322965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.322995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.323186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.323215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.323350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.323376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.323491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.323518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.323631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.323658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 
00:35:54.029 [2024-10-13 01:46:39.323874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.323928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.324087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.324136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.324319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.324377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.324521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.324549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.324677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.324707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.324858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.324904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.325065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.325121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.325255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.325292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.325414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.325444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.325607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.325638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 
00:35:54.029 [2024-10-13 01:46:39.325767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.325813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.325895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.325922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.326152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.326202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.326342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.326369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.326504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.326564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.326722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.326753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.326875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.326904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.327028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.327057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.327181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.327210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.327327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.327356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 
00:35:54.029 [2024-10-13 01:46:39.327475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.327504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.327603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.327634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.327771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.327802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.327931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.327976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.328077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.328118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.328252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.328292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.328455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.328493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.328582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.328607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.328721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.328747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.328827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.328851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 
00:35:54.029 [2024-10-13 01:46:39.328938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.328964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.329055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.329081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.329221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.329247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.329337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.329365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.329507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.329535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.329662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.329697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.329845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.329870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.330017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.029 [2024-10-13 01:46:39.330042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.029 qpair failed and we were unable to recover it. 00:35:54.029 [2024-10-13 01:46:39.330132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.330157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.330312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.330339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 
00:35:54.030 [2024-10-13 01:46:39.330456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.330487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.330600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.330625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.330713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.330737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.330844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.330868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.330966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.330995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.331115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.331144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.331292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.331334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.331493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.331519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.331656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.331697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.331961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.332010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 
00:35:54.030 [2024-10-13 01:46:39.332164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.332216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.332315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.332340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.332486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.332513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.332641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.332667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.332775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.332804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.332940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.332981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.333152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.333205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.333296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.333336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.333419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.333443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.333550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.333575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 
00:35:54.030 [2024-10-13 01:46:39.333669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.333693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.333790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.333814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.333942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.333975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.334089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.334114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.334224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.334251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.334370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.334412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.334525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.334549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.334634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.334658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.334747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.334790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.334882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.334926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 
00:35:54.030 [2024-10-13 01:46:39.335047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.335075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.335202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.335229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.335375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.335413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.335547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.335578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.335719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.335764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.335855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.335881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.336017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.336060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.336167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.336211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.336331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.336357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.336497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.336535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 
00:35:54.030 [2024-10-13 01:46:39.336665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.336692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.336782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.336807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.336951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.336985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.337065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.337090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.337199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.337258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.337378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.337420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.337588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.337616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.337765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.337795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.337956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.337986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.338190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.338226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 
00:35:54.030 [2024-10-13 01:46:39.338390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.338417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.338537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.338564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.338670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.338696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.338830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.338859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.338984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.339014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.339124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.339153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.339281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.339310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.339438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.339477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.339616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.339646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.339744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.339771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 
00:35:54.030 [2024-10-13 01:46:39.339865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.339894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.339980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.340006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.340148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.340198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.340343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.340397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.340525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.340554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.340695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.340739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.340894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.340927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.341049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.341079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.341272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.341303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 00:35:54.030 [2024-10-13 01:46:39.341412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.030 [2024-10-13 01:46:39.341444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.030 qpair failed and we were unable to recover it. 
00:35:54.030 [2024-10-13 01:46:39.341626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.341666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.341822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.341865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.342024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.342077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.342284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.342326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.342521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.342551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.346603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.346644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.346761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.346790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.346945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.346993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.347111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.347156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.347357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.347413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 
00:35:54.031 [2024-10-13 01:46:39.347520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.347546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.347672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.347718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.347865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.347911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.348000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.348027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.348159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.348200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.348326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.348353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.348440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.348465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.348604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.348634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.348736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.348777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.348904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.348940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 
00:35:54.031 [2024-10-13 01:46:39.349040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.349069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.349173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.349202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.349375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.349421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.349567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.349596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.349728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.349774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.349903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.349950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.350124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.350172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.350269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.350310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.350459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.350495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.350591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.350617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 
00:35:54.031 [2024-10-13 01:46:39.350748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.350785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.350877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.350906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.351043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.351131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.351306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.351366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.351509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.351550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.351647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.351675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.351819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.351846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.352036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.352091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.352321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.352367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.352483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.352527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 
00:35:54.031 [2024-10-13 01:46:39.352621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.352665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.352788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.352818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.352937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.352966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.353098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.353127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.353251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.353280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.353385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.353420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.353585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.353619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.353752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.353797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.353938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.353984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.354079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.354104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 
00:35:54.031 [2024-10-13 01:46:39.354224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.354252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.354351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.354380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.354466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.354501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.354621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.354648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.354773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.354800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.354914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.354942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.355063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.355089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.355190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.355219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.355354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.355381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.355528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.355555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 
00:35:54.031 [2024-10-13 01:46:39.355642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.355669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.355787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.355827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.355923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.355967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.356098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.356128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.356238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.356268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.356390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.356419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.356546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.356573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.356664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.356691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.356801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.356827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 00:35:54.031 [2024-10-13 01:46:39.356935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.031 [2024-10-13 01:46:39.356967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.031 qpair failed and we were unable to recover it. 
00:35:54.031 [2024-10-13 01:46:39.357062 - 01:46:39.389910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.031 [2024-10-13 01:46:39.357062 - 01:46:39.389910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 / 0x7f4830000b90 / 0x7f4834000b90 / 0x7f483c000b90 with addr=10.0.0.2, port=4420
00:35:54.034 qpair failed and we were unable to recover it.
00:35:54.034 [the three messages above repeat verbatim for every reconnection attempt in this interval; only the timestamp and the tqpair pointer vary]
00:35:54.034 [2024-10-13 01:46:39.390055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.034 [2024-10-13 01:46:39.390102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.034 qpair failed and we were unable to recover it. 00:35:54.034 [2024-10-13 01:46:39.390221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.034 [2024-10-13 01:46:39.390250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.034 qpair failed and we were unable to recover it. 00:35:54.034 [2024-10-13 01:46:39.390400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.034 [2024-10-13 01:46:39.390428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.034 qpair failed and we were unable to recover it. 00:35:54.034 [2024-10-13 01:46:39.390568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.034 [2024-10-13 01:46:39.390598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.034 qpair failed and we were unable to recover it. 00:35:54.034 [2024-10-13 01:46:39.390727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.034 [2024-10-13 01:46:39.390756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.034 qpair failed and we were unable to recover it. 00:35:54.034 [2024-10-13 01:46:39.390879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.034 [2024-10-13 01:46:39.390908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.034 qpair failed and we were unable to recover it. 00:35:54.034 [2024-10-13 01:46:39.391063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.034 [2024-10-13 01:46:39.391092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.034 qpair failed and we were unable to recover it. 00:35:54.034 [2024-10-13 01:46:39.391242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.034 [2024-10-13 01:46:39.391270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.034 qpair failed and we were unable to recover it. 00:35:54.034 [2024-10-13 01:46:39.391423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.034 [2024-10-13 01:46:39.391457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.034 qpair failed and we were unable to recover it. 00:35:54.034 [2024-10-13 01:46:39.391577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.034 [2024-10-13 01:46:39.391604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.034 qpair failed and we were unable to recover it. 
00:35:54.034 [2024-10-13 01:46:39.391731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.034 [2024-10-13 01:46:39.391771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.034 qpair failed and we were unable to recover it. 00:35:54.034 [2024-10-13 01:46:39.391903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.034 [2024-10-13 01:46:39.391946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.034 qpair failed and we were unable to recover it. 00:35:54.034 [2024-10-13 01:46:39.392110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.034 [2024-10-13 01:46:39.392140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.034 qpair failed and we were unable to recover it. 00:35:54.034 [2024-10-13 01:46:39.392261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.034 [2024-10-13 01:46:39.392293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.034 qpair failed and we were unable to recover it. 00:35:54.034 [2024-10-13 01:46:39.392400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.034 [2024-10-13 01:46:39.392428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.034 qpair failed and we were unable to recover it. 00:35:54.034 [2024-10-13 01:46:39.392562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.034 [2024-10-13 01:46:39.392594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.034 qpair failed and we were unable to recover it. 00:35:54.034 [2024-10-13 01:46:39.392734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.034 [2024-10-13 01:46:39.392764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.034 qpair failed and we were unable to recover it. 00:35:54.034 [2024-10-13 01:46:39.392979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.034 [2024-10-13 01:46:39.393036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.034 qpair failed and we were unable to recover it. 00:35:54.034 [2024-10-13 01:46:39.393134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.034 [2024-10-13 01:46:39.393163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.034 qpair failed and we were unable to recover it. 00:35:54.034 [2024-10-13 01:46:39.393271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.034 [2024-10-13 01:46:39.393306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.034 qpair failed and we were unable to recover it. 
00:35:54.034 [2024-10-13 01:46:39.393434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.034 [2024-10-13 01:46:39.393462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.034 qpair failed and we were unable to recover it. 00:35:54.034 [2024-10-13 01:46:39.393575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.034 [2024-10-13 01:46:39.393606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.034 qpair failed and we were unable to recover it. 00:35:54.034 [2024-10-13 01:46:39.393772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.034 [2024-10-13 01:46:39.393821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.034 qpair failed and we were unable to recover it. 00:35:54.034 [2024-10-13 01:46:39.393986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.394030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.394169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.394221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.394351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.394379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.394500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.394527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.394643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.394669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.394755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.394781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.394899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.394924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 
00:35:54.035 [2024-10-13 01:46:39.395052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.395080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.395206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.395236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.395389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.395418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.395557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.395583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.395716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.395746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.395870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.395903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.396011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.396041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.396165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.396193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.396315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.396357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.396450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.396483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 
00:35:54.035 [2024-10-13 01:46:39.396599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.396625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.396722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.396747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.396905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.396933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.397051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.397081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.397179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.397210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.397306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.397335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.397462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.397496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.397587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.397614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.397722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.397748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.397894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.397921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 
00:35:54.035 [2024-10-13 01:46:39.398064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.398096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.398190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.398219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.398334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.398362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.398450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.398486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.398649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.398675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.398769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.398795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.398884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.398910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.398996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.399022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.399117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.399145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.399275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.399308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 
00:35:54.035 [2024-10-13 01:46:39.399487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.399514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.399595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.399620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.399722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.399749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.399863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.399889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.400003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.400031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.400129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.400161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.400294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.400322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.400455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.400491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.400625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.400652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.400743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.400769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 
00:35:54.035 [2024-10-13 01:46:39.400879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.400905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.400983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.401008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.401126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.401152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.401282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.401311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.401412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.401444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.401589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.401620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.401740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.401768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.401856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.401896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.401982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.402012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.402168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.402197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 
00:35:54.035 [2024-10-13 01:46:39.402292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.402322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.402479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.402508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.402626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.402655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.402746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.402773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.402927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.402955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.403123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.403183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.403290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.403320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.403419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.403450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.403634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.403664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.403751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.403794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 
00:35:54.035 [2024-10-13 01:46:39.403923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.403954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.404074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.404102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.404227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.404256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.404388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.404421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.035 [2024-10-13 01:46:39.404568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.035 [2024-10-13 01:46:39.404598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.035 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.404709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.404735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.404827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.404870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.404995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.405026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.405107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.405134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.405256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.405286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 
00:35:54.036 [2024-10-13 01:46:39.405421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.405447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.405540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.405568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.405684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.405716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.405889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.405918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.406027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.406056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.406157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.406186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.406318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.406345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.406458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.406491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.406609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.406634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.406724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.406747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 
00:35:54.036 [2024-10-13 01:46:39.406860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.406886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.406966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.406990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.407124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.407154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.407253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.407284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.407416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.407445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.407585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.407611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.407735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.407764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.407873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.407903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.408030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.408060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.408192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.408221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 
00:35:54.036 [2024-10-13 01:46:39.408310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.408339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.408487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.408527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.408661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.408691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.408796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.408826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.408967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.409012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.409121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.409152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.409288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.409315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.409434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.409462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.409589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.409615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.409693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.409724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 
00:35:54.036 [2024-10-13 01:46:39.409820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.409850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.409967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.409995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.410103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.410129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.410226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.410256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.410342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.410369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.410530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.410563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.410788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.410833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.410940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.410988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.411116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.411145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.411265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.411292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 
00:35:54.036 [2024-10-13 01:46:39.411408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.411437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.411555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.411583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.411696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.411727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.411862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.411890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.412033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.412060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.412191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.412219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.412327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.412352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.412441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.412467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.412606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.412632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.412752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.412778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 
00:35:54.036 [2024-10-13 01:46:39.412870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.412895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.412988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.413015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.413116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.413145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.413265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.413294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.413451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.413484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.413599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.413624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.413741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.413768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.413900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.413928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.414052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.414080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.414168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.414196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 
00:35:54.036 [2024-10-13 01:46:39.414289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.414318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.414440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.414468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.414640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.414666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.414755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.414781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.414889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.414915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.415016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.415047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.415148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.415176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.415291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.415337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.415449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.415487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.415598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.415651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 
00:35:54.036 [2024-10-13 01:46:39.415756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.415785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.415940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.036 [2024-10-13 01:46:39.415969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.036 qpair failed and we were unable to recover it. 00:35:54.036 [2024-10-13 01:46:39.416125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.416154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.416276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.416320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.416441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.416467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.416594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.416620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.416703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.416747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.416839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.416867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.416989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.417019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.417120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.417151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 
00:35:54.037 [2024-10-13 01:46:39.417273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.417303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.417458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.417491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.417583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.417610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.417755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.417785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.417942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.417968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.418078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.418124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.418278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.418307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.418435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.418460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.418589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.418615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.418737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.418763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 
00:35:54.037 [2024-10-13 01:46:39.418932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.418960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.419093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.419125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.419245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.419274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.419421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.419452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.419585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.419612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.419725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.419769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.419859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.419896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.420023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.420053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.420181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.420210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.420338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.420367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 
00:35:54.037 [2024-10-13 01:46:39.420503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.420529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.420618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.420644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.420754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.420779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.420888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.420916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.421033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.421062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.421186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.421214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.421351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.421382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.421481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.421512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.421614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.421643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.421774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.421805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 
00:35:54.037 [2024-10-13 01:46:39.421951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.421979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.422062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.422090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.422219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.422281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.422379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.422407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.422529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.422557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.422674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.422703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.422796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.422822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.422906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.422933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.423023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.423052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.423144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.423170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 
00:35:54.037 [2024-10-13 01:46:39.423245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.423269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.423383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.423410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.423500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.423534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.423648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.423674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.423796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.423825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.423951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.423979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.424101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.424128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.424291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.424321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.424413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.424456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.424564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.424605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 
00:35:54.037 [2024-10-13 01:46:39.424741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.424784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.424916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.424945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.425037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.425067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.425248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.425297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.425397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.425427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.425601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.425648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.425786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.425837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.425954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.425994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.426103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.426133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.426268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.426299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 
00:35:54.037 [2024-10-13 01:46:39.426427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.426456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.426600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.426632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.037 [2024-10-13 01:46:39.426822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.037 [2024-10-13 01:46:39.426870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.037 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.426959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.426987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.427075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.427104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.427232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.427260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.427351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.427377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.427544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.427573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.427694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.427722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.427823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.427851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 
00:35:54.038 [2024-10-13 01:46:39.427949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.427978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.428113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.428140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.428256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.428282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.428375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.428402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.428549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.428596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.428726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.428757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.428890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.428921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.429052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.429079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.429221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.429250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.429359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.429386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 
00:35:54.038 [2024-10-13 01:46:39.429502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.429531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.429675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.429702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.429839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.429865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.429989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.430018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.430130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.430157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.430299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.430327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.430421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.430448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.430557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.430597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.430729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.430778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.430898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.430926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 
00:35:54.038 [2024-10-13 01:46:39.431012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.431057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.431161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.431188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.431315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.431341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.431430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.431456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.431598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.431624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.431740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.431791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.431926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.431969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.432098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.432130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.432263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.432295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.432409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.432434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 
00:35:54.038 [2024-10-13 01:46:39.432546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.432586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.432731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.432770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.432915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.432966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.433070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.433100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.433210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.433238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.433379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.433405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.433499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.433531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.433662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.433691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.433863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.433897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.434089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.434138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 
00:35:54.038 [2024-10-13 01:46:39.434246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.434276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.434432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.434485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.434634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.434673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.434778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.434805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.434886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.434910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.434999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.435024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.435135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.435200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.435332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.435371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.435463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.435498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.435613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.435643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 
00:35:54.038 [2024-10-13 01:46:39.435800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.435845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.435951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.435981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.436139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.436171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.436261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.436294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.436409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.436436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.436563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.436593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.436689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.436719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.436838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.436890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.436992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.437023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.437164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.437196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 
00:35:54.038 [2024-10-13 01:46:39.437286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.437331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.437424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.437451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.437560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.437604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.437704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.437733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.437832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.437861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.437996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.438025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.438156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.438188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.038 [2024-10-13 01:46:39.438282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.038 [2024-10-13 01:46:39.438311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.038 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.438446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.438481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.438597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.438641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 
00:35:54.039 [2024-10-13 01:46:39.438773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.438816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.438977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.439008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.439131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.439160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.439260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.439289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.439459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.439495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.439588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.439614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.439729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.439756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.439920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.439965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.440100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.440149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.440271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.440301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 
00:35:54.039 [2024-10-13 01:46:39.440430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.440464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.440579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.440606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.440724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.440751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.440863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.440889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.440986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.441015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.441131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.441160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.441251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.441281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.441451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.441491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.441643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.441671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.441796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.441846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 
00:35:54.039 [2024-10-13 01:46:39.441988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.442033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.442211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.442239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.442324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.442359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.442481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.442509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.442600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.442629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.442772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.442798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.442888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.442915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.443029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.443056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.443137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.443162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.443274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.443304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 
00:35:54.039 [2024-10-13 01:46:39.443395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.443425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.443573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.443605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.443715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.443742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.443830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.443877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.443980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.444007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.444155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.444182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.444298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.444325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.444448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.444490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.444581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.444607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.444724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.444750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 
00:35:54.039 [2024-10-13 01:46:39.444839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.444865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.444957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.444983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.445056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.445080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.445221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.445251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.445393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.445418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.445521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.445561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.445669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.445709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.445846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.445877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.445996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.446025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.446141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.446191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 
00:35:54.039 [2024-10-13 01:46:39.446315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.446343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.446480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.446514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.446623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.446650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.446737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.446764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.446848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.446895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.447007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.447044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.447158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.447187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.447309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.447340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.447437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.447466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.447612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.447637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 
00:35:54.039 [2024-10-13 01:46:39.447722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.447746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.447842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.447867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.448039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.448091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.448196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.448240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.448360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.448394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.448513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.448541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.448685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.448714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.448805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.448834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.448928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.448958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.449076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.449106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 
00:35:54.039 [2024-10-13 01:46:39.449197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.449227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.449332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.449363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.039 qpair failed and we were unable to recover it. 00:35:54.039 [2024-10-13 01:46:39.449535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.039 [2024-10-13 01:46:39.449565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.449696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.449743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.449870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.449898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.450049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.450081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.450227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.450255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.450370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.450400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.450548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.450576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.450664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.450691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 
00:35:54.040 [2024-10-13 01:46:39.450831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.450860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.450954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.450983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.451120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.451171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.451287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.451316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.451480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.451508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.451599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.451623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.451735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.451761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.451920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.451969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.452136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.452188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.452316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.452345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 
00:35:54.040 [2024-10-13 01:46:39.452482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.452509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.452631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.452658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.452749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.452795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.452920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.452949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.453055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.453085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.453179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.453209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.453326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.453355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.453498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.453526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.453663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.453710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.453876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.453906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 
00:35:54.040 [2024-10-13 01:46:39.454058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.454104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.454196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.454225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.454351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.454378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.454478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.454505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.454607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.454636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.454765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.454793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.454917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.454945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.455050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.455095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.455223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.455269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.455384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.455411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 
00:35:54.040 [2024-10-13 01:46:39.455528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.455555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.455646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.455690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.455787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.455816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.455974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.456007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.456108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.456137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.456241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.456268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.456392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.456418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.456503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.456528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.456614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.456641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.456744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.456774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 
00:35:54.040 [2024-10-13 01:46:39.456906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.456950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.457071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.457100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.457204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.457230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.457375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.457406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.457526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.457554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.457667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.457696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.457783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.457813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.457946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.457976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.458103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.458132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.458265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.458311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 
00:35:54.040 [2024-10-13 01:46:39.458400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.458429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.458568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.458595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.458714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.458744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.458874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.458905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.459039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.459068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.459171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.459202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.459345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.459372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.459489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.459519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.459609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.459636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.459730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.459759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 
00:35:54.040 [2024-10-13 01:46:39.459926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.459971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.460107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.460137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.460238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.460270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.460396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.460425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.460546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.460573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.460668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.460715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.040 [2024-10-13 01:46:39.460800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.040 [2024-10-13 01:46:39.460828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.040 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.460935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.460964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.461087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.461117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.461256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.461300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 
00:35:54.041 [2024-10-13 01:46:39.461445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.461480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.461601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.461632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.461759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.461805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.461940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.461984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.462098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.462127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.462250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.462278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.462361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.462387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.462527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.462557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.462643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.462675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.462793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.462824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 
00:35:54.041 [2024-10-13 01:46:39.462977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.463004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.463122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.463149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.463250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.463280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.463398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.463426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.463550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.463579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.463695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.463724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.463843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.463871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.463982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.464012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.464139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.464171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.464322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.464351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 
00:35:54.041 [2024-10-13 01:46:39.464534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.464574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.464678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.464709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.464871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.464901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.464988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.465018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.465144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.465192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.465343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.465372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.465500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.465543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.465671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.465700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.465809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.465835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.465978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.466007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 
00:35:54.041 [2024-10-13 01:46:39.466127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.466156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.466243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.466272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.466367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.466396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.466528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.466555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.466648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.466675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.466822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.466853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.466986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.467016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.467117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.467145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.467240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.467269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.467409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.467440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 
00:35:54.041 [2024-10-13 01:46:39.467568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.467596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.467737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.467784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.467912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.467940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.468053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.468081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.468182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.468211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.468327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.468353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.468498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.468525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.468637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.468664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.468756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.468782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.468900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.468927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 
00:35:54.041 [2024-10-13 01:46:39.469065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.469112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.469203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.469230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.469373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.469401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.469549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.469597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.469682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.469714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.469842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.469869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.469957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.469984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.470099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.470139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.470235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.470264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.470346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.470372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 
00:35:54.041 [2024-10-13 01:46:39.470452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.470485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.470645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.470674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.470809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.470840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.470971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.471000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.471097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.471126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.471287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.471316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.471455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.471492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.471664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.471714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.471912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.471959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.472099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.472143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 
00:35:54.041 [2024-10-13 01:46:39.472241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.472267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.472390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.472418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.472529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.472556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.041 qpair failed and we were unable to recover it. 00:35:54.041 [2024-10-13 01:46:39.472675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.041 [2024-10-13 01:46:39.472702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.472840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.472870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.472985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.473019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.473174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.473204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.473321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.473350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.473444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.473479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.473583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.473617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 
00:35:54.042 [2024-10-13 01:46:39.473747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.473778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.473903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.473949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.474084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.474113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.474213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.474240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.474340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.474380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.474501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.474531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.474672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.474700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.474788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.474816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.474907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.474934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.475029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.475056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 
00:35:54.042 [2024-10-13 01:46:39.475201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.475229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.475342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.475371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.475489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.475517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.475612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.475638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.475771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.475801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.475982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.476011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.476137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.476166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.476268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.476298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.476439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.476465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.476569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.476598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 
00:35:54.042 [2024-10-13 01:46:39.476715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.476759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.476937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.476996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.477141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.477198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.477350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.477379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.477466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.477501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.477614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.477659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.477797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.477842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.477969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.478007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.478156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.478194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.478307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.478334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 
00:35:54.042 [2024-10-13 01:46:39.478421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.478447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.478573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.478612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.478779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.478813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.478937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.478965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.479067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.479096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.479229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.479258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.479398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.479425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.479549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.479579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.479658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.479682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.479796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.479840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 
00:35:54.042 [2024-10-13 01:46:39.479931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.479960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.480046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.480074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.480230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.480259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.480356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.480385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.480528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.480553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.480644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.480697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.480792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.480820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.480916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.480944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.481030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.481060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.481191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.481227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 
00:35:54.042 [2024-10-13 01:46:39.481383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.481415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.481557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.481585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.481686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.481725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.481866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.481897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.482027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.482057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.482189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.482218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.482343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.482372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.482529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.482556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.482706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.482732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.482876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.482903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 
00:35:54.042 [2024-10-13 01:46:39.483000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.483029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.483165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.483209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.483341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.483366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.483485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.483513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.042 qpair failed and we were unable to recover it. 00:35:54.042 [2024-10-13 01:46:39.483656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.042 [2024-10-13 01:46:39.483683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.483833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.483862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.484045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.484075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.484166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.484197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.484322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.484351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.484484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.484511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 
00:35:54.043 [2024-10-13 01:46:39.484620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.484645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.484727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.484756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.484882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.484910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.485016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.485048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.485147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.485176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.485306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.485335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.485438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.485468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.485640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.485667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.485774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.485800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.485910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.485940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 
00:35:54.043 [2024-10-13 01:46:39.486041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.486069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.486197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.486226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.486313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.486354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.486492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.486518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.486660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.486685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.486795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.486820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.486930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.486957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.487076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.487104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.487193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.487227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.487331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.487366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 
00:35:54.043 [2024-10-13 01:46:39.487482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.487510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.487599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.487627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.487730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.487759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.487886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.487915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.488017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.488047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.488147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.488176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.488349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.488375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.488494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.488522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.488609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.488635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.488766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.488796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 
00:35:54.043 [2024-10-13 01:46:39.488917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.488946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.489041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.489070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.489274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.489330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.489486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.489533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.489677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.489703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.489847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.489873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.490029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.490072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.490212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.490240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.490369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.490399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.490514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.490540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 
00:35:54.043 [2024-10-13 01:46:39.490663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.490689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.490811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.490838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.490951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.490980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.491066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.491095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.491214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.491244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.491392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.491432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.491566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.491605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.491715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.491748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.491852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.491882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.491989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.492018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 
00:35:54.043 [2024-10-13 01:46:39.492118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.492161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.492289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.492318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.492412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.492442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.492598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.492628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.492765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.492815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.492908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.492936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.493031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.493059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.493208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.493238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.493388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.493415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.493552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.493588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 
00:35:54.043 [2024-10-13 01:46:39.493713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.493739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.493825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.493851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.493964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.493999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.494112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.494136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.494226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.494252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.494337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.494364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.494480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.494507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.494658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.494684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.494798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.494832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.494937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.494966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 
00:35:54.043 [2024-10-13 01:46:39.495067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.495096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.043 [2024-10-13 01:46:39.495181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.043 [2024-10-13 01:46:39.495210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.043 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.495335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.495364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.495456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.495498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.495632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.495659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.495789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.495818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.495953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.495982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.496105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.496135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.496293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.496322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.496411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.496443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 
00:35:54.044 [2024-10-13 01:46:39.496596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.496627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.496776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.496822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.496929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.496959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.497100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.497149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.497269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.497296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.497438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.497481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.497604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.497633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.497722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.497750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.497895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.497924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.498013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.498041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 
00:35:54.044 [2024-10-13 01:46:39.498138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.498167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.498293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.498321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.498438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.498468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.498592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.498618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.498758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.498784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.498901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.498930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.499030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.499060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.499187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.499215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.499351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.499377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.499468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.499500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 
00:35:54.044 [2024-10-13 01:46:39.499614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.499647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.499791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.499819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.499954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.499983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.500106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.500136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.500240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.500269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.500398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.500427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.500570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.500597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.500717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.500743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.500861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.500887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.500995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.501023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 
00:35:54.044 [2024-10-13 01:46:39.501118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.501148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.501271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.501301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.501426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.501457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.501570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.501596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.501717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.501743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.501829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.501855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.501987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.502016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.502121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.502150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.502301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.502329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.502461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.502497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 
00:35:54.044 [2024-10-13 01:46:39.502611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.502638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.502768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.502798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.502911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.502940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.503092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.503121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.503247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.503275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.503394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.503423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.503532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.503558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.503704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.503735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.503845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.503872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.503962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.503989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 
00:35:54.044 [2024-10-13 01:46:39.504083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.504112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.504264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.504294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.504402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.504431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.504592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.504619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.504726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.504755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.504865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.504891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.504973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.505000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.505126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.505155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.505274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.505305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.505437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.505465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 
00:35:54.044 [2024-10-13 01:46:39.505592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.505619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.505762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.505791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.505947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.505976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.506095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.044 [2024-10-13 01:46:39.506124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.044 qpair failed and we were unable to recover it. 00:35:54.044 [2024-10-13 01:46:39.506225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.506254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 00:35:54.045 [2024-10-13 01:46:39.506346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.506375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 00:35:54.045 [2024-10-13 01:46:39.506521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.506561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 00:35:54.045 [2024-10-13 01:46:39.506712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.506741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 00:35:54.045 [2024-10-13 01:46:39.506879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.506924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 00:35:54.045 [2024-10-13 01:46:39.507062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.507107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 
00:35:54.045 [2024-10-13 01:46:39.507288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.507332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 00:35:54.045 [2024-10-13 01:46:39.507438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.507477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 00:35:54.045 [2024-10-13 01:46:39.507612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.507639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 00:35:54.045 [2024-10-13 01:46:39.507774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.507803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 00:35:54.045 [2024-10-13 01:46:39.507930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.507965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 00:35:54.045 [2024-10-13 01:46:39.508117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.508146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 00:35:54.045 [2024-10-13 01:46:39.508241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.508271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 00:35:54.045 [2024-10-13 01:46:39.508398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.508427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 00:35:54.045 [2024-10-13 01:46:39.508586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.508626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 00:35:54.045 [2024-10-13 01:46:39.508775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.508826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 
00:35:54.045 [2024-10-13 01:46:39.508961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.509007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 00:35:54.045 [2024-10-13 01:46:39.509140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.509170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 00:35:54.045 [2024-10-13 01:46:39.509333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.509361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 00:35:54.045 [2024-10-13 01:46:39.509489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.509518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 00:35:54.045 [2024-10-13 01:46:39.509660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.509688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 00:35:54.045 [2024-10-13 01:46:39.509783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.509811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 00:35:54.045 [2024-10-13 01:46:39.509925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.509959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 00:35:54.045 [2024-10-13 01:46:39.510109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.510139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 00:35:54.045 [2024-10-13 01:46:39.510237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.510263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 00:35:54.045 [2024-10-13 01:46:39.510339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.510363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 
00:35:54.045 [2024-10-13 01:46:39.510449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.510482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 00:35:54.045 [2024-10-13 01:46:39.510592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.510618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 00:35:54.045 [2024-10-13 01:46:39.510706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.510732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 00:35:54.045 [2024-10-13 01:46:39.510822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.510850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 00:35:54.045 [2024-10-13 01:46:39.510962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.510988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 00:35:54.045 [2024-10-13 01:46:39.511081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.511108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 00:35:54.045 [2024-10-13 01:46:39.511220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.511249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 00:35:54.045 [2024-10-13 01:46:39.511390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.511416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 00:35:54.045 [2024-10-13 01:46:39.511512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.511539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 00:35:54.045 [2024-10-13 01:46:39.511662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.511688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 
00:35:54.045 [2024-10-13 01:46:39.511784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.511823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 00:35:54.045 [2024-10-13 01:46:39.511910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.511943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 00:35:54.045 [2024-10-13 01:46:39.512079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.512127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 00:35:54.045 [2024-10-13 01:46:39.512269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.512316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 00:35:54.045 [2024-10-13 01:46:39.512448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.512501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 00:35:54.045 [2024-10-13 01:46:39.512640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.512690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 00:35:54.045 [2024-10-13 01:46:39.512804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.512835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 00:35:54.045 [2024-10-13 01:46:39.512959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.512987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 00:35:54.045 [2024-10-13 01:46:39.513109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.513139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 00:35:54.045 [2024-10-13 01:46:39.513258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.513287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 
00:35:54.045 [2024-10-13 01:46:39.513407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.513438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 00:35:54.045 [2024-10-13 01:46:39.513581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.513608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 00:35:54.045 [2024-10-13 01:46:39.513760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.513788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 00:35:54.045 [2024-10-13 01:46:39.513917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.513947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 00:35:54.045 [2024-10-13 01:46:39.514104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.514133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 00:35:54.045 [2024-10-13 01:46:39.514255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.514303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 00:35:54.045 [2024-10-13 01:46:39.514428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.514459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 00:35:54.045 [2024-10-13 01:46:39.514579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.514607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 00:35:54.045 [2024-10-13 01:46:39.514718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.514748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 00:35:54.045 [2024-10-13 01:46:39.514862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.045 [2024-10-13 01:46:39.514896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.045 qpair failed and we were unable to recover it. 
00:35:54.045 [2024-10-13 01:46:39.518265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.045 [2024-10-13 01:46:39.518304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420
00:35:54.045 qpair failed and we were unable to recover it.
00:35:54.049 [2024-10-13 01:46:39.545926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.049 [2024-10-13 01:46:39.545970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.049 qpair failed and we were unable to recover it. 00:35:54.049 [2024-10-13 01:46:39.546091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.049 [2024-10-13 01:46:39.546121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.049 qpair failed and we were unable to recover it. 00:35:54.049 [2024-10-13 01:46:39.546204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.049 [2024-10-13 01:46:39.546233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.049 qpair failed and we were unable to recover it. 00:35:54.049 [2024-10-13 01:46:39.546360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.049 [2024-10-13 01:46:39.546389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.049 qpair failed and we were unable to recover it. 00:35:54.049 [2024-10-13 01:46:39.546534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.049 [2024-10-13 01:46:39.546561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.049 qpair failed and we were unable to recover it. 00:35:54.049 [2024-10-13 01:46:39.546689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.049 [2024-10-13 01:46:39.546719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.049 qpair failed and we were unable to recover it. 00:35:54.049 [2024-10-13 01:46:39.546865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.049 [2024-10-13 01:46:39.546895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.049 qpair failed and we were unable to recover it. 00:35:54.049 [2024-10-13 01:46:39.547020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.049 [2024-10-13 01:46:39.547049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.049 qpair failed and we were unable to recover it. 00:35:54.049 [2024-10-13 01:46:39.547178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.049 [2024-10-13 01:46:39.547208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.049 qpair failed and we were unable to recover it. 00:35:54.049 [2024-10-13 01:46:39.547305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.049 [2024-10-13 01:46:39.547334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.049 qpair failed and we were unable to recover it. 
00:35:54.049 [2024-10-13 01:46:39.547503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.049 [2024-10-13 01:46:39.547546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.049 qpair failed and we were unable to recover it. 00:35:54.049 [2024-10-13 01:46:39.547626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.049 [2024-10-13 01:46:39.547670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.049 qpair failed and we were unable to recover it. 00:35:54.049 [2024-10-13 01:46:39.547806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.049 [2024-10-13 01:46:39.547835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.049 qpair failed and we were unable to recover it. 00:35:54.049 [2024-10-13 01:46:39.547957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.049 [2024-10-13 01:46:39.547986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.049 qpair failed and we were unable to recover it. 00:35:54.049 [2024-10-13 01:46:39.548078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.049 [2024-10-13 01:46:39.548108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.049 qpair failed and we were unable to recover it. 00:35:54.049 [2024-10-13 01:46:39.548261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.049 [2024-10-13 01:46:39.548291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.049 qpair failed and we were unable to recover it. 00:35:54.049 [2024-10-13 01:46:39.548412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.049 [2024-10-13 01:46:39.548456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.049 qpair failed and we were unable to recover it. 00:35:54.049 [2024-10-13 01:46:39.548585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.049 [2024-10-13 01:46:39.548611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.049 qpair failed and we were unable to recover it. 00:35:54.049 [2024-10-13 01:46:39.548736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.049 [2024-10-13 01:46:39.548779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.049 qpair failed and we were unable to recover it. 00:35:54.049 [2024-10-13 01:46:39.548961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.049 [2024-10-13 01:46:39.548990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.049 qpair failed and we were unable to recover it. 
00:35:54.049 [2024-10-13 01:46:39.549094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.049 [2024-10-13 01:46:39.549138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.049 qpair failed and we were unable to recover it. 00:35:54.049 [2024-10-13 01:46:39.549277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.049 [2024-10-13 01:46:39.549306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.049 qpair failed and we were unable to recover it. 00:35:54.049 [2024-10-13 01:46:39.549430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.049 [2024-10-13 01:46:39.549461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.049 qpair failed and we were unable to recover it. 00:35:54.049 [2024-10-13 01:46:39.549604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.049 [2024-10-13 01:46:39.549635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.049 qpair failed and we were unable to recover it. 00:35:54.050 [2024-10-13 01:46:39.549777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.050 [2024-10-13 01:46:39.549803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.050 qpair failed and we were unable to recover it. 00:35:54.050 [2024-10-13 01:46:39.549927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.050 [2024-10-13 01:46:39.549955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.050 qpair failed and we were unable to recover it. 00:35:54.050 [2024-10-13 01:46:39.550070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.050 [2024-10-13 01:46:39.550099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.050 qpair failed and we were unable to recover it. 00:35:54.050 [2024-10-13 01:46:39.550229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.050 [2024-10-13 01:46:39.550258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.050 qpair failed and we were unable to recover it. 00:35:54.050 [2024-10-13 01:46:39.550394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.050 [2024-10-13 01:46:39.550422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.050 qpair failed and we were unable to recover it. 00:35:54.050 [2024-10-13 01:46:39.550564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.050 [2024-10-13 01:46:39.550605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.050 qpair failed and we were unable to recover it. 
00:35:54.050 [2024-10-13 01:46:39.550729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.050 [2024-10-13 01:46:39.550759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.050 qpair failed and we were unable to recover it. 00:35:54.050 [2024-10-13 01:46:39.550891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.050 [2024-10-13 01:46:39.550936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.050 qpair failed and we were unable to recover it. 00:35:54.050 [2024-10-13 01:46:39.551075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.050 [2024-10-13 01:46:39.551118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.050 qpair failed and we were unable to recover it. 00:35:54.050 [2024-10-13 01:46:39.551220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.050 [2024-10-13 01:46:39.551248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.050 qpair failed and we were unable to recover it. 00:35:54.050 [2024-10-13 01:46:39.551398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.050 [2024-10-13 01:46:39.551427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.050 qpair failed and we were unable to recover it. 00:35:54.050 [2024-10-13 01:46:39.551521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.050 [2024-10-13 01:46:39.551549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.050 qpair failed and we were unable to recover it. 00:35:54.050 [2024-10-13 01:46:39.551669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.050 [2024-10-13 01:46:39.551701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.050 qpair failed and we were unable to recover it. 00:35:54.050 [2024-10-13 01:46:39.551863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.050 [2024-10-13 01:46:39.551892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.050 qpair failed and we were unable to recover it. 00:35:54.050 [2024-10-13 01:46:39.552015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.050 [2024-10-13 01:46:39.552044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.050 qpair failed and we were unable to recover it. 00:35:54.050 [2024-10-13 01:46:39.552174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.050 [2024-10-13 01:46:39.552203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.050 qpair failed and we were unable to recover it. 
00:35:54.050 [2024-10-13 01:46:39.552341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.050 [2024-10-13 01:46:39.552370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.050 qpair failed and we were unable to recover it. 00:35:54.050 [2024-10-13 01:46:39.552458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.050 [2024-10-13 01:46:39.552496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.050 qpair failed and we were unable to recover it. 00:35:54.050 [2024-10-13 01:46:39.552591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.050 [2024-10-13 01:46:39.552619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.050 qpair failed and we were unable to recover it. 00:35:54.050 [2024-10-13 01:46:39.552755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.050 [2024-10-13 01:46:39.552799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.050 qpair failed and we were unable to recover it. 00:35:54.050 [2024-10-13 01:46:39.553027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.050 [2024-10-13 01:46:39.553078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.050 qpair failed and we were unable to recover it. 00:35:54.050 [2024-10-13 01:46:39.553206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.050 [2024-10-13 01:46:39.553251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.050 qpair failed and we were unable to recover it. 00:35:54.050 [2024-10-13 01:46:39.553395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.050 [2024-10-13 01:46:39.553422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.050 qpair failed and we were unable to recover it. 00:35:54.050 [2024-10-13 01:46:39.553501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.050 [2024-10-13 01:46:39.553526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.050 qpair failed and we were unable to recover it. 00:35:54.050 [2024-10-13 01:46:39.553642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.050 [2024-10-13 01:46:39.553669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.050 qpair failed and we were unable to recover it. 00:35:54.050 [2024-10-13 01:46:39.553783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.050 [2024-10-13 01:46:39.553812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.050 qpair failed and we were unable to recover it. 
00:35:54.050 [2024-10-13 01:46:39.553932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.050 [2024-10-13 01:46:39.553968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.050 qpair failed and we were unable to recover it. 00:35:54.050 [2024-10-13 01:46:39.554117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.050 [2024-10-13 01:46:39.554146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.050 qpair failed and we were unable to recover it. 00:35:54.050 [2024-10-13 01:46:39.554296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.050 [2024-10-13 01:46:39.554325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.050 qpair failed and we were unable to recover it. 00:35:54.050 [2024-10-13 01:46:39.554435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.050 [2024-10-13 01:46:39.554490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.050 qpair failed and we were unable to recover it. 00:35:54.050 [2024-10-13 01:46:39.554604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.050 [2024-10-13 01:46:39.554641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.050 qpair failed and we were unable to recover it. 00:35:54.050 [2024-10-13 01:46:39.554810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.050 [2024-10-13 01:46:39.554856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.050 qpair failed and we were unable to recover it. 00:35:54.050 [2024-10-13 01:46:39.554999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.050 [2024-10-13 01:46:39.555044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.050 qpair failed and we were unable to recover it. 00:35:54.050 [2024-10-13 01:46:39.555184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.050 [2024-10-13 01:46:39.555229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.050 qpair failed and we were unable to recover it. 00:35:54.050 [2024-10-13 01:46:39.555370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.050 [2024-10-13 01:46:39.555398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.050 qpair failed and we were unable to recover it. 00:35:54.050 [2024-10-13 01:46:39.555530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.050 [2024-10-13 01:46:39.555577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.050 qpair failed and we were unable to recover it. 
00:35:54.050 [2024-10-13 01:46:39.555749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.050 [2024-10-13 01:46:39.555779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.050 qpair failed and we were unable to recover it. 00:35:54.050 [2024-10-13 01:46:39.555979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.051 [2024-10-13 01:46:39.556007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.051 qpair failed and we were unable to recover it. 00:35:54.051 [2024-10-13 01:46:39.556123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.051 [2024-10-13 01:46:39.556150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.051 qpair failed and we were unable to recover it. 00:35:54.051 [2024-10-13 01:46:39.556273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.051 [2024-10-13 01:46:39.556300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.051 qpair failed and we were unable to recover it. 00:35:54.051 [2024-10-13 01:46:39.556388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.051 [2024-10-13 01:46:39.556415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.051 qpair failed and we were unable to recover it. 00:35:54.051 [2024-10-13 01:46:39.556561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.051 [2024-10-13 01:46:39.556602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.051 qpair failed and we were unable to recover it. 00:35:54.051 [2024-10-13 01:46:39.556698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.051 [2024-10-13 01:46:39.556726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.051 qpair failed and we were unable to recover it. 00:35:54.051 [2024-10-13 01:46:39.556812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.051 [2024-10-13 01:46:39.556838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.051 qpair failed and we were unable to recover it. 00:35:54.051 [2024-10-13 01:46:39.556981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.051 [2024-10-13 01:46:39.557011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.051 qpair failed and we were unable to recover it. 00:35:54.051 [2024-10-13 01:46:39.557129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.051 [2024-10-13 01:46:39.557158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.051 qpair failed and we were unable to recover it. 
00:35:54.051 [2024-10-13 01:46:39.557319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.051 [2024-10-13 01:46:39.557351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.051 qpair failed and we were unable to recover it. 00:35:54.051 [2024-10-13 01:46:39.557515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.051 [2024-10-13 01:46:39.557543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.051 qpair failed and we were unable to recover it. 00:35:54.051 [2024-10-13 01:46:39.557653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.051 [2024-10-13 01:46:39.557680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.051 qpair failed and we were unable to recover it. 00:35:54.051 [2024-10-13 01:46:39.557771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.051 [2024-10-13 01:46:39.557796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.051 qpair failed and we were unable to recover it. 00:35:54.051 [2024-10-13 01:46:39.557906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.051 [2024-10-13 01:46:39.557933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.051 qpair failed and we were unable to recover it. 00:35:54.051 [2024-10-13 01:46:39.558049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.051 [2024-10-13 01:46:39.558077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.051 qpair failed and we were unable to recover it. 00:35:54.051 [2024-10-13 01:46:39.558193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.051 [2024-10-13 01:46:39.558221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.051 qpair failed and we were unable to recover it. 00:35:54.051 [2024-10-13 01:46:39.558348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.051 [2024-10-13 01:46:39.558377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.051 qpair failed and we were unable to recover it. 00:35:54.051 [2024-10-13 01:46:39.558521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.051 [2024-10-13 01:46:39.558549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.051 qpair failed and we were unable to recover it. 00:35:54.051 [2024-10-13 01:46:39.558666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.051 [2024-10-13 01:46:39.558692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.051 qpair failed and we were unable to recover it. 
00:35:54.051 [2024-10-13 01:46:39.558808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.051 [2024-10-13 01:46:39.558835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.051 qpair failed and we were unable to recover it. 00:35:54.051 [2024-10-13 01:46:39.558956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.051 [2024-10-13 01:46:39.558983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.051 qpair failed and we were unable to recover it. 00:35:54.051 [2024-10-13 01:46:39.559103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.051 [2024-10-13 01:46:39.559134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.051 qpair failed and we were unable to recover it. 00:35:54.051 [2024-10-13 01:46:39.559299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.051 [2024-10-13 01:46:39.559325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.051 qpair failed and we were unable to recover it. 00:35:54.051 [2024-10-13 01:46:39.559438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.051 [2024-10-13 01:46:39.559464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.051 qpair failed and we were unable to recover it. 00:35:54.051 [2024-10-13 01:46:39.559562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.051 [2024-10-13 01:46:39.559590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.051 qpair failed and we were unable to recover it. 00:35:54.051 [2024-10-13 01:46:39.559704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.051 [2024-10-13 01:46:39.559736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.051 qpair failed and we were unable to recover it. 00:35:54.051 [2024-10-13 01:46:39.559915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.051 [2024-10-13 01:46:39.559960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.051 qpair failed and we were unable to recover it. 00:35:54.051 [2024-10-13 01:46:39.560157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.051 [2024-10-13 01:46:39.560200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.051 qpair failed and we were unable to recover it. 00:35:54.051 [2024-10-13 01:46:39.560352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.051 [2024-10-13 01:46:39.560379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.051 qpair failed and we were unable to recover it. 
00:35:54.051 [2024-10-13 01:46:39.560490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.051 [2024-10-13 01:46:39.560522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.051 qpair failed and we were unable to recover it. 00:35:54.051 [2024-10-13 01:46:39.560636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.051 [2024-10-13 01:46:39.560662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.051 qpair failed and we were unable to recover it. 00:35:54.051 [2024-10-13 01:46:39.560803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.051 [2024-10-13 01:46:39.560830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.051 qpair failed and we were unable to recover it. 00:35:54.051 [2024-10-13 01:46:39.560948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.051 [2024-10-13 01:46:39.560975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.051 qpair failed and we were unable to recover it. 00:35:54.051 [2024-10-13 01:46:39.561089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.051 [2024-10-13 01:46:39.561116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.051 qpair failed and we were unable to recover it. 00:35:54.051 [2024-10-13 01:46:39.561233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.051 [2024-10-13 01:46:39.561271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.051 qpair failed and we were unable to recover it. 00:35:54.051 [2024-10-13 01:46:39.561361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.051 [2024-10-13 01:46:39.561386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.051 qpair failed and we were unable to recover it. 00:35:54.051 [2024-10-13 01:46:39.561530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.051 [2024-10-13 01:46:39.561558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.051 qpair failed and we were unable to recover it. 00:35:54.051 [2024-10-13 01:46:39.561680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.051 [2024-10-13 01:46:39.561710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.051 qpair failed and we were unable to recover it. 00:35:54.051 [2024-10-13 01:46:39.561802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.051 [2024-10-13 01:46:39.561831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.051 qpair failed and we were unable to recover it. 
00:35:54.051 [2024-10-13 01:46:39.561948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.051 [2024-10-13 01:46:39.561977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.051 qpair failed and we were unable to recover it. 00:35:54.051 [2024-10-13 01:46:39.562106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.051 [2024-10-13 01:46:39.562135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.051 qpair failed and we were unable to recover it. 00:35:54.051 [2024-10-13 01:46:39.562269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.051 [2024-10-13 01:46:39.562296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.051 qpair failed and we were unable to recover it. 00:35:54.051 [2024-10-13 01:46:39.562413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.051 [2024-10-13 01:46:39.562440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.051 qpair failed and we were unable to recover it. 00:35:54.051 [2024-10-13 01:46:39.562588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.051 [2024-10-13 01:46:39.562636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.051 qpair failed and we were unable to recover it. 00:35:54.051 [2024-10-13 01:46:39.562739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.051 [2024-10-13 01:46:39.562769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.051 qpair failed and we were unable to recover it. 00:35:54.051 [2024-10-13 01:46:39.562933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.051 [2024-10-13 01:46:39.562977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.051 qpair failed and we were unable to recover it. 00:35:54.051 [2024-10-13 01:46:39.563138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.052 [2024-10-13 01:46:39.563182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.052 qpair failed and we were unable to recover it. 00:35:54.052 [2024-10-13 01:46:39.563290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.052 [2024-10-13 01:46:39.563317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.052 qpair failed and we were unable to recover it. 00:35:54.052 [2024-10-13 01:46:39.563461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.052 [2024-10-13 01:46:39.563494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.052 qpair failed and we were unable to recover it. 
00:35:54.052 [2024-10-13 01:46:39.563658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.052 [2024-10-13 01:46:39.563704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.052 qpair failed and we were unable to recover it. 00:35:54.052 [2024-10-13 01:46:39.563829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.052 [2024-10-13 01:46:39.563859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.052 qpair failed and we were unable to recover it. 00:35:54.052 [2024-10-13 01:46:39.564013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.052 [2024-10-13 01:46:39.564043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.052 qpair failed and we were unable to recover it. 00:35:54.052 [2024-10-13 01:46:39.564252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.052 [2024-10-13 01:46:39.564279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.052 qpair failed and we were unable to recover it. 00:35:54.052 [2024-10-13 01:46:39.564422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.052 [2024-10-13 01:46:39.564449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.052 qpair failed and we were unable to recover it. 00:35:54.052 [2024-10-13 01:46:39.564550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.052 [2024-10-13 01:46:39.564579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.052 qpair failed and we were unable to recover it. 00:35:54.052 [2024-10-13 01:46:39.564727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.052 [2024-10-13 01:46:39.564753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.052 qpair failed and we were unable to recover it. 00:35:54.052 [2024-10-13 01:46:39.564836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.052 [2024-10-13 01:46:39.564869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.052 qpair failed and we were unable to recover it. 00:35:54.052 [2024-10-13 01:46:39.564958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.052 [2024-10-13 01:46:39.564984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.052 qpair failed and we were unable to recover it. 00:35:54.052 [2024-10-13 01:46:39.565084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.052 [2024-10-13 01:46:39.565123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.052 qpair failed and we were unable to recover it. 
00:35:54.052 [2024-10-13 01:46:39.565246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.052 [2024-10-13 01:46:39.565275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.052 qpair failed and we were unable to recover it. 00:35:54.052 [2024-10-13 01:46:39.565384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.052 [2024-10-13 01:46:39.565410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.052 qpair failed and we were unable to recover it. 00:35:54.052 [2024-10-13 01:46:39.565502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.052 [2024-10-13 01:46:39.565529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.052 qpair failed and we were unable to recover it. 00:35:54.052 [2024-10-13 01:46:39.565643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.052 [2024-10-13 01:46:39.565670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.052 qpair failed and we were unable to recover it. 00:35:54.052 [2024-10-13 01:46:39.565777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.052 [2024-10-13 01:46:39.565806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.052 qpair failed and we were unable to recover it. 00:35:54.052 [2024-10-13 01:46:39.565953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.052 [2024-10-13 01:46:39.565982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.052 qpair failed and we were unable to recover it. 00:35:54.052 [2024-10-13 01:46:39.566100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.052 [2024-10-13 01:46:39.566129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.052 qpair failed and we were unable to recover it. 00:35:54.052 [2024-10-13 01:46:39.566287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.052 [2024-10-13 01:46:39.566334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.052 qpair failed and we were unable to recover it. 00:35:54.052 [2024-10-13 01:46:39.566424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.052 [2024-10-13 01:46:39.566452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.052 qpair failed and we were unable to recover it. 00:35:54.052 [2024-10-13 01:46:39.566575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.052 [2024-10-13 01:46:39.566617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.052 qpair failed and we were unable to recover it. 
00:35:54.052 [2024-10-13 01:46:39.566768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.052 [2024-10-13 01:46:39.566797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420
00:35:54.052 qpair failed and we were unable to recover it.
00:35:54.052 [2024-10-13 01:46:39.567226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.052 [2024-10-13 01:46:39.567294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420
00:35:54.052 qpair failed and we were unable to recover it.
00:35:54.052 [2024-10-13 01:46:39.567423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.052 [2024-10-13 01:46:39.567452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420
00:35:54.052 qpair failed and we were unable to recover it.
00:35:54.052 [2024-10-13 01:46:39.568932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.052 [2024-10-13 01:46:39.568977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420
00:35:54.052 qpair failed and we were unable to recover it.
[the same three-line failure pattern, connect() failed with errno = 111 followed by an unrecoverable sock connection error to addr=10.0.0.2, port=4420, repeats for tqpair=0x1d44b60, 0x7f4830000b90, 0x7f4834000b90, and 0x7f483c000b90 through 2024-10-13 01:46:39.600105]
00:35:54.341 [2024-10-13 01:46:39.600228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-13 01:46:39.600257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-10-13 01:46:39.600366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-13 01:46:39.600393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-10-13 01:46:39.600541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-13 01:46:39.600567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-10-13 01:46:39.600686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-13 01:46:39.600713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-10-13 01:46:39.600856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-13 01:46:39.600885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-10-13 01:46:39.600992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-13 01:46:39.601026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-10-13 01:46:39.601151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-13 01:46:39.601180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-10-13 01:46:39.601350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-13 01:46:39.601411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-10-13 01:46:39.601509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-13 01:46:39.601539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-10-13 01:46:39.601684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-13 01:46:39.601711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 
00:35:54.341 [2024-10-13 01:46:39.601850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-13 01:46:39.601894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-10-13 01:46:39.602068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-13 01:46:39.602113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-10-13 01:46:39.602224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-13 01:46:39.602251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-10-13 01:46:39.602380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-13 01:46:39.602407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-10-13 01:46:39.602520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-13 01:46:39.602559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-10-13 01:46:39.602660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-13 01:46:39.602690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-10-13 01:46:39.602832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-13 01:46:39.602862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-10-13 01:46:39.603036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-13 01:46:39.603093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-10-13 01:46:39.603286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.341 [2024-10-13 01:46:39.603334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.341 qpair failed and we were unable to recover it. 00:35:54.341 [2024-10-13 01:46:39.603436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-13 01:46:39.603487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 
00:35:54.342 [2024-10-13 01:46:39.603589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-13 01:46:39.603616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 00:35:54.342 [2024-10-13 01:46:39.603707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-13 01:46:39.603750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 00:35:54.342 [2024-10-13 01:46:39.603872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-13 01:46:39.603901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 00:35:54.342 [2024-10-13 01:46:39.604024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-13 01:46:39.604053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 00:35:54.342 [2024-10-13 01:46:39.604179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-13 01:46:39.604208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 00:35:54.342 [2024-10-13 01:46:39.604334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-13 01:46:39.604365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 00:35:54.342 [2024-10-13 01:46:39.604462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-13 01:46:39.604500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 00:35:54.342 [2024-10-13 01:46:39.604640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-13 01:46:39.604666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 00:35:54.342 [2024-10-13 01:46:39.604751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-13 01:46:39.604786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 00:35:54.342 [2024-10-13 01:46:39.604899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-13 01:46:39.604926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 
00:35:54.342 [2024-10-13 01:46:39.605073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-13 01:46:39.605118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 00:35:54.342 [2024-10-13 01:46:39.605246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-13 01:46:39.605277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 00:35:54.342 [2024-10-13 01:46:39.605404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-13 01:46:39.605439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 00:35:54.342 [2024-10-13 01:46:39.605572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-13 01:46:39.605599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 00:35:54.342 [2024-10-13 01:46:39.605693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-13 01:46:39.605719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 00:35:54.342 [2024-10-13 01:46:39.605866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-13 01:46:39.605894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 00:35:54.342 [2024-10-13 01:46:39.606011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-13 01:46:39.606040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 00:35:54.342 [2024-10-13 01:46:39.606167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-13 01:46:39.606197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 00:35:54.342 [2024-10-13 01:46:39.606339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-13 01:46:39.606384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 00:35:54.342 [2024-10-13 01:46:39.606559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-13 01:46:39.606587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 
00:35:54.342 [2024-10-13 01:46:39.606705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-13 01:46:39.606732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 00:35:54.342 [2024-10-13 01:46:39.606879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-13 01:46:39.606923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 00:35:54.342 [2024-10-13 01:46:39.607077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-13 01:46:39.607107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 00:35:54.342 [2024-10-13 01:46:39.607282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-13 01:46:39.607311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 00:35:54.342 [2024-10-13 01:46:39.607435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-13 01:46:39.607465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 00:35:54.342 [2024-10-13 01:46:39.607598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-13 01:46:39.607629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 00:35:54.342 [2024-10-13 01:46:39.607720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-13 01:46:39.607748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 00:35:54.342 [2024-10-13 01:46:39.607843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-13 01:46:39.607889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 00:35:54.342 [2024-10-13 01:46:39.608021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-13 01:46:39.608051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 00:35:54.342 [2024-10-13 01:46:39.608172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-13 01:46:39.608202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 
00:35:54.342 [2024-10-13 01:46:39.608297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-13 01:46:39.608327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 00:35:54.342 [2024-10-13 01:46:39.608421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-13 01:46:39.608452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 00:35:54.342 [2024-10-13 01:46:39.608603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-13 01:46:39.608631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 00:35:54.342 [2024-10-13 01:46:39.608712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-13 01:46:39.608739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 00:35:54.342 [2024-10-13 01:46:39.608889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-13 01:46:39.608916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 00:35:54.342 [2024-10-13 01:46:39.609067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-13 01:46:39.609127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 00:35:54.342 [2024-10-13 01:46:39.609257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-13 01:46:39.609287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 00:35:54.342 [2024-10-13 01:46:39.609368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-13 01:46:39.609396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 00:35:54.342 [2024-10-13 01:46:39.609522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-13 01:46:39.609553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 00:35:54.342 [2024-10-13 01:46:39.609719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-10-13 01:46:39.609748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.343 qpair failed and we were unable to recover it. 
00:35:54.343 [2024-10-13 01:46:39.609887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.343 [2024-10-13 01:46:39.609913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.343 qpair failed and we were unable to recover it. 00:35:54.343 [2024-10-13 01:46:39.610000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.343 [2024-10-13 01:46:39.610026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.343 qpair failed and we were unable to recover it. 00:35:54.343 [2024-10-13 01:46:39.610113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.343 [2024-10-13 01:46:39.610138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.343 qpair failed and we were unable to recover it. 00:35:54.343 [2024-10-13 01:46:39.610257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.343 [2024-10-13 01:46:39.610282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.343 qpair failed and we were unable to recover it. 00:35:54.343 [2024-10-13 01:46:39.610393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.343 [2024-10-13 01:46:39.610420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.343 qpair failed and we were unable to recover it. 00:35:54.343 [2024-10-13 01:46:39.610530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.343 [2024-10-13 01:46:39.610559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.343 qpair failed and we were unable to recover it. 00:35:54.343 [2024-10-13 01:46:39.610659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.343 [2024-10-13 01:46:39.610703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.343 qpair failed and we were unable to recover it. 00:35:54.343 [2024-10-13 01:46:39.610843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.343 [2024-10-13 01:46:39.610872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.343 qpair failed and we were unable to recover it. 00:35:54.343 [2024-10-13 01:46:39.611083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.343 [2024-10-13 01:46:39.611112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.343 qpair failed and we were unable to recover it. 00:35:54.343 [2024-10-13 01:46:39.611246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.343 [2024-10-13 01:46:39.611273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.343 qpair failed and we were unable to recover it. 
00:35:54.343 [2024-10-13 01:46:39.611363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.343 [2024-10-13 01:46:39.611390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.343 qpair failed and we were unable to recover it. 00:35:54.343 [2024-10-13 01:46:39.611538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.343 [2024-10-13 01:46:39.611566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.343 qpair failed and we were unable to recover it. 00:35:54.343 [2024-10-13 01:46:39.611659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.343 [2024-10-13 01:46:39.611690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.343 qpair failed and we were unable to recover it. 00:35:54.343 [2024-10-13 01:46:39.611820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.343 [2024-10-13 01:46:39.611849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.343 qpair failed and we were unable to recover it. 00:35:54.343 [2024-10-13 01:46:39.611939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.343 [2024-10-13 01:46:39.611969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.343 qpair failed and we were unable to recover it. 00:35:54.343 [2024-10-13 01:46:39.612077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.343 [2024-10-13 01:46:39.612121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.343 qpair failed and we were unable to recover it. 00:35:54.343 [2024-10-13 01:46:39.612228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.343 [2024-10-13 01:46:39.612260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.343 qpair failed and we were unable to recover it. 00:35:54.343 [2024-10-13 01:46:39.612391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.343 [2024-10-13 01:46:39.612431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.343 qpair failed and we were unable to recover it. 00:35:54.343 [2024-10-13 01:46:39.612570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.343 [2024-10-13 01:46:39.612602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.343 qpair failed and we were unable to recover it. 00:35:54.343 [2024-10-13 01:46:39.612749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.343 [2024-10-13 01:46:39.612803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.343 qpair failed and we were unable to recover it. 
00:35:54.343 [2024-10-13 01:46:39.612940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.343 [2024-10-13 01:46:39.612989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.343 qpair failed and we were unable to recover it. 00:35:54.343 [2024-10-13 01:46:39.613088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.343 [2024-10-13 01:46:39.613117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.343 qpair failed and we were unable to recover it. 00:35:54.343 [2024-10-13 01:46:39.613240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.343 [2024-10-13 01:46:39.613267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.343 qpair failed and we were unable to recover it. 00:35:54.343 [2024-10-13 01:46:39.613351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.343 [2024-10-13 01:46:39.613378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.343 qpair failed and we were unable to recover it. 00:35:54.343 [2024-10-13 01:46:39.613464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.343 [2024-10-13 01:46:39.613496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.343 qpair failed and we were unable to recover it. 00:35:54.343 [2024-10-13 01:46:39.613629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.343 [2024-10-13 01:46:39.613658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.343 qpair failed and we were unable to recover it. 00:35:54.343 [2024-10-13 01:46:39.613766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.343 [2024-10-13 01:46:39.613797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.343 qpair failed and we were unable to recover it. 00:35:54.343 [2024-10-13 01:46:39.613900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.343 [2024-10-13 01:46:39.613944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.343 qpair failed and we were unable to recover it. 00:35:54.343 [2024-10-13 01:46:39.614047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.343 [2024-10-13 01:46:39.614076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.343 qpair failed and we were unable to recover it. 00:35:54.343 [2024-10-13 01:46:39.614203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.343 [2024-10-13 01:46:39.614246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.343 qpair failed and we were unable to recover it. 
00:35:54.343 [2024-10-13 01:46:39.614331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.343 [2024-10-13 01:46:39.614358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.343 qpair failed and we were unable to recover it. 00:35:54.343 [2024-10-13 01:46:39.614476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.343 [2024-10-13 01:46:39.614515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.343 qpair failed and we were unable to recover it. 00:35:54.343 [2024-10-13 01:46:39.614603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.343 [2024-10-13 01:46:39.614629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.343 qpair failed and we were unable to recover it. 00:35:54.344 [2024-10-13 01:46:39.614736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.344 [2024-10-13 01:46:39.614777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.344 qpair failed and we were unable to recover it. 00:35:54.344 [2024-10-13 01:46:39.614872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.344 [2024-10-13 01:46:39.614901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.344 qpair failed and we were unable to recover it. 00:35:54.344 [2024-10-13 01:46:39.615027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.344 [2024-10-13 01:46:39.615063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.344 qpair failed and we were unable to recover it. 00:35:54.344 [2024-10-13 01:46:39.615192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.344 [2024-10-13 01:46:39.615241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.344 qpair failed and we were unable to recover it. 00:35:54.344 [2024-10-13 01:46:39.615401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.344 [2024-10-13 01:46:39.615428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.344 qpair failed and we were unable to recover it. 00:35:54.344 [2024-10-13 01:46:39.615564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.344 [2024-10-13 01:46:39.615592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.344 qpair failed and we were unable to recover it. 00:35:54.344 [2024-10-13 01:46:39.615687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.344 [2024-10-13 01:46:39.615715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.344 qpair failed and we were unable to recover it. 
00:35:54.344 [2024-10-13 01:46:39.615871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.344 [2024-10-13 01:46:39.615915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.344 qpair failed and we were unable to recover it. 00:35:54.344 [2024-10-13 01:46:39.616078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.344 [2024-10-13 01:46:39.616108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.344 qpair failed and we were unable to recover it. 00:35:54.344 [2024-10-13 01:46:39.616216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.344 [2024-10-13 01:46:39.616245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.344 qpair failed and we were unable to recover it. 00:35:54.344 [2024-10-13 01:46:39.616339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.344 [2024-10-13 01:46:39.616368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.344 qpair failed and we were unable to recover it. 00:35:54.344 [2024-10-13 01:46:39.616485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.344 [2024-10-13 01:46:39.616526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.344 qpair failed and we were unable to recover it. 00:35:54.344 [2024-10-13 01:46:39.616649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.344 [2024-10-13 01:46:39.616680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.344 qpair failed and we were unable to recover it. 00:35:54.344 [2024-10-13 01:46:39.616805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.344 [2024-10-13 01:46:39.616834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.344 qpair failed and we were unable to recover it. 00:35:54.344 [2024-10-13 01:46:39.616933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.344 [2024-10-13 01:46:39.616962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.344 qpair failed and we were unable to recover it. 00:35:54.344 [2024-10-13 01:46:39.617082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.344 [2024-10-13 01:46:39.617148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.344 qpair failed and we were unable to recover it. 00:35:54.344 [2024-10-13 01:46:39.617268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.344 [2024-10-13 01:46:39.617297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.344 qpair failed and we were unable to recover it. 
00:35:54.344 [2024-10-13 01:46:39.617393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.344 [2024-10-13 01:46:39.617434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.344 qpair failed and we were unable to recover it. 00:35:54.344 [2024-10-13 01:46:39.617533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.344 [2024-10-13 01:46:39.617559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.344 qpair failed and we were unable to recover it. 00:35:54.344 [2024-10-13 01:46:39.617636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.344 [2024-10-13 01:46:39.617663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.344 qpair failed and we were unable to recover it. 00:35:54.344 [2024-10-13 01:46:39.617756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.344 [2024-10-13 01:46:39.617784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.344 qpair failed and we were unable to recover it. 00:35:54.344 [2024-10-13 01:46:39.617904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.344 [2024-10-13 01:46:39.617933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.344 qpair failed and we were unable to recover it. 00:35:54.344 [2024-10-13 01:46:39.618083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.344 [2024-10-13 01:46:39.618112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.344 qpair failed and we were unable to recover it. 00:35:54.344 [2024-10-13 01:46:39.618234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.344 [2024-10-13 01:46:39.618266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.344 qpair failed and we were unable to recover it. 00:35:54.344 [2024-10-13 01:46:39.618406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.344 [2024-10-13 01:46:39.618433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.344 qpair failed and we were unable to recover it. 00:35:54.344 [2024-10-13 01:46:39.618543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.344 [2024-10-13 01:46:39.618574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.344 qpair failed and we were unable to recover it. 00:35:54.344 [2024-10-13 01:46:39.618657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.344 [2024-10-13 01:46:39.618683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.344 qpair failed and we were unable to recover it. 
00:35:54.344 [2024-10-13 01:46:39.618821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.344 [2024-10-13 01:46:39.618847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.344 qpair failed and we were unable to recover it. 00:35:54.344 [2024-10-13 01:46:39.618954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.344 [2024-10-13 01:46:39.618981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.344 qpair failed and we were unable to recover it. 00:35:54.344 [2024-10-13 01:46:39.619063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.344 [2024-10-13 01:46:39.619089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.344 qpair failed and we were unable to recover it. 00:35:54.344 [2024-10-13 01:46:39.619257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.344 [2024-10-13 01:46:39.619286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.344 qpair failed and we were unable to recover it. 00:35:54.344 [2024-10-13 01:46:39.619416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.344 [2024-10-13 01:46:39.619449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.344 qpair failed and we were unable to recover it. 00:35:54.344 [2024-10-13 01:46:39.619574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.344 [2024-10-13 01:46:39.619602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.344 qpair failed and we were unable to recover it. 00:35:54.344 [2024-10-13 01:46:39.619719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.344 [2024-10-13 01:46:39.619746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.344 qpair failed and we were unable to recover it. 00:35:54.344 [2024-10-13 01:46:39.619842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.344 [2024-10-13 01:46:39.619868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.344 qpair failed and we were unable to recover it. 00:35:54.344 [2024-10-13 01:46:39.620006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.344 [2024-10-13 01:46:39.620032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.344 qpair failed and we were unable to recover it. 00:35:54.344 [2024-10-13 01:46:39.620172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.344 [2024-10-13 01:46:39.620201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.344 qpair failed and we were unable to recover it. 
00:35:54.344 [2024-10-13 01:46:39.620328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.344 [2024-10-13 01:46:39.620357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.344 qpair failed and we were unable to recover it. 00:35:54.344 [2024-10-13 01:46:39.620494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.344 [2024-10-13 01:46:39.620539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.344 qpair failed and we were unable to recover it. 00:35:54.344 [2024-10-13 01:46:39.620678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.344 [2024-10-13 01:46:39.620714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.344 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-13 01:46:39.620869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-13 01:46:39.620913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-13 01:46:39.621005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-13 01:46:39.621033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-13 01:46:39.621169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-13 01:46:39.621200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-13 01:46:39.621336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-13 01:46:39.621363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-13 01:46:39.621456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-13 01:46:39.621490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-13 01:46:39.621578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-13 01:46:39.621623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-13 01:46:39.621729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-13 01:46:39.621772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 
00:35:54.345 [2024-10-13 01:46:39.621865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-13 01:46:39.621894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-13 01:46:39.621984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-13 01:46:39.622014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-13 01:46:39.622157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-13 01:46:39.622203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-13 01:46:39.622368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-13 01:46:39.622395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-13 01:46:39.622487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-13 01:46:39.622518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-13 01:46:39.622610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-13 01:46:39.622637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-13 01:46:39.622741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-13 01:46:39.622770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-13 01:46:39.622895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-13 01:46:39.622924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-13 01:46:39.623047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-13 01:46:39.623078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-13 01:46:39.623251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-13 01:46:39.623294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 
00:35:54.345 [2024-10-13 01:46:39.623397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-13 01:46:39.623438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-13 01:46:39.623580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-13 01:46:39.623609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-13 01:46:39.623715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-13 01:46:39.623745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-13 01:46:39.623887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-13 01:46:39.623918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-13 01:46:39.624028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-13 01:46:39.624078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-13 01:46:39.624212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-13 01:46:39.624241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-13 01:46:39.624336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-13 01:46:39.624381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-13 01:46:39.624504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-13 01:46:39.624533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-13 01:46:39.624631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-13 01:46:39.624659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-13 01:46:39.624767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-13 01:46:39.624794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 
00:35:54.345 [2024-10-13 01:46:39.624878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-13 01:46:39.624904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-13 01:46:39.625007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-13 01:46:39.625037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-13 01:46:39.625189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-13 01:46:39.625218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-13 01:46:39.625333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-13 01:46:39.625362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-13 01:46:39.625483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-13 01:46:39.625521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-13 01:46:39.625637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-13 01:46:39.625664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-13 01:46:39.625808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-13 01:46:39.625843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-13 01:46:39.625959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-13 01:46:39.626004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-13 01:46:39.626117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-13 01:46:39.626147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-13 01:46:39.626310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-13 01:46:39.626339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 
00:35:54.345 [2024-10-13 01:46:39.626447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-13 01:46:39.626485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-13 01:46:39.626585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-13 01:46:39.626611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-13 01:46:39.626702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.345 [2024-10-13 01:46:39.626728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.345 qpair failed and we were unable to recover it. 00:35:54.345 [2024-10-13 01:46:39.626825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-13 01:46:39.626851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-13 01:46:39.626993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-13 01:46:39.627019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-13 01:46:39.627119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-13 01:46:39.627149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-13 01:46:39.627278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-13 01:46:39.627307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-13 01:46:39.627419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-13 01:46:39.627446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-13 01:46:39.627568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-13 01:46:39.627596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-13 01:46:39.627688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-13 01:46:39.627716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 
00:35:54.346 [2024-10-13 01:46:39.627857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-13 01:46:39.627887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-13 01:46:39.627982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-13 01:46:39.628012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-13 01:46:39.628136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-13 01:46:39.628166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-13 01:46:39.628298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-13 01:46:39.628330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-13 01:46:39.628453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-13 01:46:39.628493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-13 01:46:39.628642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-13 01:46:39.628682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-13 01:46:39.628771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-13 01:46:39.628800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-13 01:46:39.628938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-13 01:46:39.628970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-13 01:46:39.629127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-13 01:46:39.629179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-13 01:46:39.629310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-13 01:46:39.629350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 
00:35:54.346 [2024-10-13 01:46:39.629482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-13 01:46:39.629511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-13 01:46:39.629602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-13 01:46:39.629629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-13 01:46:39.629710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-13 01:46:39.629735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-13 01:46:39.629824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-13 01:46:39.629857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-13 01:46:39.629939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-13 01:46:39.629964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-13 01:46:39.630078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-13 01:46:39.630107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-13 01:46:39.630229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-13 01:46:39.630255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-13 01:46:39.630343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-13 01:46:39.630369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-13 01:46:39.630484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-13 01:46:39.630511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-13 01:46:39.630605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-13 01:46:39.630631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 
00:35:54.346 [2024-10-13 01:46:39.630723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-13 01:46:39.630749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-13 01:46:39.630854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-13 01:46:39.630884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-13 01:46:39.631003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-13 01:46:39.631032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-13 01:46:39.631152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-13 01:46:39.631181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-13 01:46:39.631321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-13 01:46:39.631351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-13 01:46:39.631501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-13 01:46:39.631530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-13 01:46:39.631658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-13 01:46:39.631703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-13 01:46:39.631824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-13 01:46:39.631852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-13 01:46:39.631976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-13 01:46:39.632021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-13 01:46:39.632139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-13 01:46:39.632166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 
00:35:54.346 [2024-10-13 01:46:39.632290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-13 01:46:39.632318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-13 01:46:39.632459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-13 01:46:39.632491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-13 01:46:39.632604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.346 [2024-10-13 01:46:39.632634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.346 qpair failed and we were unable to recover it. 00:35:54.346 [2024-10-13 01:46:39.632773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-13 01:46:39.632800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-13 01:46:39.632964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-13 01:46:39.632993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-13 01:46:39.633100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-13 01:46:39.633130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-13 01:46:39.633259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-13 01:46:39.633289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-13 01:46:39.633387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-13 01:46:39.633417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-13 01:46:39.633547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-13 01:46:39.633574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-13 01:46:39.633710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-13 01:46:39.633739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 
00:35:54.347 [2024-10-13 01:46:39.633863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-13 01:46:39.633898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-13 01:46:39.634045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-13 01:46:39.634089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-13 01:46:39.634279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-13 01:46:39.634313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-13 01:46:39.634428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-13 01:46:39.634455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-13 01:46:39.634556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-13 01:46:39.634584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-13 01:46:39.634726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-13 01:46:39.634776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-13 01:46:39.634920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-13 01:46:39.634966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-13 01:46:39.635098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-13 01:46:39.635130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-13 01:46:39.635234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-13 01:46:39.635263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-13 01:46:39.635412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-13 01:46:39.635441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 
00:35:54.347 [2024-10-13 01:46:39.635557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-13 01:46:39.635610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-13 01:46:39.635752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-13 01:46:39.635785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-13 01:46:39.635908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-13 01:46:39.635939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-13 01:46:39.636063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-13 01:46:39.636093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-13 01:46:39.636254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-13 01:46:39.636282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-13 01:46:39.636375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-13 01:46:39.636405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-13 01:46:39.636547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-13 01:46:39.636577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-13 01:46:39.636715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-13 01:46:39.636764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-13 01:46:39.636900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-13 01:46:39.636931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-13 01:46:39.637083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-13 01:46:39.637129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 
00:35:54.347 [2024-10-13 01:46:39.637240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-13 01:46:39.637275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-13 01:46:39.637397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-13 01:46:39.637427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-13 01:46:39.637558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-13 01:46:39.637585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-13 01:46:39.637690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-13 01:46:39.637734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-13 01:46:39.637873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-13 01:46:39.637902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-13 01:46:39.638056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-13 01:46:39.638084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-13 01:46:39.638188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-13 01:46:39.638217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-13 01:46:39.638343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-10-13 01:46:39.638381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-10-13 01:46:39.638520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-13 01:46:39.638550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-13 01:46:39.638657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-13 01:46:39.638686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 
00:35:54.348 [2024-10-13 01:46:39.638776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-13 01:46:39.638804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-13 01:46:39.638927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-13 01:46:39.638956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-13 01:46:39.639055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-13 01:46:39.639085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-13 01:46:39.639184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-13 01:46:39.639212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-13 01:46:39.639300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-13 01:46:39.639329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-13 01:46:39.639435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-13 01:46:39.639462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-13 01:46:39.639561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-13 01:46:39.639588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-13 01:46:39.639732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-13 01:46:39.639776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-13 01:46:39.639927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-13 01:46:39.639956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-13 01:46:39.640088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-13 01:46:39.640118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 
00:35:54.348 [2024-10-13 01:46:39.640257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-13 01:46:39.640301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-13 01:46:39.640459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-13 01:46:39.640497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-13 01:46:39.640618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-13 01:46:39.640654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-13 01:46:39.640782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-13 01:46:39.640809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-13 01:46:39.640927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-13 01:46:39.640959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-13 01:46:39.641074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-13 01:46:39.641100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-13 01:46:39.641271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-13 01:46:39.641301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-13 01:46:39.641404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-13 01:46:39.641432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-13 01:46:39.641562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-13 01:46:39.641601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-13 01:46:39.641725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-13 01:46:39.641753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 
00:35:54.348 [2024-10-13 01:46:39.641869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-13 01:46:39.641895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-13 01:46:39.641984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-13 01:46:39.642012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-13 01:46:39.642148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-13 01:46:39.642177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-13 01:46:39.642265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-13 01:46:39.642294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-13 01:46:39.642441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-13 01:46:39.642483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-13 01:46:39.642647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-13 01:46:39.642673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-13 01:46:39.642829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-13 01:46:39.642856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-13 01:46:39.642940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-13 01:46:39.642966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-13 01:46:39.643059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-13 01:46:39.643088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-13 01:46:39.643220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-13 01:46:39.643288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 
00:35:54.348 [2024-10-13 01:46:39.643417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-13 01:46:39.643445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-13 01:46:39.643573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-13 01:46:39.643612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-13 01:46:39.643717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-13 01:46:39.643744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-13 01:46:39.643885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-13 01:46:39.643916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-13 01:46:39.644081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-13 01:46:39.644112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-13 01:46:39.644254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-13 01:46:39.644281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-13 01:46:39.644397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-13 01:46:39.644423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-13 01:46:39.644548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-10-13 01:46:39.644576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-10-13 01:46:39.644698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-13 01:46:39.644745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-13 01:46:39.644842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-13 01:46:39.644872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 
00:35:54.349 [2024-10-13 01:46:39.644999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-13 01:46:39.645029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-13 01:46:39.645151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-13 01:46:39.645182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-13 01:46:39.645289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-13 01:46:39.645316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-13 01:46:39.645401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-13 01:46:39.645425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-13 01:46:39.645528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-13 01:46:39.645555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-13 01:46:39.645641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-13 01:46:39.645667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-13 01:46:39.645784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-13 01:46:39.645813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-13 01:46:39.645940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-13 01:46:39.645969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-13 01:46:39.646125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-13 01:46:39.646154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-13 01:46:39.646262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-13 01:46:39.646293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 
00:35:54.349 [2024-10-13 01:46:39.646385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-13 01:46:39.646412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-13 01:46:39.646549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-13 01:46:39.646579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-13 01:46:39.646708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-13 01:46:39.646737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-13 01:46:39.646858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-13 01:46:39.646907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-13 01:46:39.647039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-13 01:46:39.647068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-13 01:46:39.647172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-13 01:46:39.647203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-13 01:46:39.647306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-13 01:46:39.647332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-13 01:46:39.647445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-13 01:46:39.647479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-13 01:46:39.647621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-13 01:46:39.647647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-13 01:46:39.647751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-13 01:46:39.647781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 
00:35:54.349 [2024-10-13 01:46:39.647901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-13 01:46:39.647930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-13 01:46:39.648036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-13 01:46:39.648067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-13 01:46:39.648190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-13 01:46:39.648220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-13 01:46:39.648342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-13 01:46:39.648372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-13 01:46:39.648476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-13 01:46:39.648507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-13 01:46:39.648620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-13 01:46:39.648647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-13 01:46:39.648740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-13 01:46:39.648769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-13 01:46:39.648874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-13 01:46:39.648903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-13 01:46:39.649023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-13 01:46:39.649053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-13 01:46:39.649199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-13 01:46:39.649228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 
00:35:54.349 [2024-10-13 01:46:39.649368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-13 01:46:39.649399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-13 01:46:39.649590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-13 01:46:39.649629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-13 01:46:39.649777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-13 01:46:39.649810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-13 01:46:39.649973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-13 01:46:39.650019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-13 01:46:39.650196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-13 01:46:39.650248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-13 01:46:39.650368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-13 01:46:39.650407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-13 01:46:39.650566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-13 01:46:39.650597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-13 01:46:39.650725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-10-13 01:46:39.650763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-10-13 01:46:39.650920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-13 01:46:39.650956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-13 01:46:39.651142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-13 01:46:39.651186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 
00:35:54.350 [2024-10-13 01:46:39.651324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-13 01:46:39.651355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-13 01:46:39.651459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-13 01:46:39.651497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-13 01:46:39.651657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-13 01:46:39.651682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-13 01:46:39.651766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-13 01:46:39.651790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-13 01:46:39.651921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-13 01:46:39.651953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-13 01:46:39.652084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-13 01:46:39.652113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-13 01:46:39.652243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-13 01:46:39.652273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-13 01:46:39.652389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-13 01:46:39.652415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-13 01:46:39.652535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-13 01:46:39.652561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-13 01:46:39.652646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-13 01:46:39.652672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 
00:35:54.350 [2024-10-13 01:46:39.652806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-13 01:46:39.652835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-13 01:46:39.652952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-13 01:46:39.652980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-13 01:46:39.653138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-13 01:46:39.653167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-13 01:46:39.653293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-13 01:46:39.653322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-13 01:46:39.653482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-13 01:46:39.653509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-13 01:46:39.653617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-13 01:46:39.653657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-13 01:46:39.653778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-13 01:46:39.653807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-13 01:46:39.653945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-13 01:46:39.653976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-13 01:46:39.654153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-13 01:46:39.654217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-13 01:46:39.654321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-13 01:46:39.654350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 
00:35:54.350 [2024-10-13 01:46:39.654456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-13 01:46:39.654493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-13 01:46:39.654653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-13 01:46:39.654680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-13 01:46:39.654772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-13 01:46:39.654799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-13 01:46:39.654905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-13 01:46:39.654948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-13 01:46:39.655098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-13 01:46:39.655128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-13 01:46:39.655235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-13 01:46:39.655265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-13 01:46:39.655414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-13 01:46:39.655453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-13 01:46:39.655591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-13 01:46:39.655620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-13 01:46:39.655743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-13 01:46:39.655787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-13 01:46:39.655915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-13 01:46:39.655961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 
00:35:54.350 [2024-10-13 01:46:39.656104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-13 01:46:39.656131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-13 01:46:39.656272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-13 01:46:39.656301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-13 01:46:39.656419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-13 01:46:39.656445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-13 01:46:39.656567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-13 01:46:39.656596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-13 01:46:39.656682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-13 01:46:39.656709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-13 01:46:39.656823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-13 01:46:39.656850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-13 01:46:39.656993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-13 01:46:39.657020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-10-13 01:46:39.657100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-10-13 01:46:39.657125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-13 01:46:39.657229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-13 01:46:39.657275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-13 01:46:39.657408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-13 01:46:39.657438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 
00:35:54.351 [2024-10-13 01:46:39.657565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-13 01:46:39.657593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-13 01:46:39.657709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-13 01:46:39.657736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-13 01:46:39.657822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-13 01:46:39.657849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-13 01:46:39.657935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-13 01:46:39.657962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-13 01:46:39.658055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-13 01:46:39.658083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-13 01:46:39.658183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-13 01:46:39.658215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-13 01:46:39.658327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-13 01:46:39.658354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-13 01:46:39.658445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-13 01:46:39.658487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-13 01:46:39.658606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-13 01:46:39.658633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-13 01:46:39.658727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-13 01:46:39.658755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 
00:35:54.351 [2024-10-13 01:46:39.658851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-13 01:46:39.658877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-13 01:46:39.658988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-13 01:46:39.659018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-13 01:46:39.659186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-13 01:46:39.659215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-13 01:46:39.659339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-13 01:46:39.659368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-13 01:46:39.659534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-13 01:46:39.659567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-13 01:46:39.659718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-13 01:46:39.659766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-13 01:46:39.659913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-13 01:46:39.659946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-13 01:46:39.660066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-13 01:46:39.660095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-13 01:46:39.660245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-13 01:46:39.660288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-13 01:46:39.660447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-13 01:46:39.660494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 
00:35:54.351 [2024-10-13 01:46:39.660599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-13 01:46:39.660626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-13 01:46:39.660742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-13 01:46:39.660770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-13 01:46:39.660877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-13 01:46:39.660906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-13 01:46:39.661005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-13 01:46:39.661034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-13 01:46:39.661152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-13 01:46:39.661183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-13 01:46:39.661325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-13 01:46:39.661360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-13 01:46:39.661482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-13 01:46:39.661510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-13 01:46:39.661647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-13 01:46:39.661694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-13 01:46:39.661840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-13 01:46:39.661888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-13 01:46:39.662032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-13 01:46:39.662061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 
00:35:54.351 [2024-10-13 01:46:39.662200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-13 01:46:39.662245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-13 01:46:39.662367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-13 01:46:39.662395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it. 00:35:54.351 [2024-10-13 01:46:39.662507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-10-13 01:46:39.662534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.352 qpair failed and we were unable to recover it. 00:35:54.352 [2024-10-13 01:46:39.662624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.352 [2024-10-13 01:46:39.662652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.352 qpair failed and we were unable to recover it. 00:35:54.352 [2024-10-13 01:46:39.662768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.352 [2024-10-13 01:46:39.662798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.352 qpair failed and we were unable to recover it. 00:35:54.352 [2024-10-13 01:46:39.662951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.352 [2024-10-13 01:46:39.662980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.352 qpair failed and we were unable to recover it. 00:35:54.352 [2024-10-13 01:46:39.663100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.352 [2024-10-13 01:46:39.663130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.352 qpair failed and we were unable to recover it. 00:35:54.352 [2024-10-13 01:46:39.663224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.352 [2024-10-13 01:46:39.663255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.352 qpair failed and we were unable to recover it. 00:35:54.352 [2024-10-13 01:46:39.663342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.352 [2024-10-13 01:46:39.663372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.352 qpair failed and we were unable to recover it. 00:35:54.352 [2024-10-13 01:46:39.663476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.352 [2024-10-13 01:46:39.663521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.352 qpair failed and we were unable to recover it. 
00:35:54.352 [2024-10-13 01:46:39.663648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.352 [2024-10-13 01:46:39.663675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.352 qpair failed and we were unable to recover it. 00:35:54.352 [2024-10-13 01:46:39.663760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.352 [2024-10-13 01:46:39.663789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.352 qpair failed and we were unable to recover it. 00:35:54.352 [2024-10-13 01:46:39.663883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.352 [2024-10-13 01:46:39.663926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.352 qpair failed and we were unable to recover it. 00:35:54.352 [2024-10-13 01:46:39.664022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.352 [2024-10-13 01:46:39.664051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.352 qpair failed and we were unable to recover it. 00:35:54.352 [2024-10-13 01:46:39.664143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.352 [2024-10-13 01:46:39.664172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.352 qpair failed and we were unable to recover it. 00:35:54.352 [2024-10-13 01:46:39.664293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.352 [2024-10-13 01:46:39.664324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.352 qpair failed and we were unable to recover it. 00:35:54.352 [2024-10-13 01:46:39.664462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.352 [2024-10-13 01:46:39.664500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.352 qpair failed and we were unable to recover it. 00:35:54.352 [2024-10-13 01:46:39.664618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.352 [2024-10-13 01:46:39.664646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.352 qpair failed and we were unable to recover it. 00:35:54.352 [2024-10-13 01:46:39.664805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.352 [2024-10-13 01:46:39.664852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.352 qpair failed and we were unable to recover it. 00:35:54.352 [2024-10-13 01:46:39.664966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.352 [2024-10-13 01:46:39.665011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.352 qpair failed and we were unable to recover it. 
00:35:54.352 [2024-10-13 01:46:39.665177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.352 [2024-10-13 01:46:39.665224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.352 qpair failed and we were unable to recover it. 00:35:54.352 [2024-10-13 01:46:39.665353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.352 [2024-10-13 01:46:39.665380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.352 qpair failed and we were unable to recover it. 00:35:54.352 [2024-10-13 01:46:39.665483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.352 [2024-10-13 01:46:39.665512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.352 qpair failed and we were unable to recover it. 00:35:54.352 [2024-10-13 01:46:39.665597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.352 [2024-10-13 01:46:39.665622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.352 qpair failed and we were unable to recover it. 00:35:54.352 [2024-10-13 01:46:39.665706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.352 [2024-10-13 01:46:39.665733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.352 qpair failed and we were unable to recover it. 00:35:54.352 [2024-10-13 01:46:39.665876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.352 [2024-10-13 01:46:39.665902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.352 qpair failed and we were unable to recover it. 00:35:54.352 [2024-10-13 01:46:39.665982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.352 [2024-10-13 01:46:39.666026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.352 qpair failed and we were unable to recover it. 00:35:54.352 [2024-10-13 01:46:39.666192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.352 [2024-10-13 01:46:39.666247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.352 qpair failed and we were unable to recover it. 00:35:54.352 [2024-10-13 01:46:39.666391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.352 [2024-10-13 01:46:39.666417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.352 qpair failed and we were unable to recover it. 00:35:54.352 [2024-10-13 01:46:39.666508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.352 [2024-10-13 01:46:39.666533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.352 qpair failed and we were unable to recover it. 
00:35:54.352 [2024-10-13 01:46:39.666619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.352 [2024-10-13 01:46:39.666645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.352 qpair failed and we were unable to recover it. 00:35:54.352 [2024-10-13 01:46:39.666761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.352 [2024-10-13 01:46:39.666790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.352 qpair failed and we were unable to recover it. 00:35:54.352 [2024-10-13 01:46:39.666919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.352 [2024-10-13 01:46:39.666948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.352 qpair failed and we were unable to recover it. 00:35:54.352 [2024-10-13 01:46:39.667075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.352 [2024-10-13 01:46:39.667104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.352 qpair failed and we were unable to recover it. 00:35:54.352 [2024-10-13 01:46:39.667197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.352 [2024-10-13 01:46:39.667227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.352 qpair failed and we were unable to recover it. 00:35:54.352 [2024-10-13 01:46:39.667342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.352 [2024-10-13 01:46:39.667377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.352 qpair failed and we were unable to recover it. 00:35:54.352 [2024-10-13 01:46:39.667493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.352 [2024-10-13 01:46:39.667538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.352 qpair failed and we were unable to recover it. 00:35:54.352 [2024-10-13 01:46:39.667630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.352 [2024-10-13 01:46:39.667656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.352 qpair failed and we were unable to recover it. 00:35:54.352 [2024-10-13 01:46:39.667744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.352 [2024-10-13 01:46:39.667770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.352 qpair failed and we were unable to recover it. 00:35:54.352 [2024-10-13 01:46:39.667893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.352 [2024-10-13 01:46:39.667937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.352 qpair failed and we were unable to recover it. 
00:35:54.352 [2024-10-13 01:46:39.668022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.352 [2024-10-13 01:46:39.668051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.352 qpair failed and we were unable to recover it. 00:35:54.352 [2024-10-13 01:46:39.668199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.352 [2024-10-13 01:46:39.668229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.353 qpair failed and we were unable to recover it. 00:35:54.353 [2024-10-13 01:46:39.668333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.353 [2024-10-13 01:46:39.668360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.353 qpair failed and we were unable to recover it. 00:35:54.353 [2024-10-13 01:46:39.668444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.353 [2024-10-13 01:46:39.668478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.353 qpair failed and we were unable to recover it. 00:35:54.353 [2024-10-13 01:46:39.668589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.353 [2024-10-13 01:46:39.668621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.353 qpair failed and we were unable to recover it. 00:35:54.353 [2024-10-13 01:46:39.668723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.353 [2024-10-13 01:46:39.668752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.353 qpair failed and we were unable to recover it. 00:35:54.353 [2024-10-13 01:46:39.668846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.353 [2024-10-13 01:46:39.668874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.353 qpair failed and we were unable to recover it. 00:35:54.353 [2024-10-13 01:46:39.669004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.353 [2024-10-13 01:46:39.669033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.353 qpair failed and we were unable to recover it. 00:35:54.353 [2024-10-13 01:46:39.669132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.353 [2024-10-13 01:46:39.669157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.353 qpair failed and we were unable to recover it. 00:35:54.353 [2024-10-13 01:46:39.669345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.353 [2024-10-13 01:46:39.669375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.353 qpair failed and we were unable to recover it. 
00:35:54.353 [2024-10-13 01:46:39.669477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.353 [2024-10-13 01:46:39.669505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.353 qpair failed and we were unable to recover it. 00:35:54.353 [2024-10-13 01:46:39.669652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.353 [2024-10-13 01:46:39.669697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.353 qpair failed and we were unable to recover it. 00:35:54.353 [2024-10-13 01:46:39.669803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.353 [2024-10-13 01:46:39.669834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.353 qpair failed and we were unable to recover it. 00:35:54.353 [2024-10-13 01:46:39.670012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.353 [2024-10-13 01:46:39.670063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.353 qpair failed and we were unable to recover it. 00:35:54.353 [2024-10-13 01:46:39.670223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.353 [2024-10-13 01:46:39.670265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.353 qpair failed and we were unable to recover it. 00:35:54.353 [2024-10-13 01:46:39.670365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.353 [2024-10-13 01:46:39.670391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.353 qpair failed and we were unable to recover it. 00:35:54.353 [2024-10-13 01:46:39.670498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.353 [2024-10-13 01:46:39.670524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.353 qpair failed and we were unable to recover it. 00:35:54.353 [2024-10-13 01:46:39.670677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.353 [2024-10-13 01:46:39.670707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.353 qpair failed and we were unable to recover it. 00:35:54.353 [2024-10-13 01:46:39.670797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.353 [2024-10-13 01:46:39.670826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.353 qpair failed and we were unable to recover it. 00:35:54.353 [2024-10-13 01:46:39.670980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.353 [2024-10-13 01:46:39.671028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.353 qpair failed and we were unable to recover it. 
00:35:54.353 [2024-10-13 01:46:39.671147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.353 [2024-10-13 01:46:39.671205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.353 qpair failed and we were unable to recover it. 00:35:54.353 [2024-10-13 01:46:39.671308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.353 [2024-10-13 01:46:39.671342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.353 qpair failed and we were unable to recover it. 00:35:54.353 [2024-10-13 01:46:39.671460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.353 [2024-10-13 01:46:39.671505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.353 qpair failed and we were unable to recover it. 00:35:54.353 [2024-10-13 01:46:39.671677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.353 [2024-10-13 01:46:39.671726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.353 qpair failed and we were unable to recover it. 00:35:54.353 [2024-10-13 01:46:39.671851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.353 [2024-10-13 01:46:39.671884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.353 qpair failed and we were unable to recover it. 00:35:54.353 [2024-10-13 01:46:39.672046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.353 [2024-10-13 01:46:39.672097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.353 qpair failed and we were unable to recover it. 00:35:54.353 [2024-10-13 01:46:39.672244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.353 [2024-10-13 01:46:39.672297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.353 qpair failed and we were unable to recover it. 00:35:54.353 [2024-10-13 01:46:39.672401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.353 [2024-10-13 01:46:39.672430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.353 qpair failed and we were unable to recover it. 00:35:54.353 [2024-10-13 01:46:39.672541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.353 [2024-10-13 01:46:39.672570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.353 qpair failed and we were unable to recover it. 00:35:54.353 [2024-10-13 01:46:39.672676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.353 [2024-10-13 01:46:39.672705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.353 qpair failed and we were unable to recover it. 
00:35:54.353 [2024-10-13 01:46:39.672813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.353 [2024-10-13 01:46:39.672840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.353 qpair failed and we were unable to recover it. 00:35:54.353 [2024-10-13 01:46:39.672942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.353 [2024-10-13 01:46:39.672970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.353 qpair failed and we were unable to recover it. 00:35:54.353 [2024-10-13 01:46:39.673120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.353 [2024-10-13 01:46:39.673149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.353 qpair failed and we were unable to recover it. 00:35:54.353 [2024-10-13 01:46:39.673278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.353 [2024-10-13 01:46:39.673306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.353 qpair failed and we were unable to recover it. 00:35:54.353 [2024-10-13 01:46:39.673409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.353 [2024-10-13 01:46:39.673441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.353 qpair failed and we were unable to recover it. 00:35:54.353 [2024-10-13 01:46:39.673586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.353 [2024-10-13 01:46:39.673634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.353 qpair failed and we were unable to recover it. 00:35:54.353 [2024-10-13 01:46:39.673750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.353 [2024-10-13 01:46:39.673779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.353 qpair failed and we were unable to recover it. 00:35:54.353 [2024-10-13 01:46:39.673902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.353 [2024-10-13 01:46:39.673931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.353 qpair failed and we were unable to recover it. 00:35:54.353 [2024-10-13 01:46:39.674024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.353 [2024-10-13 01:46:39.674053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.353 qpair failed and we were unable to recover it. 00:35:54.353 [2024-10-13 01:46:39.674152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.353 [2024-10-13 01:46:39.674181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.353 qpair failed and we were unable to recover it. 
00:35:54.353 [2024-10-13 01:46:39.674352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.353 [2024-10-13 01:46:39.674381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.353 qpair failed and we were unable to recover it. 00:35:54.353 [2024-10-13 01:46:39.674483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.353 [2024-10-13 01:46:39.674511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 00:35:54.354 [2024-10-13 01:46:39.674622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-13 01:46:39.674653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 00:35:54.354 [2024-10-13 01:46:39.674812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-13 01:46:39.674857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 00:35:54.354 [2024-10-13 01:46:39.674943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-13 01:46:39.674970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 00:35:54.354 [2024-10-13 01:46:39.675102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-13 01:46:39.675150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 00:35:54.354 [2024-10-13 01:46:39.675277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-13 01:46:39.675306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 00:35:54.354 [2024-10-13 01:46:39.675392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-13 01:46:39.675420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 00:35:54.354 [2024-10-13 01:46:39.675544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-13 01:46:39.675574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 00:35:54.354 [2024-10-13 01:46:39.675674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-13 01:46:39.675707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 
00:35:54.354 [2024-10-13 01:46:39.675857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-13 01:46:39.675887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 00:35:54.354 [2024-10-13 01:46:39.676014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-13 01:46:39.676042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 00:35:54.354 [2024-10-13 01:46:39.676132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-13 01:46:39.676159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 00:35:54.354 [2024-10-13 01:46:39.676238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-13 01:46:39.676273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 00:35:54.354 [2024-10-13 01:46:39.676420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-13 01:46:39.676448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 00:35:54.354 [2024-10-13 01:46:39.676548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-13 01:46:39.676575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 00:35:54.354 [2024-10-13 01:46:39.676724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-13 01:46:39.676753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 00:35:54.354 [2024-10-13 01:46:39.676840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-13 01:46:39.676867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 00:35:54.354 [2024-10-13 01:46:39.676998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-13 01:46:39.677039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 00:35:54.354 [2024-10-13 01:46:39.677133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-13 01:46:39.677161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 
00:35:54.354 [2024-10-13 01:46:39.677257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-13 01:46:39.677305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 00:35:54.354 [2024-10-13 01:46:39.677437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-13 01:46:39.677466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 00:35:54.354 [2024-10-13 01:46:39.677559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-13 01:46:39.677584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 00:35:54.354 [2024-10-13 01:46:39.677716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-13 01:46:39.677746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 00:35:54.354 [2024-10-13 01:46:39.677876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-13 01:46:39.677931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 00:35:54.354 [2024-10-13 01:46:39.678086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-13 01:46:39.678119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 00:35:54.354 [2024-10-13 01:46:39.678218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-13 01:46:39.678263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 00:35:54.354 [2024-10-13 01:46:39.678389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-13 01:46:39.678418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 00:35:54.354 [2024-10-13 01:46:39.678514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-13 01:46:39.678539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 00:35:54.354 [2024-10-13 01:46:39.678666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-13 01:46:39.678692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 
00:35:54.354 [2024-10-13 01:46:39.678836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-13 01:46:39.678865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 00:35:54.354 [2024-10-13 01:46:39.678965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-13 01:46:39.678994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 00:35:54.354 [2024-10-13 01:46:39.679150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-13 01:46:39.679179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 00:35:54.354 [2024-10-13 01:46:39.679272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-13 01:46:39.679302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 00:35:54.354 [2024-10-13 01:46:39.679421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-13 01:46:39.679447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 00:35:54.354 [2024-10-13 01:46:39.679545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-13 01:46:39.679571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 00:35:54.354 [2024-10-13 01:46:39.679656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-13 01:46:39.679688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 00:35:54.354 [2024-10-13 01:46:39.679809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-13 01:46:39.679836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 00:35:54.354 [2024-10-13 01:46:39.679947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-13 01:46:39.679980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 00:35:54.354 [2024-10-13 01:46:39.680134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-13 01:46:39.680164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 
00:35:54.354 [2024-10-13 01:46:39.680268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-13 01:46:39.680295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.354 qpair failed and we were unable to recover it. 00:35:54.354 [2024-10-13 01:46:39.680414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.354 [2024-10-13 01:46:39.680443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-13 01:46:39.680604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-13 01:46:39.680634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-13 01:46:39.680737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-13 01:46:39.680767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-13 01:46:39.680945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-13 01:46:39.680999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-13 01:46:39.681168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-13 01:46:39.681214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-13 01:46:39.681325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-13 01:46:39.681366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-13 01:46:39.681539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-13 01:46:39.681594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-13 01:46:39.681706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-13 01:46:39.681749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-13 01:46:39.681868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-13 01:46:39.681896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 
00:35:54.355 [2024-10-13 01:46:39.682046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-13 01:46:39.682074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-13 01:46:39.682231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-13 01:46:39.682259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-13 01:46:39.682365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-13 01:46:39.682414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-13 01:46:39.682506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-13 01:46:39.682534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-13 01:46:39.682630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-13 01:46:39.682657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-13 01:46:39.682777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-13 01:46:39.682805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-13 01:46:39.682926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-13 01:46:39.682956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-13 01:46:39.683057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-13 01:46:39.683087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-13 01:46:39.683238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-13 01:46:39.683268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-13 01:46:39.683376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-13 01:46:39.683402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 
00:35:54.355 [2024-10-13 01:46:39.683493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-13 01:46:39.683520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-13 01:46:39.683666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-13 01:46:39.683692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-13 01:46:39.683780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-13 01:46:39.683807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-13 01:46:39.683928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-13 01:46:39.683955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-13 01:46:39.684063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-13 01:46:39.684092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-13 01:46:39.684222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-13 01:46:39.684252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-13 01:46:39.684377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-13 01:46:39.684406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-13 01:46:39.684544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-13 01:46:39.684571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-13 01:46:39.684666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-13 01:46:39.684693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-13 01:46:39.684783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-13 01:46:39.684808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 
00:35:54.355 [2024-10-13 01:46:39.684939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-13 01:46:39.684969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-13 01:46:39.685120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-13 01:46:39.685149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-13 01:46:39.685251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-13 01:46:39.685280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-13 01:46:39.685433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-13 01:46:39.685463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-13 01:46:39.685608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-13 01:46:39.685635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-13 01:46:39.685755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-13 01:46:39.685799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-13 01:46:39.685895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-13 01:46:39.685930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-13 01:46:39.686058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-13 01:46:39.686087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-13 01:46:39.686219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-13 01:46:39.686248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-13 01:46:39.686383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-13 01:46:39.686412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 
00:35:54.355 [2024-10-13 01:46:39.686521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-13 01:46:39.686548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.355 qpair failed and we were unable to recover it. 00:35:54.355 [2024-10-13 01:46:39.686664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.355 [2024-10-13 01:46:39.686699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-13 01:46:39.686847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-13 01:46:39.686873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-13 01:46:39.686984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-13 01:46:39.687014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-13 01:46:39.687144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-13 01:46:39.687173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-13 01:46:39.687303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-13 01:46:39.687332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-13 01:46:39.687454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-13 01:46:39.687492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-13 01:46:39.687599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-13 01:46:39.687626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-13 01:46:39.687742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-13 01:46:39.687768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-13 01:46:39.687880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-13 01:46:39.687906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 
00:35:54.356 [2024-10-13 01:46:39.688057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-13 01:46:39.688087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-13 01:46:39.688210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-13 01:46:39.688236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-13 01:46:39.688361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-13 01:46:39.688388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-13 01:46:39.688531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-13 01:46:39.688559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-13 01:46:39.688652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-13 01:46:39.688678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-13 01:46:39.688790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-13 01:46:39.688819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-13 01:46:39.688924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-13 01:46:39.688954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-13 01:46:39.689037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-13 01:46:39.689066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-13 01:46:39.689218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-13 01:46:39.689247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-13 01:46:39.689373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-13 01:46:39.689415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 
00:35:54.356 [2024-10-13 01:46:39.689503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-13 01:46:39.689529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-13 01:46:39.689607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-13 01:46:39.689632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-13 01:46:39.689748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-13 01:46:39.689775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-13 01:46:39.689865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-13 01:46:39.689896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-13 01:46:39.689997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-13 01:46:39.690028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-13 01:46:39.690187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-13 01:46:39.690219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-13 01:46:39.690357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-13 01:46:39.690390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-13 01:46:39.690535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-13 01:46:39.690562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-13 01:46:39.690681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-13 01:46:39.690708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-13 01:46:39.690821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-13 01:46:39.690847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 
00:35:54.356 [2024-10-13 01:46:39.690961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-13 01:46:39.690987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-13 01:46:39.691098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-13 01:46:39.691124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-13 01:46:39.691238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-13 01:46:39.691267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-13 01:46:39.691393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-13 01:46:39.691422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-13 01:46:39.691521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-13 01:46:39.691570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-13 01:46:39.691693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.356 [2024-10-13 01:46:39.691720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.356 qpair failed and we were unable to recover it. 00:35:54.356 [2024-10-13 01:46:39.691864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-10-13 01:46:39.691907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-10-13 01:46:39.692040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-10-13 01:46:39.692069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-10-13 01:46:39.692191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-10-13 01:46:39.692219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-10-13 01:46:39.692312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-10-13 01:46:39.692341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 
00:35:54.357 [2024-10-13 01:46:39.692486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-10-13 01:46:39.692528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-10-13 01:46:39.692655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-10-13 01:46:39.692683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-10-13 01:46:39.692828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-10-13 01:46:39.692873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-10-13 01:46:39.692964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-10-13 01:46:39.692992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-10-13 01:46:39.693123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-10-13 01:46:39.693168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-10-13 01:46:39.693286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-10-13 01:46:39.693314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-10-13 01:46:39.693433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-10-13 01:46:39.693460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-10-13 01:46:39.693637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-10-13 01:46:39.693681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-10-13 01:46:39.693857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-10-13 01:46:39.693904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-10-13 01:46:39.694037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-10-13 01:46:39.694067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 
00:35:54.357 [2024-10-13 01:46:39.694242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-10-13 01:46:39.694297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-10-13 01:46:39.694442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-10-13 01:46:39.694481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-10-13 01:46:39.694636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-10-13 01:46:39.694664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-10-13 01:46:39.694754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-10-13 01:46:39.694780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-10-13 01:46:39.694893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-10-13 01:46:39.694920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-10-13 01:46:39.695009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-10-13 01:46:39.695035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-10-13 01:46:39.695120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-10-13 01:46:39.695146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-10-13 01:46:39.695228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-10-13 01:46:39.695254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-10-13 01:46:39.695343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-10-13 01:46:39.695372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-10-13 01:46:39.695454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-10-13 01:46:39.695489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 
00:35:54.357 [2024-10-13 01:46:39.695626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-10-13 01:46:39.695654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-10-13 01:46:39.695786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-10-13 01:46:39.695830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-10-13 01:46:39.695992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-10-13 01:46:39.696041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-10-13 01:46:39.696200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-10-13 01:46:39.696245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-10-13 01:46:39.696398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-10-13 01:46:39.696426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-10-13 01:46:39.696523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-10-13 01:46:39.696549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-10-13 01:46:39.696726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-10-13 01:46:39.696757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-10-13 01:46:39.696962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-10-13 01:46:39.696992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-10-13 01:46:39.697141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-10-13 01:46:39.697179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-10-13 01:46:39.697286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-10-13 01:46:39.697312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 
00:35:54.357 [2024-10-13 01:46:39.697455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-10-13 01:46:39.697488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-10-13 01:46:39.697634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-10-13 01:46:39.697661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-10-13 01:46:39.697792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-10-13 01:46:39.697821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-10-13 01:46:39.697970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-10-13 01:46:39.697999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-10-13 01:46:39.698128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-10-13 01:46:39.698157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-10-13 01:46:39.698284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-10-13 01:46:39.698313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-10-13 01:46:39.698450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-10-13 01:46:39.698492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-10-13 01:46:39.698598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-10-13 01:46:39.698624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-10-13 01:46:39.698729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-10-13 01:46:39.698758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-10-13 01:46:39.698912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-10-13 01:46:39.698941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 
00:35:54.358 [2024-10-13 01:46:39.699071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-10-13 01:46:39.699099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-10-13 01:46:39.699189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-10-13 01:46:39.699219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-10-13 01:46:39.699319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-10-13 01:46:39.699364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-10-13 01:46:39.699483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-10-13 01:46:39.699510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-10-13 01:46:39.699631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-10-13 01:46:39.699657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-10-13 01:46:39.699743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-10-13 01:46:39.699787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-10-13 01:46:39.699936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-10-13 01:46:39.699966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-10-13 01:46:39.700102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-10-13 01:46:39.700147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-10-13 01:46:39.700273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-10-13 01:46:39.700302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-10-13 01:46:39.700405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-10-13 01:46:39.700434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 
00:35:54.358 [2024-10-13 01:46:39.700541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-10-13 01:46:39.700568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-10-13 01:46:39.700681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-10-13 01:46:39.700707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-10-13 01:46:39.700837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-10-13 01:46:39.700866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-10-13 01:46:39.700957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-10-13 01:46:39.701002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-10-13 01:46:39.701141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-10-13 01:46:39.701171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-10-13 01:46:39.701292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-10-13 01:46:39.701321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-10-13 01:46:39.701403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-10-13 01:46:39.701431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-10-13 01:46:39.701608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-10-13 01:46:39.701634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-10-13 01:46:39.701752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-10-13 01:46:39.701779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-10-13 01:46:39.701886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-10-13 01:46:39.701912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 
00:35:54.358 [2024-10-13 01:46:39.702024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-10-13 01:46:39.702052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-10-13 01:46:39.702180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-10-13 01:46:39.702209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-10-13 01:46:39.702314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-10-13 01:46:39.702340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-10-13 01:46:39.702521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-10-13 01:46:39.702548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-10-13 01:46:39.702624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-10-13 01:46:39.702655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-10-13 01:46:39.702809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-10-13 01:46:39.702838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-10-13 01:46:39.703032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-10-13 01:46:39.703061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-10-13 01:46:39.703144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-10-13 01:46:39.703174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-10-13 01:46:39.703302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-10-13 01:46:39.703331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-10-13 01:46:39.703435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-10-13 01:46:39.703464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 
00:35:54.358 [2024-10-13 01:46:39.703609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-10-13 01:46:39.703640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-10-13 01:46:39.703730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-10-13 01:46:39.703756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-10-13 01:46:39.703894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-10-13 01:46:39.703920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-10-13 01:46:39.704053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-10-13 01:46:39.704083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-10-13 01:46:39.704207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-10-13 01:46:39.704238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-10-13 01:46:39.704395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-10-13 01:46:39.704423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-10-13 01:46:39.704565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-13 01:46:39.704592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-10-13 01:46:39.704734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-13 01:46:39.704776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-10-13 01:46:39.704935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-13 01:46:39.704964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-10-13 01:46:39.705113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-13 01:46:39.705142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 
00:35:54.359 [2024-10-13 01:46:39.705268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-13 01:46:39.705310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-10-13 01:46:39.705449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-13 01:46:39.705499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-10-13 01:46:39.705638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-13 01:46:39.705665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-10-13 01:46:39.705782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-13 01:46:39.705823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-10-13 01:46:39.705972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-13 01:46:39.706001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-10-13 01:46:39.706128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-13 01:46:39.706157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-10-13 01:46:39.706296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-13 01:46:39.706340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-10-13 01:46:39.706465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-13 01:46:39.706521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-10-13 01:46:39.706632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-13 01:46:39.706658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-10-13 01:46:39.706786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-13 01:46:39.706816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 
00:35:54.359 [2024-10-13 01:46:39.706942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-13 01:46:39.706984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-10-13 01:46:39.707116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-13 01:46:39.707146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-10-13 01:46:39.707287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-13 01:46:39.707314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-10-13 01:46:39.707454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-13 01:46:39.707487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-10-13 01:46:39.707599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-13 01:46:39.707625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-10-13 01:46:39.707743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-13 01:46:39.707769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-10-13 01:46:39.707896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-13 01:46:39.707922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-10-13 01:46:39.708032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-13 01:46:39.708059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-10-13 01:46:39.708174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-13 01:46:39.708200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-10-13 01:46:39.708324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-13 01:46:39.708353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 
00:35:54.359 [2024-10-13 01:46:39.708484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-13 01:46:39.708511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-10-13 01:46:39.708628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-13 01:46:39.708655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-10-13 01:46:39.708763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-13 01:46:39.708792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-10-13 01:46:39.708907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-13 01:46:39.708936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-10-13 01:46:39.709023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-13 01:46:39.709053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-10-13 01:46:39.709181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-13 01:46:39.709210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-10-13 01:46:39.709295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-13 01:46:39.709324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-10-13 01:46:39.709453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-13 01:46:39.709488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-10-13 01:46:39.709617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-13 01:46:39.709643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-10-13 01:46:39.709737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-13 01:46:39.709763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 
00:35:54.359 [2024-10-13 01:46:39.709903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-13 01:46:39.709932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-10-13 01:46:39.710051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-13 01:46:39.710079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-10-13 01:46:39.710211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-13 01:46:39.710240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-10-13 01:46:39.710392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-13 01:46:39.710422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-10-13 01:46:39.710546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-13 01:46:39.710588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-10-13 01:46:39.710740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-13 01:46:39.710769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-10-13 01:46:39.710858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-10-13 01:46:39.710886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-13 01:46:39.710968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-13 01:46:39.710996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-13 01:46:39.711140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-13 01:46:39.711190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-13 01:46:39.711286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-13 01:46:39.711313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 
00:35:54.360 [2024-10-13 01:46:39.711454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-13 01:46:39.711488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-13 01:46:39.711588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-13 01:46:39.711619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-13 01:46:39.711756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-13 01:46:39.711783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-13 01:46:39.711888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-13 01:46:39.711954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-13 01:46:39.712101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-13 01:46:39.712130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-13 01:46:39.712256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-13 01:46:39.712285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-13 01:46:39.712423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-13 01:46:39.712449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-13 01:46:39.712600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-13 01:46:39.712626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-13 01:46:39.712746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-13 01:46:39.712790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-13 01:46:39.712907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-13 01:46:39.712936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 
00:35:54.360 [2024-10-13 01:46:39.713045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-13 01:46:39.713071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-13 01:46:39.713208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-13 01:46:39.713237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-13 01:46:39.713333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-13 01:46:39.713362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-13 01:46:39.713494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-13 01:46:39.713536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-13 01:46:39.713652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-13 01:46:39.713679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-13 01:46:39.713762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-13 01:46:39.713788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-13 01:46:39.713911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-13 01:46:39.713940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-13 01:46:39.714035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-13 01:46:39.714065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-13 01:46:39.714191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-13 01:46:39.714221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-13 01:46:39.714335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-13 01:46:39.714364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 
00:35:54.360 [2024-10-13 01:46:39.714529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-13 01:46:39.714556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-13 01:46:39.714668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-13 01:46:39.714694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-13 01:46:39.714827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-13 01:46:39.714856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-13 01:46:39.714976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-13 01:46:39.715005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-13 01:46:39.715127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-13 01:46:39.715156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-13 01:46:39.715259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-13 01:46:39.715288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-13 01:46:39.715443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-13 01:46:39.715492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-13 01:46:39.715617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-13 01:46:39.715645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-13 01:46:39.715759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-13 01:46:39.715787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-13 01:46:39.715905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-13 01:46:39.715932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 
00:35:54.360 [2024-10-13 01:46:39.716045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-13 01:46:39.716071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-10-13 01:46:39.716186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-10-13 01:46:39.716213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.361 [2024-10-13 01:46:39.716302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-13 01:46:39.716331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 00:35:54.361 [2024-10-13 01:46:39.716481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-13 01:46:39.716509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 00:35:54.361 [2024-10-13 01:46:39.716627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-13 01:46:39.716654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 00:35:54.361 [2024-10-13 01:46:39.716737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-13 01:46:39.716764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 00:35:54.361 [2024-10-13 01:46:39.716907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-13 01:46:39.716934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 00:35:54.361 [2024-10-13 01:46:39.717073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-13 01:46:39.717104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 00:35:54.361 [2024-10-13 01:46:39.717214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-13 01:46:39.717241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 00:35:54.361 [2024-10-13 01:46:39.717362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-13 01:46:39.717390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 
00:35:54.361 [2024-10-13 01:46:39.717509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-13 01:46:39.717538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 00:35:54.361 [2024-10-13 01:46:39.717631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-13 01:46:39.717658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 00:35:54.361 [2024-10-13 01:46:39.717802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-13 01:46:39.717830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 00:35:54.361 [2024-10-13 01:46:39.717967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-13 01:46:39.717994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 00:35:54.361 [2024-10-13 01:46:39.718106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-13 01:46:39.718133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 00:35:54.361 [2024-10-13 01:46:39.718243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-13 01:46:39.718271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 00:35:54.361 [2024-10-13 01:46:39.718393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-13 01:46:39.718421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 00:35:54.361 [2024-10-13 01:46:39.718571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-13 01:46:39.718599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 00:35:54.361 [2024-10-13 01:46:39.718726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-13 01:46:39.718771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 00:35:54.361 [2024-10-13 01:46:39.718935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-13 01:46:39.718978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 
00:35:54.361 [2024-10-13 01:46:39.719121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-13 01:46:39.719147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 00:35:54.361 [2024-10-13 01:46:39.719261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-13 01:46:39.719288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 00:35:54.361 [2024-10-13 01:46:39.719372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-13 01:46:39.719404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 00:35:54.361 [2024-10-13 01:46:39.719517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-13 01:46:39.719547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 00:35:54.361 [2024-10-13 01:46:39.719702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-13 01:46:39.719749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 00:35:54.361 [2024-10-13 01:46:39.719912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-13 01:46:39.719957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 00:35:54.361 [2024-10-13 01:46:39.720098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-13 01:46:39.720124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 00:35:54.361 [2024-10-13 01:46:39.720206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-13 01:46:39.720233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 00:35:54.361 [2024-10-13 01:46:39.720340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-13 01:46:39.720367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 00:35:54.361 [2024-10-13 01:46:39.720487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-13 01:46:39.720513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 
00:35:54.361 [2024-10-13 01:46:39.720646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-13 01:46:39.720689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 00:35:54.361 [2024-10-13 01:46:39.720804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-13 01:46:39.720831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 00:35:54.361 [2024-10-13 01:46:39.720945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-13 01:46:39.720983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 00:35:54.361 [2024-10-13 01:46:39.721069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-13 01:46:39.721094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 00:35:54.361 [2024-10-13 01:46:39.721238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-13 01:46:39.721267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 00:35:54.361 [2024-10-13 01:46:39.721388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-13 01:46:39.721415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 00:35:54.361 [2024-10-13 01:46:39.721561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-13 01:46:39.721589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 00:35:54.361 [2024-10-13 01:46:39.721701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-13 01:46:39.721728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 00:35:54.361 [2024-10-13 01:46:39.721818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-13 01:46:39.721845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 00:35:54.361 [2024-10-13 01:46:39.721971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-13 01:46:39.721998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 
00:35:54.361 [2024-10-13 01:46:39.722083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.361 [2024-10-13 01:46:39.722109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.361 qpair failed and we were unable to recover it. 00:35:54.361 [2024-10-13 01:46:39.722257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.362 [2024-10-13 01:46:39.722284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.362 qpair failed and we were unable to recover it. 00:35:54.362 [2024-10-13 01:46:39.722403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.362 [2024-10-13 01:46:39.722431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.362 qpair failed and we were unable to recover it. 00:35:54.362 [2024-10-13 01:46:39.722542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.362 [2024-10-13 01:46:39.722572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.362 qpair failed and we were unable to recover it. 00:35:54.362 [2024-10-13 01:46:39.722698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.362 [2024-10-13 01:46:39.722726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.362 qpair failed and we were unable to recover it. 00:35:54.362 [2024-10-13 01:46:39.722839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.362 [2024-10-13 01:46:39.722866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.362 qpair failed and we were unable to recover it. 00:35:54.362 [2024-10-13 01:46:39.722989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.362 [2024-10-13 01:46:39.723016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.362 qpair failed and we were unable to recover it. 00:35:54.362 [2024-10-13 01:46:39.723138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.362 [2024-10-13 01:46:39.723164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.362 qpair failed and we were unable to recover it. 00:35:54.362 [2024-10-13 01:46:39.723250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.362 [2024-10-13 01:46:39.723277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.362 qpair failed and we were unable to recover it. 00:35:54.362 [2024-10-13 01:46:39.723374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.362 [2024-10-13 01:46:39.723401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.362 qpair failed and we were unable to recover it. 
00:35:54.362 [2024-10-13 01:46:39.723517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.362 [2024-10-13 01:46:39.723545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.362 qpair failed and we were unable to recover it. 00:35:54.362 [2024-10-13 01:46:39.723657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.362 [2024-10-13 01:46:39.723684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.362 qpair failed and we were unable to recover it. 00:35:54.362 [2024-10-13 01:46:39.723797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.362 [2024-10-13 01:46:39.723825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.362 qpair failed and we were unable to recover it. 00:35:54.362 [2024-10-13 01:46:39.723945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.362 [2024-10-13 01:46:39.723972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.362 qpair failed and we were unable to recover it. 00:35:54.362 [2024-10-13 01:46:39.724113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.362 [2024-10-13 01:46:39.724140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.362 qpair failed and we were unable to recover it. 00:35:54.362 [2024-10-13 01:46:39.724253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.362 [2024-10-13 01:46:39.724280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.362 qpair failed and we were unable to recover it. 00:35:54.362 [2024-10-13 01:46:39.724397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.362 [2024-10-13 01:46:39.724425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.362 qpair failed and we were unable to recover it. 00:35:54.362 [2024-10-13 01:46:39.724520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.362 [2024-10-13 01:46:39.724548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.362 qpair failed and we were unable to recover it. 00:35:54.362 [2024-10-13 01:46:39.724680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.362 [2024-10-13 01:46:39.724724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.362 qpair failed and we were unable to recover it. 00:35:54.362 [2024-10-13 01:46:39.724815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.362 [2024-10-13 01:46:39.724843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.362 qpair failed and we were unable to recover it. 
00:35:54.362 [2024-10-13 01:46:39.724933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.362 [2024-10-13 01:46:39.724961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.362 qpair failed and we were unable to recover it. 00:35:54.362 [2024-10-13 01:46:39.725043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.362 [2024-10-13 01:46:39.725070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.362 qpair failed and we were unable to recover it. 00:35:54.362 [2024-10-13 01:46:39.725210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.362 [2024-10-13 01:46:39.725242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.362 qpair failed and we were unable to recover it. 00:35:54.362 [2024-10-13 01:46:39.725328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.362 [2024-10-13 01:46:39.725355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.362 qpair failed and we were unable to recover it. 00:35:54.362 [2024-10-13 01:46:39.725467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.362 [2024-10-13 01:46:39.725509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.362 qpair failed and we were unable to recover it. 00:35:54.362 [2024-10-13 01:46:39.725594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.362 [2024-10-13 01:46:39.725621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.362 qpair failed and we were unable to recover it. 00:35:54.362 [2024-10-13 01:46:39.725734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.362 [2024-10-13 01:46:39.725761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.362 qpair failed and we were unable to recover it. 00:35:54.362 [2024-10-13 01:46:39.725877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.362 [2024-10-13 01:46:39.725905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.362 qpair failed and we were unable to recover it. 00:35:54.362 [2024-10-13 01:46:39.726045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.362 [2024-10-13 01:46:39.726073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.362 qpair failed and we were unable to recover it. 00:35:54.362 [2024-10-13 01:46:39.726164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.362 [2024-10-13 01:46:39.726192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.362 qpair failed and we were unable to recover it. 
00:35:54.362 [2024-10-13 01:46:39.726285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.362 [2024-10-13 01:46:39.726312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.362 qpair failed and we were unable to recover it. 00:35:54.362 [2024-10-13 01:46:39.726426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.362 [2024-10-13 01:46:39.726453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.362 qpair failed and we were unable to recover it. 00:35:54.362 [2024-10-13 01:46:39.726596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.362 [2024-10-13 01:46:39.726636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.362 qpair failed and we were unable to recover it. 00:35:54.362 [2024-10-13 01:46:39.726732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.362 [2024-10-13 01:46:39.726760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.362 qpair failed and we were unable to recover it. 00:35:54.362 [2024-10-13 01:46:39.726898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.362 [2024-10-13 01:46:39.726925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.362 qpair failed and we were unable to recover it. 00:35:54.362 [2024-10-13 01:46:39.727037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.362 [2024-10-13 01:46:39.727063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.362 qpair failed and we were unable to recover it. 00:35:54.362 [2024-10-13 01:46:39.727213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.362 [2024-10-13 01:46:39.727239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.362 qpair failed and we were unable to recover it. 00:35:54.362 [2024-10-13 01:46:39.727350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.362 [2024-10-13 01:46:39.727376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.362 qpair failed and we were unable to recover it. 00:35:54.362 [2024-10-13 01:46:39.727465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.362 [2024-10-13 01:46:39.727513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.362 qpair failed and we were unable to recover it. 00:35:54.362 [2024-10-13 01:46:39.727608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.362 [2024-10-13 01:46:39.727637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.362 qpair failed and we were unable to recover it. 
00:35:54.362 [2024-10-13 01:46:39.727796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.362 [2024-10-13 01:46:39.727827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.362 qpair failed and we were unable to recover it. 00:35:54.362 [2024-10-13 01:46:39.728009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.363 [2024-10-13 01:46:39.728053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.363 qpair failed and we were unable to recover it. 00:35:54.363 [2024-10-13 01:46:39.728167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.363 [2024-10-13 01:46:39.728194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.363 qpair failed and we were unable to recover it. 00:35:54.363 [2024-10-13 01:46:39.728312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.363 [2024-10-13 01:46:39.728339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.363 qpair failed and we were unable to recover it. 00:35:54.363 [2024-10-13 01:46:39.728429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.363 [2024-10-13 01:46:39.728455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.363 qpair failed and we were unable to recover it. 00:35:54.363 [2024-10-13 01:46:39.728577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.363 [2024-10-13 01:46:39.728605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.363 qpair failed and we were unable to recover it. 00:35:54.363 [2024-10-13 01:46:39.728717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.363 [2024-10-13 01:46:39.728744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.363 qpair failed and we were unable to recover it. 00:35:54.363 [2024-10-13 01:46:39.728865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.363 [2024-10-13 01:46:39.728892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.363 qpair failed and we were unable to recover it. 00:35:54.363 [2024-10-13 01:46:39.729033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.363 [2024-10-13 01:46:39.729060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.363 qpair failed and we were unable to recover it. 00:35:54.363 [2024-10-13 01:46:39.729208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.363 [2024-10-13 01:46:39.729235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.363 qpair failed and we were unable to recover it. 
00:35:54.363 [2024-10-13 01:46:39.729355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.363 [2024-10-13 01:46:39.729382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.363 qpair failed and we were unable to recover it. 00:35:54.363 [2024-10-13 01:46:39.729501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.363 [2024-10-13 01:46:39.729528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.363 qpair failed and we were unable to recover it. 00:35:54.363 [2024-10-13 01:46:39.729613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.363 [2024-10-13 01:46:39.729640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.363 qpair failed and we were unable to recover it. 00:35:54.363 [2024-10-13 01:46:39.729752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.363 [2024-10-13 01:46:39.729778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.363 qpair failed and we were unable to recover it. 00:35:54.363 [2024-10-13 01:46:39.729896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.363 [2024-10-13 01:46:39.729923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.363 qpair failed and we were unable to recover it. 00:35:54.363 [2024-10-13 01:46:39.730035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.363 [2024-10-13 01:46:39.730061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.363 qpair failed and we were unable to recover it. 00:35:54.363 [2024-10-13 01:46:39.730156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.363 [2024-10-13 01:46:39.730184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.363 qpair failed and we were unable to recover it. 00:35:54.363 [2024-10-13 01:46:39.730325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.363 [2024-10-13 01:46:39.730352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.363 qpair failed and we were unable to recover it. 00:35:54.363 [2024-10-13 01:46:39.730497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.363 [2024-10-13 01:46:39.730525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.363 qpair failed and we were unable to recover it. 00:35:54.363 [2024-10-13 01:46:39.730663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.363 [2024-10-13 01:46:39.730690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.363 qpair failed and we were unable to recover it. 
00:35:54.363 [2024-10-13 01:46:39.730809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.363 [2024-10-13 01:46:39.730836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.363 qpair failed and we were unable to recover it. 00:35:54.363 [2024-10-13 01:46:39.730932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.363 [2024-10-13 01:46:39.730959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.363 qpair failed and we were unable to recover it. 00:35:54.363 [2024-10-13 01:46:39.731101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.363 [2024-10-13 01:46:39.731133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.363 qpair failed and we were unable to recover it. 00:35:54.363 [2024-10-13 01:46:39.731210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.363 [2024-10-13 01:46:39.731236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.363 qpair failed and we were unable to recover it. 00:35:54.363 [2024-10-13 01:46:39.731328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.363 [2024-10-13 01:46:39.731355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.363 qpair failed and we were unable to recover it. 00:35:54.363 [2024-10-13 01:46:39.731446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.363 [2024-10-13 01:46:39.731482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.363 qpair failed and we were unable to recover it. 00:35:54.363 [2024-10-13 01:46:39.731597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.363 [2024-10-13 01:46:39.731624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.363 qpair failed and we were unable to recover it. 00:35:54.363 [2024-10-13 01:46:39.731733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.363 [2024-10-13 01:46:39.731760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.363 qpair failed and we were unable to recover it. 00:35:54.363 [2024-10-13 01:46:39.731876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.363 [2024-10-13 01:46:39.731902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.363 qpair failed and we were unable to recover it. 00:35:54.363 [2024-10-13 01:46:39.732021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.363 [2024-10-13 01:46:39.732048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.363 qpair failed and we were unable to recover it. 
00:35:54.363 [2024-10-13 01:46:39.732165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.363 [2024-10-13 01:46:39.732192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.363 qpair failed and we were unable to recover it. 00:35:54.363 [2024-10-13 01:46:39.732337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.363 [2024-10-13 01:46:39.732364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.363 qpair failed and we were unable to recover it. 00:35:54.363 [2024-10-13 01:46:39.732493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.363 [2024-10-13 01:46:39.732521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.363 qpair failed and we were unable to recover it. 00:35:54.363 [2024-10-13 01:46:39.732640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.363 [2024-10-13 01:46:39.732667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.363 qpair failed and we were unable to recover it. 00:35:54.363 [2024-10-13 01:46:39.732781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.363 [2024-10-13 01:46:39.732808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.363 qpair failed and we were unable to recover it. 00:35:54.363 [2024-10-13 01:46:39.732922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.363 [2024-10-13 01:46:39.732950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.363 qpair failed and we were unable to recover it. 00:35:54.363 [2024-10-13 01:46:39.733098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.363 [2024-10-13 01:46:39.733125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.363 qpair failed and we were unable to recover it. 00:35:54.363 [2024-10-13 01:46:39.733244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.363 [2024-10-13 01:46:39.733272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.363 qpair failed and we were unable to recover it. 00:35:54.363 [2024-10-13 01:46:39.733394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.363 [2024-10-13 01:46:39.733422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.363 qpair failed and we were unable to recover it. 00:35:54.363 [2024-10-13 01:46:39.733578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.363 [2024-10-13 01:46:39.733622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.363 qpair failed and we were unable to recover it. 
00:35:54.363 [2024-10-13 01:46:39.733734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.363 [2024-10-13 01:46:39.733761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.363 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-13 01:46:39.733841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-13 01:46:39.733868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-13 01:46:39.733988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-13 01:46:39.734015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-13 01:46:39.734110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-13 01:46:39.734137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-13 01:46:39.734250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-13 01:46:39.734277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-13 01:46:39.734419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-13 01:46:39.734446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-13 01:46:39.734615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-13 01:46:39.734656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-13 01:46:39.734746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-13 01:46:39.734775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-13 01:46:39.734891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-13 01:46:39.734918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-13 01:46:39.735036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-13 01:46:39.735064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 
00:35:54.364 [2024-10-13 01:46:39.735184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-13 01:46:39.735211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-13 01:46:39.735330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-13 01:46:39.735357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-13 01:46:39.735481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-13 01:46:39.735509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-13 01:46:39.735626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-13 01:46:39.735654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-13 01:46:39.735788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-13 01:46:39.735817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-13 01:46:39.735968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-13 01:46:39.735997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-13 01:46:39.736122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-13 01:46:39.736151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-13 01:46:39.736299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-13 01:46:39.736328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-13 01:46:39.736437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-13 01:46:39.736466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-13 01:46:39.736599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-13 01:46:39.736627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 
00:35:54.364 [2024-10-13 01:46:39.736782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-13 01:46:39.736826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-13 01:46:39.736951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-13 01:46:39.736982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-13 01:46:39.737109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-13 01:46:39.737145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-13 01:46:39.737275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-13 01:46:39.737318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-13 01:46:39.737397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-13 01:46:39.737424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-13 01:46:39.737579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-13 01:46:39.737606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-13 01:46:39.737694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-13 01:46:39.737721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-13 01:46:39.737864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-13 01:46:39.737890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-13 01:46:39.738020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-13 01:46:39.738050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-13 01:46:39.738170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-13 01:46:39.738199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 
00:35:54.364 [2024-10-13 01:46:39.738324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-13 01:46:39.738353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-13 01:46:39.738441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-13 01:46:39.738479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-13 01:46:39.738611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-13 01:46:39.738640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-13 01:46:39.738743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-13 01:46:39.738787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-13 01:46:39.738874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-13 01:46:39.738901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-13 01:46:39.739027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-13 01:46:39.739073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-13 01:46:39.739190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-13 01:46:39.739218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-13 01:46:39.739332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-13 01:46:39.739359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-13 01:46:39.739500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.364 [2024-10-13 01:46:39.739528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.364 qpair failed and we were unable to recover it. 00:35:54.364 [2024-10-13 01:46:39.739639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-13 01:46:39.739666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 
00:35:54.365 [2024-10-13 01:46:39.739780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-13 01:46:39.739807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-13 01:46:39.739921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-13 01:46:39.739948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-13 01:46:39.740106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-13 01:46:39.740147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-13 01:46:39.740273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-13 01:46:39.740305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-13 01:46:39.740401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-13 01:46:39.740429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-13 01:46:39.740528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-13 01:46:39.740555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-13 01:46:39.740670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-13 01:46:39.740714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-13 01:46:39.740815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-13 01:46:39.740844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-13 01:46:39.740947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-13 01:46:39.740975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-13 01:46:39.741071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-13 01:46:39.741105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 
00:35:54.365 [2024-10-13 01:46:39.741198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-13 01:46:39.741227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-13 01:46:39.741351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-13 01:46:39.741380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-13 01:46:39.741521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-13 01:46:39.741550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-13 01:46:39.741671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-13 01:46:39.741699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-13 01:46:39.741804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-13 01:46:39.741833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-13 01:46:39.741958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-13 01:46:39.741988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-13 01:46:39.742085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-13 01:46:39.742114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-13 01:46:39.742198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-13 01:46:39.742227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-13 01:46:39.742369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-13 01:46:39.742396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-13 01:46:39.742512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-13 01:46:39.742539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 
00:35:54.365 [2024-10-13 01:46:39.742655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-13 01:46:39.742681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-13 01:46:39.742836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-13 01:46:39.742866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-13 01:46:39.743051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-13 01:46:39.743080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-13 01:46:39.743205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-13 01:46:39.743249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-13 01:46:39.743406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-13 01:46:39.743437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-13 01:46:39.743612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-13 01:46:39.743640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-13 01:46:39.743747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-13 01:46:39.743771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-13 01:46:39.743854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-13 01:46:39.743898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-13 01:46:39.744024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-13 01:46:39.744057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-13 01:46:39.744196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-13 01:46:39.744247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 
00:35:54.365 [2024-10-13 01:46:39.744377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-13 01:46:39.744404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-13 01:46:39.744516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-13 01:46:39.744542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-13 01:46:39.744680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-13 01:46:39.744707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-13 01:46:39.744794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-13 01:46:39.744838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-13 01:46:39.744966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.365 [2024-10-13 01:46:39.745008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.365 qpair failed and we were unable to recover it. 00:35:54.365 [2024-10-13 01:46:39.745119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-10-13 01:46:39.745145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 00:35:54.366 [2024-10-13 01:46:39.745286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-10-13 01:46:39.745317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 00:35:54.366 [2024-10-13 01:46:39.745468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-10-13 01:46:39.745500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 00:35:54.366 [2024-10-13 01:46:39.745593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-10-13 01:46:39.745619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 00:35:54.366 [2024-10-13 01:46:39.745735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-10-13 01:46:39.745761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 
00:35:54.366 [2024-10-13 01:46:39.745887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-10-13 01:46:39.745916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 00:35:54.366 [2024-10-13 01:46:39.746017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-10-13 01:46:39.746050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 00:35:54.366 [2024-10-13 01:46:39.746154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-10-13 01:46:39.746181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 00:35:54.366 [2024-10-13 01:46:39.746322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-10-13 01:46:39.746351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 00:35:54.366 [2024-10-13 01:46:39.746496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-10-13 01:46:39.746553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 00:35:54.366 [2024-10-13 01:46:39.746679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-10-13 01:46:39.746708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 00:35:54.366 [2024-10-13 01:46:39.746813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-10-13 01:46:39.746856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 00:35:54.366 [2024-10-13 01:46:39.746966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-10-13 01:46:39.746995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 00:35:54.366 [2024-10-13 01:46:39.747157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-10-13 01:46:39.747187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 00:35:54.366 [2024-10-13 01:46:39.747312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-10-13 01:46:39.747341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 
00:35:54.366 [2024-10-13 01:46:39.747503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-10-13 01:46:39.747530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 00:35:54.366 [2024-10-13 01:46:39.747671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-10-13 01:46:39.747697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 00:35:54.366 [2024-10-13 01:46:39.747837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-10-13 01:46:39.747867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 00:35:54.366 [2024-10-13 01:46:39.747985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-10-13 01:46:39.748014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 00:35:54.366 [2024-10-13 01:46:39.748124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-10-13 01:46:39.748165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 00:35:54.366 [2024-10-13 01:46:39.748300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-10-13 01:46:39.748329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 00:35:54.366 [2024-10-13 01:46:39.748455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-10-13 01:46:39.748506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 00:35:54.366 [2024-10-13 01:46:39.748628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-10-13 01:46:39.748653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 00:35:54.366 [2024-10-13 01:46:39.748767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-10-13 01:46:39.748793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 00:35:54.366 [2024-10-13 01:46:39.748933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-10-13 01:46:39.748977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 
00:35:54.366 [2024-10-13 01:46:39.749075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-10-13 01:46:39.749103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 00:35:54.366 [2024-10-13 01:46:39.749228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-10-13 01:46:39.749257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 00:35:54.366 [2024-10-13 01:46:39.749390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-10-13 01:46:39.749416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 00:35:54.366 [2024-10-13 01:46:39.749537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-10-13 01:46:39.749569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 00:35:54.366 [2024-10-13 01:46:39.749663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-10-13 01:46:39.749690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 00:35:54.366 [2024-10-13 01:46:39.749782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-10-13 01:46:39.749811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 00:35:54.366 [2024-10-13 01:46:39.749926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-10-13 01:46:39.749955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 00:35:54.366 [2024-10-13 01:46:39.750072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-10-13 01:46:39.750101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 00:35:54.366 [2024-10-13 01:46:39.750202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-10-13 01:46:39.750231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 00:35:54.366 [2024-10-13 01:46:39.750393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-10-13 01:46:39.750422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 
00:35:54.366 [2024-10-13 01:46:39.750613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-10-13 01:46:39.750654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 00:35:54.366 [2024-10-13 01:46:39.750747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-10-13 01:46:39.750776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 00:35:54.366 [2024-10-13 01:46:39.750912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-10-13 01:46:39.750956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 00:35:54.366 [2024-10-13 01:46:39.751073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-10-13 01:46:39.751118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 00:35:54.366 [2024-10-13 01:46:39.751251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-10-13 01:46:39.751296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 00:35:54.366 [2024-10-13 01:46:39.751438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-10-13 01:46:39.751465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-10-13 01:46:39.751558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-10-13 01:46:39.751587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-10-13 01:46:39.751725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-10-13 01:46:39.751755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-10-13 01:46:39.751902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-10-13 01:46:39.751946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-10-13 01:46:39.752086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-10-13 01:46:39.752130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 
00:35:54.367 [2024-10-13 01:46:39.752252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-10-13 01:46:39.752279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-10-13 01:46:39.752371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-10-13 01:46:39.752398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-10-13 01:46:39.752510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-10-13 01:46:39.752551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-10-13 01:46:39.752650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-10-13 01:46:39.752678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-10-13 01:46:39.752769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-10-13 01:46:39.752796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-10-13 01:46:39.752906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-10-13 01:46:39.752932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-10-13 01:46:39.753051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-10-13 01:46:39.753077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-10-13 01:46:39.753167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-10-13 01:46:39.753193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-10-13 01:46:39.753320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-10-13 01:46:39.753346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-10-13 01:46:39.753469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-10-13 01:46:39.753503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 
00:35:54.367 [2024-10-13 01:46:39.753585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-10-13 01:46:39.753619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-10-13 01:46:39.753740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-10-13 01:46:39.753767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-10-13 01:46:39.753852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-10-13 01:46:39.753880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-10-13 01:46:39.754021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-10-13 01:46:39.754048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-10-13 01:46:39.754164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-10-13 01:46:39.754191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-10-13 01:46:39.754302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-10-13 01:46:39.754330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-10-13 01:46:39.754445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-10-13 01:46:39.754482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-10-13 01:46:39.754566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-10-13 01:46:39.754593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-10-13 01:46:39.754682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-10-13 01:46:39.754711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-10-13 01:46:39.754877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-10-13 01:46:39.754922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 
00:35:54.367 [2024-10-13 01:46:39.755032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-10-13 01:46:39.755059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-10-13 01:46:39.755137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-10-13 01:46:39.755164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-10-13 01:46:39.755279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-10-13 01:46:39.755308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-10-13 01:46:39.755402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-10-13 01:46:39.755428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-10-13 01:46:39.755554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-10-13 01:46:39.755581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-10-13 01:46:39.755673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-10-13 01:46:39.755699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-10-13 01:46:39.755886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-10-13 01:46:39.755946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-10-13 01:46:39.756041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-10-13 01:46:39.756070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-10-13 01:46:39.756175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-10-13 01:46:39.756204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-10-13 01:46:39.756332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-10-13 01:46:39.756360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 
00:35:54.367 [2024-10-13 01:46:39.756497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-10-13 01:46:39.756524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-10-13 01:46:39.756687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-10-13 01:46:39.756716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-10-13 01:46:39.756801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-10-13 01:46:39.756831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-10-13 01:46:39.756934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-10-13 01:46:39.756963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-10-13 01:46:39.757082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-10-13 01:46:39.757112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-10-13 01:46:39.757240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-10-13 01:46:39.757269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-10-13 01:46:39.757411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-10-13 01:46:39.757440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-10-13 01:46:39.757596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-10-13 01:46:39.757623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-10-13 01:46:39.757737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-10-13 01:46:39.757784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-10-13 01:46:39.757913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-10-13 01:46:39.757959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 
00:35:54.368 [2024-10-13 01:46:39.758092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-10-13 01:46:39.758136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-10-13 01:46:39.758252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-10-13 01:46:39.758279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-10-13 01:46:39.758400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-10-13 01:46:39.758427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-10-13 01:46:39.758557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-10-13 01:46:39.758584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-10-13 01:46:39.758726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-10-13 01:46:39.758753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-10-13 01:46:39.758868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-10-13 01:46:39.758895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-10-13 01:46:39.759045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-10-13 01:46:39.759073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-10-13 01:46:39.759189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-10-13 01:46:39.759216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-10-13 01:46:39.759300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-10-13 01:46:39.759326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-10-13 01:46:39.759408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-10-13 01:46:39.759434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 
00:35:54.368 [2024-10-13 01:46:39.759600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-10-13 01:46:39.759630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-10-13 01:46:39.759734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-10-13 01:46:39.759763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-10-13 01:46:39.759888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-10-13 01:46:39.759917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-10-13 01:46:39.760038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-10-13 01:46:39.760067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-10-13 01:46:39.760195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-10-13 01:46:39.760223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-10-13 01:46:39.760318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-10-13 01:46:39.760348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-10-13 01:46:39.760435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-10-13 01:46:39.760465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-10-13 01:46:39.760604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-10-13 01:46:39.760648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-10-13 01:46:39.760825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-10-13 01:46:39.760853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-10-13 01:46:39.760982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-10-13 01:46:39.761012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 
00:35:54.368 [2024-10-13 01:46:39.761143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-10-13 01:46:39.761187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-10-13 01:46:39.761313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-10-13 01:46:39.761342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-10-13 01:46:39.761505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-10-13 01:46:39.761533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-10-13 01:46:39.761627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-10-13 01:46:39.761653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-10-13 01:46:39.761752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-10-13 01:46:39.761810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-10-13 01:46:39.761951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-10-13 01:46:39.761982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-10-13 01:46:39.762112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-10-13 01:46:39.762142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-10-13 01:46:39.762296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-10-13 01:46:39.762326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-10-13 01:46:39.762447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-10-13 01:46:39.762483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-10-13 01:46:39.762598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-10-13 01:46:39.762625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 
00:35:54.368 [2024-10-13 01:46:39.762768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-13 01:46:39.762794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-13 01:46:39.762889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-13 01:46:39.762934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-13 01:46:39.763072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-13 01:46:39.763103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-13 01:46:39.763226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-13 01:46:39.763254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-13 01:46:39.763362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-13 01:46:39.763388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-13 01:46:39.763483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-13 01:46:39.763510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-13 01:46:39.763624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-13 01:46:39.763650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-13 01:46:39.763774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-13 01:46:39.763803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-13 01:46:39.763933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-13 01:46:39.763962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-13 01:46:39.764074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-13 01:46:39.764100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 
00:35:54.369 [2024-10-13 01:46:39.764272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-13 01:46:39.764303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-13 01:46:39.764429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-13 01:46:39.764459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-13 01:46:39.764606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-13 01:46:39.764633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-13 01:46:39.764746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-13 01:46:39.764789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-13 01:46:39.764886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-13 01:46:39.764916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-13 01:46:39.765036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-13 01:46:39.765065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-13 01:46:39.765260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-13 01:46:39.765290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-13 01:46:39.765405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-13 01:46:39.765435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-13 01:46:39.765578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-13 01:46:39.765605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-13 01:46:39.765690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-13 01:46:39.765717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 
00:35:54.369 [2024-10-13 01:46:39.765823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-13 01:46:39.765852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-13 01:46:39.765944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-13 01:46:39.765973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-13 01:46:39.766096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-13 01:46:39.766126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-13 01:46:39.766228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-13 01:46:39.766259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-13 01:46:39.766408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-13 01:46:39.766450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-13 01:46:39.766588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-13 01:46:39.766617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-13 01:46:39.766727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-13 01:46:39.766758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-13 01:46:39.766907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-13 01:46:39.766952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-13 01:46:39.767085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-13 01:46:39.767116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-13 01:46:39.767219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-13 01:46:39.767246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 
00:35:54.369 [2024-10-13 01:46:39.767389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-13 01:46:39.767416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-13 01:46:39.767551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-13 01:46:39.767583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-13 01:46:39.767682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-10-13 01:46:39.767708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-10-13 01:46:39.767848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-13 01:46:39.767891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-13 01:46:39.768098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-13 01:46:39.768156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-13 01:46:39.768269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-13 01:46:39.768297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-13 01:46:39.768418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-13 01:46:39.768446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-13 01:46:39.768577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-13 01:46:39.768623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-13 01:46:39.768746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-13 01:46:39.768776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-13 01:46:39.768948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-13 01:46:39.769013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 
00:35:54.370 [2024-10-13 01:46:39.769152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-13 01:46:39.769179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-13 01:46:39.769296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-13 01:46:39.769324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-13 01:46:39.769479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-13 01:46:39.769507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-13 01:46:39.769673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-13 01:46:39.769718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-13 01:46:39.769834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-13 01:46:39.769863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-13 01:46:39.769957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-13 01:46:39.769984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-13 01:46:39.770102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-13 01:46:39.770129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-13 01:46:39.770247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-13 01:46:39.770274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-13 01:46:39.770424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-13 01:46:39.770451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-13 01:46:39.770575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-13 01:46:39.770602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 
00:35:54.370 [2024-10-13 01:46:39.770717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-13 01:46:39.770744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-13 01:46:39.770910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-13 01:46:39.770939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-13 01:46:39.771095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-13 01:46:39.771125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-13 01:46:39.771272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-13 01:46:39.771302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-13 01:46:39.771432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-13 01:46:39.771462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-13 01:46:39.771572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-13 01:46:39.771598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-13 01:46:39.771729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-13 01:46:39.771760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-13 01:46:39.771860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-13 01:46:39.771890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-13 01:46:39.772009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-13 01:46:39.772038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-13 01:46:39.772154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-13 01:46:39.772191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 
00:35:54.370 [2024-10-13 01:46:39.772300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-13 01:46:39.772327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-13 01:46:39.772447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-13 01:46:39.772479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-13 01:46:39.772581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-13 01:46:39.772621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-13 01:46:39.772739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-13 01:46:39.772784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-13 01:46:39.772891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-13 01:46:39.772918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-13 01:46:39.773029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-13 01:46:39.773059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-13 01:46:39.773189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-13 01:46:39.773218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-13 01:46:39.773324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-13 01:46:39.773366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-13 01:46:39.773481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-13 01:46:39.773508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-13 01:46:39.773588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-13 01:46:39.773614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 
00:35:54.370 [2024-10-13 01:46:39.773728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-13 01:46:39.773755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-13 01:46:39.773868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-13 01:46:39.773897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-10-13 01:46:39.774053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-10-13 01:46:39.774081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.371 [2024-10-13 01:46:39.774173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-10-13 01:46:39.774202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-10-13 01:46:39.774295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-10-13 01:46:39.774326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-10-13 01:46:39.774425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-10-13 01:46:39.774455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-10-13 01:46:39.774599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-10-13 01:46:39.774625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-10-13 01:46:39.774737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-10-13 01:46:39.774782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-10-13 01:46:39.774904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-10-13 01:46:39.774934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-10-13 01:46:39.775060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-10-13 01:46:39.775089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 
00:35:54.371 [2024-10-13 01:46:39.775244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-10-13 01:46:39.775274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-10-13 01:46:39.775423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-10-13 01:46:39.775452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-10-13 01:46:39.775586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-10-13 01:46:39.775612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-10-13 01:46:39.775729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-10-13 01:46:39.775755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-10-13 01:46:39.775858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-10-13 01:46:39.775887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-10-13 01:46:39.776041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-10-13 01:46:39.776071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-10-13 01:46:39.776189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-10-13 01:46:39.776221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-10-13 01:46:39.776353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-10-13 01:46:39.776380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-10-13 01:46:39.776506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-10-13 01:46:39.776533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-10-13 01:46:39.776643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-10-13 01:46:39.776668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 
00:35:54.371 [2024-10-13 01:46:39.776811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-10-13 01:46:39.776837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-10-13 01:46:39.777009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-10-13 01:46:39.777051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-10-13 01:46:39.777241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-10-13 01:46:39.777270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-10-13 01:46:39.777396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-10-13 01:46:39.777425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-10-13 01:46:39.777536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-10-13 01:46:39.777564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-10-13 01:46:39.777660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-10-13 01:46:39.777686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-10-13 01:46:39.777788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-10-13 01:46:39.777817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-10-13 01:46:39.777915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-10-13 01:46:39.777945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-10-13 01:46:39.778067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-10-13 01:46:39.778097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-10-13 01:46:39.778229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-10-13 01:46:39.778258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 
00:35:54.371 [2024-10-13 01:46:39.778359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-10-13 01:46:39.778386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-10-13 01:46:39.778474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-10-13 01:46:39.778506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-10-13 01:46:39.778618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-10-13 01:46:39.778645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-10-13 01:46:39.778739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-10-13 01:46:39.778784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-10-13 01:46:39.778901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-10-13 01:46:39.778943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-10-13 01:46:39.779094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-10-13 01:46:39.779124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-10-13 01:46:39.779250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-10-13 01:46:39.779281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-10-13 01:46:39.779436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-10-13 01:46:39.779466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-10-13 01:46:39.779606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-10-13 01:46:39.779633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-10-13 01:46:39.779789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-10-13 01:46:39.779819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 
00:35:54.371 [2024-10-13 01:46:39.779967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-10-13 01:46:39.780033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-10-13 01:46:39.780221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-10-13 01:46:39.780247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-10-13 01:46:39.780366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-10-13 01:46:39.780393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.372 [2024-10-13 01:46:39.780511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-10-13 01:46:39.780551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-10-13 01:46:39.780654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-10-13 01:46:39.780678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-10-13 01:46:39.780815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-10-13 01:46:39.780858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-10-13 01:46:39.780981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-10-13 01:46:39.781075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-10-13 01:46:39.781199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-10-13 01:46:39.781239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-10-13 01:46:39.781348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-10-13 01:46:39.781380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-10-13 01:46:39.781503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-10-13 01:46:39.781553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 
00:35:54.372 [2024-10-13 01:46:39.781654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-10-13 01:46:39.781680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-10-13 01:46:39.781771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-10-13 01:46:39.781809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-10-13 01:46:39.781905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-10-13 01:46:39.781931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-10-13 01:46:39.782040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-10-13 01:46:39.782070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-10-13 01:46:39.782221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-10-13 01:46:39.782250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-10-13 01:46:39.782368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-10-13 01:46:39.782397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-10-13 01:46:39.782543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-10-13 01:46:39.782570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-10-13 01:46:39.782665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-10-13 01:46:39.782692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-10-13 01:46:39.782849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-10-13 01:46:39.782884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-10-13 01:46:39.783009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-10-13 01:46:39.783038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 
00:35:54.372 [2024-10-13 01:46:39.783184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-10-13 01:46:39.783213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-10-13 01:46:39.783334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-10-13 01:46:39.783363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-10-13 01:46:39.783515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-10-13 01:46:39.783556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-10-13 01:46:39.783723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-10-13 01:46:39.783772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-10-13 01:46:39.783882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-10-13 01:46:39.783912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-10-13 01:46:39.784054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-10-13 01:46:39.784080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-10-13 01:46:39.784170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-10-13 01:46:39.784198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-10-13 01:46:39.784315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-10-13 01:46:39.784342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-10-13 01:46:39.784478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-10-13 01:46:39.784518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-10-13 01:46:39.784645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-10-13 01:46:39.784674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 
00:35:54.372 [2024-10-13 01:46:39.784764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-10-13 01:46:39.784793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-10-13 01:46:39.784903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-10-13 01:46:39.784930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-10-13 01:46:39.785082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-10-13 01:46:39.785109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-10-13 01:46:39.785226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-10-13 01:46:39.785253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-10-13 01:46:39.785366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-10-13 01:46:39.785392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-10-13 01:46:39.785494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-10-13 01:46:39.785521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-10-13 01:46:39.785660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-10-13 01:46:39.785687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-10-13 01:46:39.785798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-10-13 01:46:39.785840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-10-13 01:46:39.785988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-10-13 01:46:39.786017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-10-13 01:46:39.786169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-10-13 01:46:39.786198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 
00:35:54.372 [2024-10-13 01:46:39.786372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-10-13 01:46:39.786401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-10-13 01:46:39.786568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-10-13 01:46:39.786594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-10-13 01:46:39.786714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-10-13 01:46:39.786743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-10-13 01:46:39.786883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-10-13 01:46:39.786913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-10-13 01:46:39.787005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-10-13 01:46:39.787034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-10-13 01:46:39.787157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-10-13 01:46:39.787187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-10-13 01:46:39.787308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-10-13 01:46:39.787349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-10-13 01:46:39.787450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-10-13 01:46:39.787488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-10-13 01:46:39.787629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-10-13 01:46:39.787669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-10-13 01:46:39.787811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-10-13 01:46:39.787858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 
00:35:54.373 [2024-10-13 01:46:39.787999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-10-13 01:46:39.788043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-10-13 01:46:39.788176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-10-13 01:46:39.788205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-10-13 01:46:39.788301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-10-13 01:46:39.788331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-10-13 01:46:39.788458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-10-13 01:46:39.788494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-10-13 01:46:39.788625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-10-13 01:46:39.788655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-10-13 01:46:39.788772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-10-13 01:46:39.788801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-10-13 01:46:39.788890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-10-13 01:46:39.788919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-10-13 01:46:39.789010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-10-13 01:46:39.789039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-10-13 01:46:39.789155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-10-13 01:46:39.789209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-10-13 01:46:39.789351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-10-13 01:46:39.789378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 
00:35:54.373 [2024-10-13 01:46:39.789546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-10-13 01:46:39.789578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-10-13 01:46:39.789700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-10-13 01:46:39.789729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-10-13 01:46:39.789853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-10-13 01:46:39.789881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-10-13 01:46:39.789983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-10-13 01:46:39.790013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-10-13 01:46:39.790104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-10-13 01:46:39.790134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-10-13 01:46:39.790263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-10-13 01:46:39.790292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-10-13 01:46:39.790392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-10-13 01:46:39.790423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-10-13 01:46:39.790578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-10-13 01:46:39.790605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-10-13 01:46:39.790732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-10-13 01:46:39.790761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-10-13 01:46:39.790917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-10-13 01:46:39.790946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 
00:35:54.373 [2024-10-13 01:46:39.791046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-10-13 01:46:39.791076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-10-13 01:46:39.791204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-10-13 01:46:39.791234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-10-13 01:46:39.791408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-10-13 01:46:39.791438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-10-13 01:46:39.791546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-10-13 01:46:39.791574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-10-13 01:46:39.791661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-10-13 01:46:39.791689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-10-13 01:46:39.791832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-10-13 01:46:39.791876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-10-13 01:46:39.792011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-10-13 01:46:39.792055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-10-13 01:46:39.792186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-10-13 01:46:39.792233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-10-13 01:46:39.792385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-10-13 01:46:39.792413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-13 01:46:39.792547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-13 01:46:39.792593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 
00:35:54.374 [2024-10-13 01:46:39.792723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-13 01:46:39.792753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-13 01:46:39.792913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-13 01:46:39.792965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-13 01:46:39.793102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-13 01:46:39.793146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-13 01:46:39.793288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-13 01:46:39.793315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-13 01:46:39.793435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-13 01:46:39.793464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-13 01:46:39.793599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-13 01:46:39.793626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-13 01:46:39.793739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-13 01:46:39.793783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-13 01:46:39.793939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-13 01:46:39.793968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-13 01:46:39.794178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-13 01:46:39.794207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-13 01:46:39.794311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-13 01:46:39.794340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 
00:35:54.374 [2024-10-13 01:46:39.794460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-13 01:46:39.794495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-13 01:46:39.794630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-13 01:46:39.794656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-13 01:46:39.794795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-13 01:46:39.794841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-13 01:46:39.794972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-13 01:46:39.795018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-13 01:46:39.795144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-13 01:46:39.795173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-13 01:46:39.795331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-13 01:46:39.795357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-13 01:46:39.795499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-13 01:46:39.795527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-13 01:46:39.795616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-13 01:46:39.795643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-13 01:46:39.795753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-13 01:46:39.795786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-13 01:46:39.795881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-13 01:46:39.795908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 
00:35:54.374 [2024-10-13 01:46:39.796022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-13 01:46:39.796049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-13 01:46:39.796161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-13 01:46:39.796188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-13 01:46:39.796283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-13 01:46:39.796311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-13 01:46:39.796418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-13 01:46:39.796446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-13 01:46:39.796538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-13 01:46:39.796567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-13 01:46:39.796655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-13 01:46:39.796682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-13 01:46:39.796791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-13 01:46:39.796818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-13 01:46:39.796924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-13 01:46:39.796951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-13 01:46:39.797067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-13 01:46:39.797094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-13 01:46:39.797185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-13 01:46:39.797212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 
00:35:54.374 [2024-10-13 01:46:39.797325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-13 01:46:39.797352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-13 01:46:39.797442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-13 01:46:39.797469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-13 01:46:39.797611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-13 01:46:39.797641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-13 01:46:39.797792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-13 01:46:39.797822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-13 01:46:39.797998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-10-13 01:46:39.798043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-10-13 01:46:39.798185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-13 01:46:39.798215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-13 01:46:39.798352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-13 01:46:39.798379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-13 01:46:39.798520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-13 01:46:39.798547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-13 01:46:39.798663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-13 01:46:39.798690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-13 01:46:39.798794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-13 01:46:39.798822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 
00:35:54.375 [2024-10-13 01:46:39.799013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-13 01:46:39.799042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-13 01:46:39.799145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-13 01:46:39.799174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-13 01:46:39.799293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-13 01:46:39.799318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-13 01:46:39.799421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-13 01:46:39.799463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-13 01:46:39.799614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-13 01:46:39.799641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-13 01:46:39.799751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-13 01:46:39.799800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-13 01:46:39.799943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-13 01:46:39.799970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-13 01:46:39.800113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-13 01:46:39.800142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-13 01:46:39.800329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-13 01:46:39.800358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-13 01:46:39.800487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-13 01:46:39.800531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 
00:35:54.375 [2024-10-13 01:46:39.800671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-13 01:46:39.800697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-13 01:46:39.800865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-13 01:46:39.800909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-13 01:46:39.801084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-13 01:46:39.801155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-13 01:46:39.801293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-13 01:46:39.801322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-13 01:46:39.801402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-13 01:46:39.801430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-13 01:46:39.801562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-13 01:46:39.801589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-13 01:46:39.801705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-13 01:46:39.801731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-13 01:46:39.801837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-13 01:46:39.801863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-13 01:46:39.801979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-13 01:46:39.802008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-13 01:46:39.802178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-13 01:46:39.802207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 
00:35:54.375 [2024-10-13 01:46:39.802334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-13 01:46:39.802377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-13 01:46:39.802523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-13 01:46:39.802550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-13 01:46:39.802673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-13 01:46:39.802699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-13 01:46:39.802786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-13 01:46:39.802811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-13 01:46:39.802910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-13 01:46:39.802941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-13 01:46:39.803104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-13 01:46:39.803133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-13 01:46:39.803242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-13 01:46:39.803284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-13 01:46:39.803407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-13 01:46:39.803434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-13 01:46:39.803518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-13 01:46:39.803543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-13 01:46:39.803633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-13 01:46:39.803659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 
00:35:54.375 [2024-10-13 01:46:39.803785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-13 01:46:39.803814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-13 01:46:39.803941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-13 01:46:39.803969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-13 01:46:39.804103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-13 01:46:39.804153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-13 01:46:39.804279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-13 01:46:39.804309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-13 01:46:39.804461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-10-13 01:46:39.804497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-10-13 01:46:39.804629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-13 01:46:39.804656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.376 [2024-10-13 01:46:39.804784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-13 01:46:39.804810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.376 [2024-10-13 01:46:39.804901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-13 01:46:39.804927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.376 [2024-10-13 01:46:39.805062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-13 01:46:39.805092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.376 [2024-10-13 01:46:39.805213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-13 01:46:39.805242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 
00:35:54.376 [2024-10-13 01:46:39.805344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-13 01:46:39.805371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.376 [2024-10-13 01:46:39.805461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-13 01:46:39.805493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.376 [2024-10-13 01:46:39.805610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-13 01:46:39.805636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.376 [2024-10-13 01:46:39.805720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-13 01:46:39.805760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.376 [2024-10-13 01:46:39.805956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-13 01:46:39.805985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.376 [2024-10-13 01:46:39.806093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-13 01:46:39.806120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.376 [2024-10-13 01:46:39.806286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-13 01:46:39.806315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.376 [2024-10-13 01:46:39.806499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-13 01:46:39.806527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.376 [2024-10-13 01:46:39.806641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-13 01:46:39.806667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.376 [2024-10-13 01:46:39.806805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-13 01:46:39.806831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 
00:35:54.376 [2024-10-13 01:46:39.807012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-13 01:46:39.807039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.376 [2024-10-13 01:46:39.807204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-13 01:46:39.807232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.376 [2024-10-13 01:46:39.807364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-13 01:46:39.807390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.376 [2024-10-13 01:46:39.807499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-13 01:46:39.807526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.376 [2024-10-13 01:46:39.807644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-13 01:46:39.807670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.376 [2024-10-13 01:46:39.807821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-13 01:46:39.807850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.376 [2024-10-13 01:46:39.808004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-13 01:46:39.808033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.376 [2024-10-13 01:46:39.808147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-13 01:46:39.808175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.376 [2024-10-13 01:46:39.808293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-13 01:46:39.808323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.376 [2024-10-13 01:46:39.808424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-13 01:46:39.808466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 
00:35:54.376 [2024-10-13 01:46:39.808592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-13 01:46:39.808619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.376 [2024-10-13 01:46:39.808709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-13 01:46:39.808750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.376 [2024-10-13 01:46:39.808902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-13 01:46:39.808931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.376 [2024-10-13 01:46:39.809084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-13 01:46:39.809113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.376 [2024-10-13 01:46:39.809231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-13 01:46:39.809260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.376 [2024-10-13 01:46:39.809419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-13 01:46:39.809448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.376 [2024-10-13 01:46:39.809583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-13 01:46:39.809610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.376 [2024-10-13 01:46:39.809692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-13 01:46:39.809717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.376 [2024-10-13 01:46:39.809878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-13 01:46:39.809924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.376 [2024-10-13 01:46:39.810049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-13 01:46:39.810078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 
00:35:54.376 [2024-10-13 01:46:39.810229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-13 01:46:39.810258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.376 [2024-10-13 01:46:39.810384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-13 01:46:39.810413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.376 [2024-10-13 01:46:39.810585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.376 [2024-10-13 01:46:39.810626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.376 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-13 01:46:39.810792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-13 01:46:39.810822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-13 01:46:39.810932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-13 01:46:39.810978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-13 01:46:39.811140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-13 01:46:39.811186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-13 01:46:39.811301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-13 01:46:39.811328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-13 01:46:39.811447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-13 01:46:39.811484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-13 01:46:39.811605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-13 01:46:39.811636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-13 01:46:39.811755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-13 01:46:39.811782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 
00:35:54.377 [2024-10-13 01:46:39.811897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-13 01:46:39.811933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-13 01:46:39.812055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-13 01:46:39.812082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-13 01:46:39.812196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-13 01:46:39.812224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-13 01:46:39.812338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-13 01:46:39.812366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-13 01:46:39.812485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-13 01:46:39.812513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-13 01:46:39.812628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-13 01:46:39.812654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-13 01:46:39.812773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-13 01:46:39.812800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-13 01:46:39.812921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-13 01:46:39.812948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-13 01:46:39.813056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-13 01:46:39.813083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-13 01:46:39.813225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-13 01:46:39.813252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 
00:35:54.377 [2024-10-13 01:46:39.813368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-13 01:46:39.813395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-13 01:46:39.813492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-13 01:46:39.813520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-13 01:46:39.813604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-13 01:46:39.813632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-13 01:46:39.813727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-13 01:46:39.813755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-13 01:46:39.813870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-13 01:46:39.813902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-13 01:46:39.814047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-13 01:46:39.814075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-13 01:46:39.814189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-13 01:46:39.814217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-13 01:46:39.814361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-13 01:46:39.814388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-13 01:46:39.814515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-13 01:46:39.814544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-13 01:46:39.814642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-13 01:46:39.814669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 
00:35:54.377 [2024-10-13 01:46:39.814790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-13 01:46:39.814817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-13 01:46:39.814953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-13 01:46:39.814980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-13 01:46:39.815100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-13 01:46:39.815128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-13 01:46:39.815250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-13 01:46:39.815277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-13 01:46:39.815395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-13 01:46:39.815422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-13 01:46:39.815565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-13 01:46:39.815592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-13 01:46:39.815679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-13 01:46:39.815706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-13 01:46:39.815846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-13 01:46:39.815873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-13 01:46:39.816016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-13 01:46:39.816043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 00:35:54.377 [2024-10-13 01:46:39.816121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.377 [2024-10-13 01:46:39.816152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.377 qpair failed and we were unable to recover it. 
00:35:54.377 [2024-10-13 01:46:39.816237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-13 01:46:39.816265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-13 01:46:39.816381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-13 01:46:39.816408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-13 01:46:39.816511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-13 01:46:39.816563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-13 01:46:39.816690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-13 01:46:39.816724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-13 01:46:39.816817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-13 01:46:39.816844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-13 01:46:39.816938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-13 01:46:39.816964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-13 01:46:39.817129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-13 01:46:39.817158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-13 01:46:39.817248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-13 01:46:39.817277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-13 01:46:39.817405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-13 01:46:39.817434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-13 01:46:39.817556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-13 01:46:39.817585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 
00:35:54.378 [2024-10-13 01:46:39.817670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-13 01:46:39.817712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-13 01:46:39.817877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-13 01:46:39.817906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-13 01:46:39.818059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-13 01:46:39.818088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-13 01:46:39.818183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-13 01:46:39.818211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-13 01:46:39.818347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-13 01:46:39.818383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-13 01:46:39.818493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-13 01:46:39.818519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-13 01:46:39.818601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-13 01:46:39.818626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-13 01:46:39.818738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-13 01:46:39.818782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-13 01:46:39.818871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-13 01:46:39.818901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-13 01:46:39.819017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-13 01:46:39.819046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 
00:35:54.378 [2024-10-13 01:46:39.819168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-13 01:46:39.819195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-13 01:46:39.819363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-13 01:46:39.819393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-13 01:46:39.819525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-13 01:46:39.819552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-13 01:46:39.819667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-13 01:46:39.819694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-13 01:46:39.819819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-13 01:46:39.819848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-13 01:46:39.819971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-13 01:46:39.819999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-13 01:46:39.820112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-13 01:46:39.820156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-13 01:46:39.820271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-13 01:46:39.820300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-13 01:46:39.820413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-13 01:46:39.820442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-13 01:46:39.820552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-13 01:46:39.820579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 
00:35:54.378 [2024-10-13 01:46:39.820689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-13 01:46:39.820720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-13 01:46:39.820850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-13 01:46:39.820879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-13 01:46:39.820982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-13 01:46:39.821012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-13 01:46:39.821143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-13 01:46:39.821173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-13 01:46:39.821345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.378 [2024-10-13 01:46:39.821405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.378 qpair failed and we were unable to recover it. 00:35:54.378 [2024-10-13 01:46:39.821507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.379 [2024-10-13 01:46:39.821536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.379 qpair failed and we were unable to recover it. 00:35:54.379 [2024-10-13 01:46:39.821648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.379 [2024-10-13 01:46:39.821676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.379 qpair failed and we were unable to recover it. 00:35:54.379 [2024-10-13 01:46:39.821790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.379 [2024-10-13 01:46:39.821836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.379 qpair failed and we were unable to recover it. 00:35:54.379 [2024-10-13 01:46:39.821975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.379 [2024-10-13 01:46:39.822023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.379 qpair failed and we were unable to recover it. 00:35:54.379 [2024-10-13 01:46:39.822163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.379 [2024-10-13 01:46:39.822207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.379 qpair failed and we were unable to recover it. 
00:35:54.379 [2024-10-13 01:46:39.822355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.379 [2024-10-13 01:46:39.822383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.379 qpair failed and we were unable to recover it. 00:35:54.379 [2024-10-13 01:46:39.822520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.379 [2024-10-13 01:46:39.822551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.379 qpair failed and we were unable to recover it. 00:35:54.379 [2024-10-13 01:46:39.822679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.379 [2024-10-13 01:46:39.822709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.379 qpair failed and we were unable to recover it. 00:35:54.379 [2024-10-13 01:46:39.822863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.379 [2024-10-13 01:46:39.822908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.379 qpair failed and we were unable to recover it. 00:35:54.379 [2024-10-13 01:46:39.823037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.379 [2024-10-13 01:46:39.823065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.379 qpair failed and we were unable to recover it. 00:35:54.379 [2024-10-13 01:46:39.823201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.379 [2024-10-13 01:46:39.823228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.379 qpair failed and we were unable to recover it. 00:35:54.379 [2024-10-13 01:46:39.823342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.379 [2024-10-13 01:46:39.823369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.379 qpair failed and we were unable to recover it. 00:35:54.379 [2024-10-13 01:46:39.823514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.379 [2024-10-13 01:46:39.823544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.379 qpair failed and we were unable to recover it. 00:35:54.379 [2024-10-13 01:46:39.823636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.379 [2024-10-13 01:46:39.823663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.379 qpair failed and we were unable to recover it. 00:35:54.379 [2024-10-13 01:46:39.823792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.379 [2024-10-13 01:46:39.823820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.379 qpair failed and we were unable to recover it. 
00:35:54.379 [2024-10-13 01:46:39.823976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.379 [2024-10-13 01:46:39.824005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.379 qpair failed and we were unable to recover it. 00:35:54.379 [2024-10-13 01:46:39.824133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.379 [2024-10-13 01:46:39.824162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.379 qpair failed and we were unable to recover it. 00:35:54.379 [2024-10-13 01:46:39.824326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.379 [2024-10-13 01:46:39.824355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.379 qpair failed and we were unable to recover it. 00:35:54.379 [2024-10-13 01:46:39.824482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.379 [2024-10-13 01:46:39.824512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.379 qpair failed and we were unable to recover it. 00:35:54.379 [2024-10-13 01:46:39.824614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.379 [2024-10-13 01:46:39.824646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.379 qpair failed and we were unable to recover it. 00:35:54.379 [2024-10-13 01:46:39.824795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.379 [2024-10-13 01:46:39.824822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.379 qpair failed and we were unable to recover it. 00:35:54.379 [2024-10-13 01:46:39.824989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.379 [2024-10-13 01:46:39.825018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.379 qpair failed and we were unable to recover it. 00:35:54.379 [2024-10-13 01:46:39.825144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.379 [2024-10-13 01:46:39.825178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.379 qpair failed and we were unable to recover it. 00:35:54.379 [2024-10-13 01:46:39.825280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.379 [2024-10-13 01:46:39.825306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.379 qpair failed and we were unable to recover it. 00:35:54.379 [2024-10-13 01:46:39.827598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.379 [2024-10-13 01:46:39.827625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.379 qpair failed and we were unable to recover it. 
00:35:54.379 [2024-10-13 01:46:39.827782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.379 [2024-10-13 01:46:39.827809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.379 qpair failed and we were unable to recover it. 00:35:54.379 [2024-10-13 01:46:39.827968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.379 [2024-10-13 01:46:39.827997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.379 qpair failed and we were unable to recover it. 00:35:54.379 [2024-10-13 01:46:39.828112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.379 [2024-10-13 01:46:39.828138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.379 qpair failed and we were unable to recover it. 00:35:54.379 [2024-10-13 01:46:39.828302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.379 [2024-10-13 01:46:39.828331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.379 qpair failed and we were unable to recover it. 00:35:54.379 [2024-10-13 01:46:39.828484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.379 [2024-10-13 01:46:39.828532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.379 qpair failed and we were unable to recover it. 00:35:54.379 [2024-10-13 01:46:39.828627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.379 [2024-10-13 01:46:39.828653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.379 qpair failed and we were unable to recover it. 00:35:54.379 [2024-10-13 01:46:39.828768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.379 [2024-10-13 01:46:39.828795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.379 qpair failed and we were unable to recover it. 00:35:54.379 [2024-10-13 01:46:39.828962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.379 [2024-10-13 01:46:39.829027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.379 qpair failed and we were unable to recover it. 00:35:54.379 [2024-10-13 01:46:39.829151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.379 [2024-10-13 01:46:39.829180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.379 qpair failed and we were unable to recover it. 00:35:54.379 [2024-10-13 01:46:39.829306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.379 [2024-10-13 01:46:39.829334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.379 qpair failed and we were unable to recover it. 
00:35:54.379 [2024-10-13 01:46:39.829463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.379 [2024-10-13 01:46:39.829500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.379 qpair failed and we were unable to recover it. 00:35:54.379 [2024-10-13 01:46:39.829624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.379 [2024-10-13 01:46:39.829651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.379 qpair failed and we were unable to recover it. 00:35:54.379 [2024-10-13 01:46:39.829743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.379 [2024-10-13 01:46:39.829769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.379 qpair failed and we were unable to recover it. 00:35:54.379 [2024-10-13 01:46:39.829860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.379 [2024-10-13 01:46:39.829902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.379 qpair failed and we were unable to recover it. 00:35:54.379 [2024-10-13 01:46:39.830030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.379 [2024-10-13 01:46:39.830073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.379 qpair failed and we were unable to recover it. 00:35:54.379 [2024-10-13 01:46:39.830223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.379 [2024-10-13 01:46:39.830252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-10-13 01:46:39.830404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-10-13 01:46:39.830433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-10-13 01:46:39.830581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-10-13 01:46:39.830607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-10-13 01:46:39.830724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-10-13 01:46:39.830750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-10-13 01:46:39.830834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-10-13 01:46:39.830860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 
00:35:54.380 [2024-10-13 01:46:39.830997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-10-13 01:46:39.831026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-10-13 01:46:39.831178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-10-13 01:46:39.831208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-10-13 01:46:39.831400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-10-13 01:46:39.831430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-10-13 01:46:39.831579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-10-13 01:46:39.831606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-10-13 01:46:39.831687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-10-13 01:46:39.831716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-10-13 01:46:39.831890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-10-13 01:46:39.831916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-10-13 01:46:39.832007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-10-13 01:46:39.832032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-10-13 01:46:39.832110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-10-13 01:46:39.832134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-10-13 01:46:39.832209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-10-13 01:46:39.832253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-10-13 01:46:39.832371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-10-13 01:46:39.832399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 
00:35:54.380 [2024-10-13 01:46:39.832544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-10-13 01:46:39.832571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-10-13 01:46:39.832688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-10-13 01:46:39.832714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-10-13 01:46:39.832818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-10-13 01:46:39.832860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-10-13 01:46:39.832949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-10-13 01:46:39.832975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-10-13 01:46:39.833092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-10-13 01:46:39.833118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-10-13 01:46:39.833198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-10-13 01:46:39.833222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-10-13 01:46:39.833327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-10-13 01:46:39.833353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-10-13 01:46:39.833457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-10-13 01:46:39.833489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-10-13 01:46:39.833589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-10-13 01:46:39.833615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-10-13 01:46:39.833732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-10-13 01:46:39.833759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 
00:35:54.380 [2024-10-13 01:46:39.833842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-10-13 01:46:39.833869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-10-13 01:46:39.833981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-10-13 01:46:39.834007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-10-13 01:46:39.834140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-10-13 01:46:39.834182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-10-13 01:46:39.834308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-10-13 01:46:39.834337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-10-13 01:46:39.834431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-10-13 01:46:39.834459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-10-13 01:46:39.834607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-10-13 01:46:39.834635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-10-13 01:46:39.834751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-10-13 01:46:39.834778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-10-13 01:46:39.834901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-10-13 01:46:39.834928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-10-13 01:46:39.835010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-10-13 01:46:39.835036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-10-13 01:46:39.835128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-10-13 01:46:39.835156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 
00:35:54.380 [2024-10-13 01:46:39.835267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-10-13 01:46:39.835295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-10-13 01:46:39.835374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-10-13 01:46:39.835407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-10-13 01:46:39.835526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-10-13 01:46:39.835552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-10-13 01:46:39.835665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-10-13 01:46:39.835691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-10-13 01:46:39.835785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-10-13 01:46:39.835814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-10-13 01:46:39.835916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-10-13 01:46:39.835944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.381 [2024-10-13 01:46:39.836072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-10-13 01:46:39.836101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-10-13 01:46:39.836199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-10-13 01:46:39.836228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-10-13 01:46:39.836368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-10-13 01:46:39.836397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-10-13 01:46:39.836516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-10-13 01:46:39.836545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 
00:35:54.381 [2024-10-13 01:46:39.836706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-10-13 01:46:39.836749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-10-13 01:46:39.836910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-10-13 01:46:39.836955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-10-13 01:46:39.837113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-10-13 01:46:39.837159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-10-13 01:46:39.837272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-10-13 01:46:39.837300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-10-13 01:46:39.837432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-10-13 01:46:39.837460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-10-13 01:46:39.837603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-10-13 01:46:39.837650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-10-13 01:46:39.837749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-10-13 01:46:39.837779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-10-13 01:46:39.837931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-10-13 01:46:39.837976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-10-13 01:46:39.838064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-10-13 01:46:39.838092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-10-13 01:46:39.838177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-10-13 01:46:39.838203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 
00:35:54.381 [2024-10-13 01:46:39.838345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-10-13 01:46:39.838372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-10-13 01:46:39.838489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-10-13 01:46:39.838516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-10-13 01:46:39.838659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-10-13 01:46:39.838686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-10-13 01:46:39.838787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-10-13 01:46:39.838817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-10-13 01:46:39.838925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-10-13 01:46:39.838952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-10-13 01:46:39.839072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-10-13 01:46:39.839099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-10-13 01:46:39.839208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-10-13 01:46:39.839235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-10-13 01:46:39.839381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-10-13 01:46:39.839408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-10-13 01:46:39.839514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-10-13 01:46:39.839556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-10-13 01:46:39.839661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-10-13 01:46:39.839690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 
00:35:54.381 [2024-10-13 01:46:39.839831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-10-13 01:46:39.839858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-10-13 01:46:39.840008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-10-13 01:46:39.840034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-10-13 01:46:39.840152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-10-13 01:46:39.840181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-10-13 01:46:39.840303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-10-13 01:46:39.840331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-10-13 01:46:39.840455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-10-13 01:46:39.840500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-10-13 01:46:39.840630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-10-13 01:46:39.840660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-10-13 01:46:39.840786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-10-13 01:46:39.840816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-10-13 01:46:39.840944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-10-13 01:46:39.840973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-10-13 01:46:39.841084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-10-13 01:46:39.841110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-10-13 01:46:39.841222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-10-13 01:46:39.841252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 
00:35:54.381 [2024-10-13 01:46:39.841395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-10-13 01:46:39.841421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-10-13 01:46:39.841519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-10-13 01:46:39.841547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-10-13 01:46:39.841648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-10-13 01:46:39.841675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-10-13 01:46:39.841803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-10-13 01:46:39.841832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-10-13 01:46:39.842023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-10-13 01:46:39.842053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-10-13 01:46:39.842208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-10-13 01:46:39.842238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-10-13 01:46:39.842339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-10-13 01:46:39.842368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-10-13 01:46:39.842492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-10-13 01:46:39.842535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-10-13 01:46:39.842627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-10-13 01:46:39.842654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-10-13 01:46:39.842795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-10-13 01:46:39.842821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 
00:35:54.382 [2024-10-13 01:46:39.842985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-10-13 01:46:39.843014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-10-13 01:46:39.843136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-10-13 01:46:39.843165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-10-13 01:46:39.843280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-10-13 01:46:39.843336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-10-13 01:46:39.843468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-10-13 01:46:39.843520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-10-13 01:46:39.843598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-10-13 01:46:39.843622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-10-13 01:46:39.843707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-10-13 01:46:39.843738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-10-13 01:46:39.843827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-10-13 01:46:39.843871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-10-13 01:46:39.843967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-10-13 01:46:39.843996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-10-13 01:46:39.844120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-10-13 01:46:39.844149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-10-13 01:46:39.844274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-10-13 01:46:39.844304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 
00:35:54.382 [2024-10-13 01:46:39.844437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-10-13 01:46:39.844463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-10-13 01:46:39.844592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-10-13 01:46:39.844618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-10-13 01:46:39.844705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-10-13 01:46:39.844732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-10-13 01:46:39.844826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-10-13 01:46:39.844853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-10-13 01:46:39.844952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-10-13 01:46:39.844981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-10-13 01:46:39.845106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-10-13 01:46:39.845135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-10-13 01:46:39.845255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-10-13 01:46:39.845283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-10-13 01:46:39.845374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-10-13 01:46:39.845403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-10-13 01:46:39.845583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-10-13 01:46:39.845625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-10-13 01:46:39.845733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-10-13 01:46:39.845762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 
00:35:54.382 [2024-10-13 01:46:39.845899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-10-13 01:46:39.845943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-10-13 01:46:39.846101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-10-13 01:46:39.846151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-10-13 01:46:39.846275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-10-13 01:46:39.846305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-10-13 01:46:39.846449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-10-13 01:46:39.846484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-10-13 01:46:39.846668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-10-13 01:46:39.846712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-10-13 01:46:39.846797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-10-13 01:46:39.846825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-10-13 01:46:39.846947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-10-13 01:46:39.846990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-10-13 01:46:39.847188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-10-13 01:46:39.847242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-10-13 01:46:39.847372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-10-13 01:46:39.847401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-10-13 01:46:39.847564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-10-13 01:46:39.847590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 
00:35:54.382 [2024-10-13 01:46:39.847690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-10-13 01:46:39.847719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-10-13 01:46:39.847845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-10-13 01:46:39.847874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-10-13 01:46:39.847992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-10-13 01:46:39.848026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-10-13 01:46:39.848119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-10-13 01:46:39.848148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-10-13 01:46:39.848301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-10-13 01:46:39.848330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-10-13 01:46:39.848427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-10-13 01:46:39.848457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-10-13 01:46:39.848604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-10-13 01:46:39.848630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-10-13 01:46:39.848788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-10-13 01:46:39.848818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-10-13 01:46:39.848943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-10-13 01:46:39.848972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-10-13 01:46:39.849095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-10-13 01:46:39.849124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 
00:35:54.383 [2024-10-13 01:46:39.849252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-10-13 01:46:39.849282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-10-13 01:46:39.849381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-10-13 01:46:39.849410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-10-13 01:46:39.849549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-10-13 01:46:39.849576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-10-13 01:46:39.849695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-10-13 01:46:39.849722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-10-13 01:46:39.849827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-10-13 01:46:39.849856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-10-13 01:46:39.850005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-10-13 01:46:39.850034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-10-13 01:46:39.850146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-10-13 01:46:39.850189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-10-13 01:46:39.850341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-10-13 01:46:39.850370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-10-13 01:46:39.850491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-10-13 01:46:39.850534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-10-13 01:46:39.850678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-10-13 01:46:39.850704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 
00:35:54.383 [2024-10-13 01:46:39.850783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-10-13 01:46:39.850809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-10-13 01:46:39.850915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-10-13 01:46:39.850941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-10-13 01:46:39.851050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-10-13 01:46:39.851079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-10-13 01:46:39.851169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-10-13 01:46:39.851198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-10-13 01:46:39.851387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-10-13 01:46:39.851416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-10-13 01:46:39.851557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-10-13 01:46:39.851584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-10-13 01:46:39.851691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-10-13 01:46:39.851717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-10-13 01:46:39.851852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-10-13 01:46:39.851881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-10-13 01:46:39.851981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-10-13 01:46:39.852011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-10-13 01:46:39.852096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-10-13 01:46:39.852130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 
00:35:54.383 [2024-10-13 01:46:39.852247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-10-13 01:46:39.852288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-10-13 01:46:39.852392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-10-13 01:46:39.852421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-10-13 01:46:39.852558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-10-13 01:46:39.852584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-10-13 01:46:39.852675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-10-13 01:46:39.852699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-10-13 01:46:39.852777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.384 [2024-10-13 01:46:39.852803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.384 qpair failed and we were unable to recover it. 00:35:54.384 [2024-10-13 01:46:39.852897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.384 [2024-10-13 01:46:39.852940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.384 qpair failed and we were unable to recover it. 00:35:54.384 [2024-10-13 01:46:39.853048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.384 [2024-10-13 01:46:39.853074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.384 qpair failed and we were unable to recover it. 00:35:54.384 [2024-10-13 01:46:39.853241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.384 [2024-10-13 01:46:39.853270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.384 qpair failed and we were unable to recover it. 00:35:54.384 [2024-10-13 01:46:39.853394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.384 [2024-10-13 01:46:39.853423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.384 qpair failed and we were unable to recover it. 00:35:54.384 [2024-10-13 01:46:39.853574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.384 [2024-10-13 01:46:39.853601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.384 qpair failed and we were unable to recover it. 
00:35:54.384 [2024-10-13 01:46:39.853696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.384 [2024-10-13 01:46:39.853723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.384 qpair failed and we were unable to recover it. 00:35:54.384 [2024-10-13 01:46:39.853804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.384 [2024-10-13 01:46:39.853830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.384 qpair failed and we were unable to recover it. 00:35:54.384 [2024-10-13 01:46:39.853918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.384 [2024-10-13 01:46:39.853963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.384 qpair failed and we were unable to recover it. 00:35:54.384 [2024-10-13 01:46:39.854059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.384 [2024-10-13 01:46:39.854088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.384 qpair failed and we were unable to recover it. 00:35:54.384 [2024-10-13 01:46:39.854213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.384 [2024-10-13 01:46:39.854243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.384 qpair failed and we were unable to recover it. 00:35:54.384 [2024-10-13 01:46:39.854363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.384 [2024-10-13 01:46:39.854392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.384 qpair failed and we were unable to recover it. 00:35:54.384 [2024-10-13 01:46:39.854534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.384 [2024-10-13 01:46:39.854561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.384 qpair failed and we were unable to recover it. 00:35:54.384 [2024-10-13 01:46:39.854636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.384 [2024-10-13 01:46:39.854661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.384 qpair failed and we were unable to recover it. 00:35:54.384 [2024-10-13 01:46:39.854770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.384 [2024-10-13 01:46:39.854796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.384 qpair failed and we were unable to recover it. 00:35:54.384 [2024-10-13 01:46:39.854936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.384 [2024-10-13 01:46:39.854980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.384 qpair failed and we were unable to recover it. 
00:35:54.384 [2024-10-13 01:46:39.855072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.384 [2024-10-13 01:46:39.855102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.384 qpair failed and we were unable to recover it. 00:35:54.384 [2024-10-13 01:46:39.855266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.384 [2024-10-13 01:46:39.855310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.384 qpair failed and we were unable to recover it. 00:35:54.384 [2024-10-13 01:46:39.855475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.384 [2024-10-13 01:46:39.855502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.384 qpair failed and we were unable to recover it. 00:35:54.384 [2024-10-13 01:46:39.855608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.384 [2024-10-13 01:46:39.855635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.384 qpair failed and we were unable to recover it. 00:35:54.384 [2024-10-13 01:46:39.855711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.384 [2024-10-13 01:46:39.855736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.384 qpair failed and we were unable to recover it. 00:35:54.384 [2024-10-13 01:46:39.855839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.384 [2024-10-13 01:46:39.855867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.384 qpair failed and we were unable to recover it. 00:35:54.384 [2024-10-13 01:46:39.855970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.384 [2024-10-13 01:46:39.855999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.384 qpair failed and we were unable to recover it. 00:35:54.384 [2024-10-13 01:46:39.856144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.384 [2024-10-13 01:46:39.856186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.384 qpair failed and we were unable to recover it. 00:35:54.384 [2024-10-13 01:46:39.856308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.384 [2024-10-13 01:46:39.856350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.384 qpair failed and we were unable to recover it. 00:35:54.384 [2024-10-13 01:46:39.856437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.384 [2024-10-13 01:46:39.856462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.384 qpair failed and we were unable to recover it. 
00:35:54.384 [2024-10-13 01:46:39.856585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.384 [2024-10-13 01:46:39.856613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.384 qpair failed and we were unable to recover it. 00:35:54.384 [2024-10-13 01:46:39.856720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.384 [2024-10-13 01:46:39.856746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.384 qpair failed and we were unable to recover it. 00:35:54.384 [2024-10-13 01:46:39.856847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.384 [2024-10-13 01:46:39.856876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.384 qpair failed and we were unable to recover it. 00:35:54.384 [2024-10-13 01:46:39.857030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.384 [2024-10-13 01:46:39.857059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.384 qpair failed and we were unable to recover it. 00:35:54.384 [2024-10-13 01:46:39.857257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.384 [2024-10-13 01:46:39.857286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.384 qpair failed and we were unable to recover it. 00:35:54.384 [2024-10-13 01:46:39.857424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.384 [2024-10-13 01:46:39.857450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.384 qpair failed and we were unable to recover it. 00:35:54.384 [2024-10-13 01:46:39.857574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.384 [2024-10-13 01:46:39.857600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.384 qpair failed and we were unable to recover it. 00:35:54.384 [2024-10-13 01:46:39.857681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.384 [2024-10-13 01:46:39.857705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.384 qpair failed and we were unable to recover it. 00:35:54.384 [2024-10-13 01:46:39.857863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.384 [2024-10-13 01:46:39.857889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.384 qpair failed and we were unable to recover it. 00:35:54.384 [2024-10-13 01:46:39.858004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.384 [2024-10-13 01:46:39.858046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.384 qpair failed and we were unable to recover it. 
00:35:54.384 [2024-10-13 01:46:39.858176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.384 [2024-10-13 01:46:39.858206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.384 qpair failed and we were unable to recover it. 00:35:54.384 [2024-10-13 01:46:39.858306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.384 [2024-10-13 01:46:39.858334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.384 qpair failed and we were unable to recover it. 00:35:54.384 [2024-10-13 01:46:39.858461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.384 [2024-10-13 01:46:39.858496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.384 qpair failed and we were unable to recover it. 00:35:54.384 [2024-10-13 01:46:39.858613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.384 [2024-10-13 01:46:39.858639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.384 qpair failed and we were unable to recover it. 00:35:54.384 [2024-10-13 01:46:39.858747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.384 [2024-10-13 01:46:39.858774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.385 qpair failed and we were unable to recover it. 00:35:54.385 [2024-10-13 01:46:39.858927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.385 [2024-10-13 01:46:39.858955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.385 qpair failed and we were unable to recover it. 00:35:54.385 [2024-10-13 01:46:39.859146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.385 [2024-10-13 01:46:39.859175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.385 qpair failed and we were unable to recover it. 00:35:54.385 [2024-10-13 01:46:39.859326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.385 [2024-10-13 01:46:39.859355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.385 qpair failed and we were unable to recover it. 00:35:54.385 [2024-10-13 01:46:39.859491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.385 [2024-10-13 01:46:39.859535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.385 qpair failed and we were unable to recover it. 00:35:54.385 [2024-10-13 01:46:39.859673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.385 [2024-10-13 01:46:39.859699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.385 qpair failed and we were unable to recover it. 
00:35:54.385 [2024-10-13 01:46:39.859808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.385 [2024-10-13 01:46:39.859835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.385 qpair failed and we were unable to recover it. 00:35:54.385 [2024-10-13 01:46:39.859914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.385 [2024-10-13 01:46:39.859939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.385 qpair failed and we were unable to recover it. 00:35:54.385 [2024-10-13 01:46:39.860043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.385 [2024-10-13 01:46:39.860071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.385 qpair failed and we were unable to recover it. 00:35:54.385 [2024-10-13 01:46:39.860194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.385 [2024-10-13 01:46:39.860222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.385 qpair failed and we were unable to recover it. 00:35:54.385 [2024-10-13 01:46:39.860357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.385 [2024-10-13 01:46:39.860384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.385 qpair failed and we were unable to recover it. 00:35:54.385 [2024-10-13 01:46:39.860477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.385 [2024-10-13 01:46:39.860503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.385 qpair failed and we were unable to recover it. 00:35:54.385 [2024-10-13 01:46:39.860592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.385 [2024-10-13 01:46:39.860617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.385 qpair failed and we were unable to recover it. 00:35:54.385 [2024-10-13 01:46:39.860740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.385 [2024-10-13 01:46:39.860766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.385 qpair failed and we were unable to recover it. 00:35:54.385 [2024-10-13 01:46:39.860857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.385 [2024-10-13 01:46:39.860881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.385 qpair failed and we were unable to recover it. 00:35:54.385 [2024-10-13 01:46:39.860959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.385 [2024-10-13 01:46:39.860985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.385 qpair failed and we were unable to recover it. 
00:35:54.385 [2024-10-13 01:46:39.861140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.385 [2024-10-13 01:46:39.861167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.385 qpair failed and we were unable to recover it. 00:35:54.385 [2024-10-13 01:46:39.861286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.385 [2024-10-13 01:46:39.861312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.385 qpair failed and we were unable to recover it. 00:35:54.385 [2024-10-13 01:46:39.861443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.385 [2024-10-13 01:46:39.861482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.385 qpair failed and we were unable to recover it. 00:35:54.385 [2024-10-13 01:46:39.861593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.385 [2024-10-13 01:46:39.861619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.385 qpair failed and we were unable to recover it. 00:35:54.385 [2024-10-13 01:46:39.861726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.385 [2024-10-13 01:46:39.861753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.385 qpair failed and we were unable to recover it. 00:35:54.385 [2024-10-13 01:46:39.861843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.385 [2024-10-13 01:46:39.861873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.385 qpair failed and we were unable to recover it. 00:35:54.385 [2024-10-13 01:46:39.862017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.385 [2024-10-13 01:46:39.862045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.385 qpair failed and we were unable to recover it. 00:35:54.385 [2024-10-13 01:46:39.862163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.385 [2024-10-13 01:46:39.862194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.385 qpair failed and we were unable to recover it. 00:35:54.385 [2024-10-13 01:46:39.862310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.385 [2024-10-13 01:46:39.862336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.385 qpair failed and we were unable to recover it. 00:35:54.385 [2024-10-13 01:46:39.862427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.385 [2024-10-13 01:46:39.862452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.385 qpair failed and we were unable to recover it. 
00:35:54.385 [2024-10-13 01:46:39.862554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.385 [2024-10-13 01:46:39.862582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.385 qpair failed and we were unable to recover it. 00:35:54.385 [2024-10-13 01:46:39.862705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.385 [2024-10-13 01:46:39.862731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.385 qpair failed and we were unable to recover it. 00:35:54.385 [2024-10-13 01:46:39.862885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.385 [2024-10-13 01:46:39.862913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.385 qpair failed and we were unable to recover it. 00:35:54.385 [2024-10-13 01:46:39.863083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.385 [2024-10-13 01:46:39.863110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.385 qpair failed and we were unable to recover it. 00:35:54.385 [2024-10-13 01:46:39.863190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.385 [2024-10-13 01:46:39.863215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.385 qpair failed and we were unable to recover it. 00:35:54.385 [2024-10-13 01:46:39.863327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.385 [2024-10-13 01:46:39.863353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.385 qpair failed and we were unable to recover it. 00:35:54.385 [2024-10-13 01:46:39.863437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.385 [2024-10-13 01:46:39.863461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.385 qpair failed and we were unable to recover it. 00:35:54.385 [2024-10-13 01:46:39.863594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.385 [2024-10-13 01:46:39.863620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.385 qpair failed and we were unable to recover it. 00:35:54.385 [2024-10-13 01:46:39.863739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.385 [2024-10-13 01:46:39.863765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.385 qpair failed and we were unable to recover it. 00:35:54.385 [2024-10-13 01:46:39.863852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.385 [2024-10-13 01:46:39.863880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.385 qpair failed and we were unable to recover it. 
00:35:54.385 [2024-10-13 01:46:39.864002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.385 [2024-10-13 01:46:39.864030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.385 qpair failed and we were unable to recover it. 00:35:54.385 [2024-10-13 01:46:39.864128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.385 [2024-10-13 01:46:39.864155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.385 qpair failed and we were unable to recover it. 00:35:54.385 [2024-10-13 01:46:39.864272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.385 [2024-10-13 01:46:39.864299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.385 qpair failed and we were unable to recover it. 00:35:54.385 [2024-10-13 01:46:39.864413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.385 [2024-10-13 01:46:39.864440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.385 qpair failed and we were unable to recover it. 00:35:54.385 [2024-10-13 01:46:39.864659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.385 [2024-10-13 01:46:39.864687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.385 qpair failed and we were unable to recover it. 00:35:54.385 [2024-10-13 01:46:39.864781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.386 [2024-10-13 01:46:39.864807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.386 qpair failed and we were unable to recover it. 00:35:54.386 [2024-10-13 01:46:39.864889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.386 [2024-10-13 01:46:39.864914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.386 qpair failed and we were unable to recover it. 00:35:54.386 [2024-10-13 01:46:39.865025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.386 [2024-10-13 01:46:39.865051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.386 qpair failed and we were unable to recover it. 00:35:54.386 [2024-10-13 01:46:39.865175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.386 [2024-10-13 01:46:39.865201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.386 qpair failed and we were unable to recover it. 00:35:54.386 [2024-10-13 01:46:39.865341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.386 [2024-10-13 01:46:39.865368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.386 qpair failed and we were unable to recover it. 
00:35:54.386 [2024-10-13 01:46:39.865493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.386 [2024-10-13 01:46:39.865520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.386 qpair failed and we were unable to recover it. 00:35:54.386 [2024-10-13 01:46:39.865618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.386 [2024-10-13 01:46:39.865644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.386 qpair failed and we were unable to recover it. 00:35:54.386 [2024-10-13 01:46:39.865724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.386 [2024-10-13 01:46:39.865772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.386 qpair failed and we were unable to recover it. 00:35:54.386 [2024-10-13 01:46:39.865905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.386 [2024-10-13 01:46:39.865934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.386 qpair failed and we were unable to recover it. 00:35:54.386 [2024-10-13 01:46:39.866089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.386 [2024-10-13 01:46:39.866120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.386 qpair failed and we were unable to recover it. 00:35:54.386 [2024-10-13 01:46:39.866218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.386 [2024-10-13 01:46:39.866244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.386 qpair failed and we were unable to recover it. 00:35:54.386 [2024-10-13 01:46:39.866386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.386 [2024-10-13 01:46:39.866413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.386 qpair failed and we were unable to recover it. 00:35:54.386 [2024-10-13 01:46:39.866548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.386 [2024-10-13 01:46:39.866575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.386 qpair failed and we were unable to recover it. 00:35:54.386 [2024-10-13 01:46:39.866715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.386 [2024-10-13 01:46:39.866741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.386 qpair failed and we were unable to recover it. 00:35:54.386 [2024-10-13 01:46:39.866860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.386 [2024-10-13 01:46:39.866907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.386 qpair failed and we were unable to recover it. 
00:35:54.386 [2024-10-13 01:46:39.867036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.386 [2024-10-13 01:46:39.867066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.386 qpair failed and we were unable to recover it. 00:35:54.386 [2024-10-13 01:46:39.867168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.386 [2024-10-13 01:46:39.867197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.386 qpair failed and we were unable to recover it. 00:35:54.386 [2024-10-13 01:46:39.867300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.386 [2024-10-13 01:46:39.867347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.386 qpair failed and we were unable to recover it. 00:35:54.386 [2024-10-13 01:46:39.867453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.386 [2024-10-13 01:46:39.867489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.386 qpair failed and we were unable to recover it. 00:35:54.386 [2024-10-13 01:46:39.867601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.386 [2024-10-13 01:46:39.867628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.386 qpair failed and we were unable to recover it. 00:35:54.386 [2024-10-13 01:46:39.867740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.386 [2024-10-13 01:46:39.867766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.386 qpair failed and we were unable to recover it. 00:35:54.386 [2024-10-13 01:46:39.867874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.386 [2024-10-13 01:46:39.867900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.386 qpair failed and we were unable to recover it. 00:35:54.386 [2024-10-13 01:46:39.867994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.386 [2024-10-13 01:46:39.868020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.386 qpair failed and we were unable to recover it. 00:35:54.386 [2024-10-13 01:46:39.868141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.386 [2024-10-13 01:46:39.868168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.386 qpair failed and we were unable to recover it. 00:35:54.386 [2024-10-13 01:46:39.868287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.386 [2024-10-13 01:46:39.868314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.386 qpair failed and we were unable to recover it. 
00:35:54.386 [2024-10-13 01:46:39.868430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.386 [2024-10-13 01:46:39.868458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.386 qpair failed and we were unable to recover it. 00:35:54.386 [2024-10-13 01:46:39.868593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.386 [2024-10-13 01:46:39.868619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.386 qpair failed and we were unable to recover it. 00:35:54.386 [2024-10-13 01:46:39.868709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.386 [2024-10-13 01:46:39.868735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.386 qpair failed and we were unable to recover it. 00:35:54.386 [2024-10-13 01:46:39.868870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.386 [2024-10-13 01:46:39.868901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.386 qpair failed and we were unable to recover it. 00:35:54.386 [2024-10-13 01:46:39.869047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.386 [2024-10-13 01:46:39.869073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.386 qpair failed and we were unable to recover it. 00:35:54.386 [2024-10-13 01:46:39.869168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.386 [2024-10-13 01:46:39.869194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.386 qpair failed and we were unable to recover it. 00:35:54.386 [2024-10-13 01:46:39.869331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.386 [2024-10-13 01:46:39.869357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.386 qpair failed and we were unable to recover it. 00:35:54.386 [2024-10-13 01:46:39.869477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.386 [2024-10-13 01:46:39.869505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.386 qpair failed and we were unable to recover it. 00:35:54.386 [2024-10-13 01:46:39.869649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.386 [2024-10-13 01:46:39.869675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.386 qpair failed and we were unable to recover it. 00:35:54.386 [2024-10-13 01:46:39.869787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.386 [2024-10-13 01:46:39.869815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.386 qpair failed and we were unable to recover it. 
00:35:54.386 [2024-10-13 01:46:39.869900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.386 [2024-10-13 01:46:39.869924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.386 qpair failed and we were unable to recover it. 00:35:54.386 [2024-10-13 01:46:39.870005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.386 [2024-10-13 01:46:39.870036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.386 qpair failed and we were unable to recover it. 00:35:54.386 [2024-10-13 01:46:39.870173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.386 [2024-10-13 01:46:39.870199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.386 qpair failed and we were unable to recover it. 00:35:54.386 [2024-10-13 01:46:39.870289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.386 [2024-10-13 01:46:39.870314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.386 qpair failed and we were unable to recover it. 00:35:54.386 [2024-10-13 01:46:39.870404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.386 [2024-10-13 01:46:39.870430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.386 qpair failed and we were unable to recover it. 00:35:54.386 [2024-10-13 01:46:39.870580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.386 [2024-10-13 01:46:39.870607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.386 qpair failed and we were unable to recover it. 00:35:54.386 [2024-10-13 01:46:39.870724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.387 [2024-10-13 01:46:39.870752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.387 qpair failed and we were unable to recover it. 00:35:54.387 [2024-10-13 01:46:39.870870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.387 [2024-10-13 01:46:39.870896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.387 qpair failed and we were unable to recover it. 00:35:54.387 [2024-10-13 01:46:39.870976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.387 [2024-10-13 01:46:39.871002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.387 qpair failed and we were unable to recover it. 00:35:54.387 [2024-10-13 01:46:39.871112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.387 [2024-10-13 01:46:39.871139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.387 qpair failed and we were unable to recover it. 
00:35:54.387 [2024-10-13 01:46:39.871281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.387 [2024-10-13 01:46:39.871308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.387 qpair failed and we were unable to recover it. 00:35:54.387 [2024-10-13 01:46:39.871389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.387 [2024-10-13 01:46:39.871414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.387 qpair failed and we were unable to recover it. 00:35:54.387 [2024-10-13 01:46:39.871500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.387 [2024-10-13 01:46:39.871525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.387 qpair failed and we were unable to recover it. 00:35:54.387 [2024-10-13 01:46:39.871667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.387 [2024-10-13 01:46:39.871694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.387 qpair failed and we were unable to recover it. 00:35:54.387 [2024-10-13 01:46:39.871778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.387 [2024-10-13 01:46:39.871804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.387 qpair failed and we were unable to recover it. 00:35:54.387 [2024-10-13 01:46:39.871897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.387 [2024-10-13 01:46:39.871923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.387 qpair failed and we were unable to recover it. 00:35:54.387 [2024-10-13 01:46:39.872013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.387 [2024-10-13 01:46:39.872038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.387 qpair failed and we were unable to recover it. 00:35:54.387 [2024-10-13 01:46:39.872155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.387 [2024-10-13 01:46:39.872182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.387 qpair failed and we were unable to recover it. 00:35:54.387 [2024-10-13 01:46:39.872292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.387 [2024-10-13 01:46:39.872318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.387 qpair failed and we were unable to recover it. 00:35:54.387 [2024-10-13 01:46:39.872408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.387 [2024-10-13 01:46:39.872435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.387 qpair failed and we were unable to recover it. 
00:35:54.387 [2024-10-13 01:46:39.872569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.387 [2024-10-13 01:46:39.872595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.387 qpair failed and we were unable to recover it. 00:35:54.387 [2024-10-13 01:46:39.872706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.387 [2024-10-13 01:46:39.872732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.387 qpair failed and we were unable to recover it. 00:35:54.387 [2024-10-13 01:46:39.872845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.387 [2024-10-13 01:46:39.872875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.387 qpair failed and we were unable to recover it. 00:35:54.387 [2024-10-13 01:46:39.872968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.387 [2024-10-13 01:46:39.872993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.387 qpair failed and we were unable to recover it. 00:35:54.387 [2024-10-13 01:46:39.873074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.387 [2024-10-13 01:46:39.873100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.387 qpair failed and we were unable to recover it. 00:35:54.387 [2024-10-13 01:46:39.873184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.387 [2024-10-13 01:46:39.873213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.387 qpair failed and we were unable to recover it. 00:35:54.387 [2024-10-13 01:46:39.873329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.387 [2024-10-13 01:46:39.873358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.387 qpair failed and we were unable to recover it. 00:35:54.387 [2024-10-13 01:46:39.873462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.387 [2024-10-13 01:46:39.873498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.387 qpair failed and we were unable to recover it. 00:35:54.387 [2024-10-13 01:46:39.873603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.387 [2024-10-13 01:46:39.873629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.387 qpair failed and we were unable to recover it. 00:35:54.387 [2024-10-13 01:46:39.873751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.387 [2024-10-13 01:46:39.873776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.387 qpair failed and we were unable to recover it. 
00:35:54.387 [2024-10-13 01:46:39.873858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.387 [2024-10-13 01:46:39.873882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.387 qpair failed and we were unable to recover it. 00:35:54.387 [2024-10-13 01:46:39.874027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.387 [2024-10-13 01:46:39.874056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.387 qpair failed and we were unable to recover it. 00:35:54.387 [2024-10-13 01:46:39.874179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.387 [2024-10-13 01:46:39.874205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.387 qpair failed and we were unable to recover it. 00:35:54.387 [2024-10-13 01:46:39.874326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.387 [2024-10-13 01:46:39.874352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.387 qpair failed and we were unable to recover it. 00:35:54.387 [2024-10-13 01:46:39.874497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.387 [2024-10-13 01:46:39.874524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.387 qpair failed and we were unable to recover it. 00:35:54.387 [2024-10-13 01:46:39.874635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.387 [2024-10-13 01:46:39.874665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.387 qpair failed and we were unable to recover it. 00:35:54.387 [2024-10-13 01:46:39.874778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.387 [2024-10-13 01:46:39.874804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.387 qpair failed and we were unable to recover it. 00:35:54.387 [2024-10-13 01:46:39.874887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.387 [2024-10-13 01:46:39.874913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.387 qpair failed and we were unable to recover it. 00:35:54.387 [2024-10-13 01:46:39.875008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.387 [2024-10-13 01:46:39.875034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.387 qpair failed and we were unable to recover it. 00:35:54.387 [2024-10-13 01:46:39.875147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.387 [2024-10-13 01:46:39.875174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.387 qpair failed and we were unable to recover it. 
00:35:54.387 [2024-10-13 01:46:39.875313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.387 [2024-10-13 01:46:39.875339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.388 qpair failed and we were unable to recover it. 00:35:54.388 [2024-10-13 01:46:39.875430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.388 [2024-10-13 01:46:39.875457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.388 qpair failed and we were unable to recover it. 00:35:54.388 [2024-10-13 01:46:39.875629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.388 [2024-10-13 01:46:39.875670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.388 qpair failed and we were unable to recover it. 00:35:54.388 [2024-10-13 01:46:39.875795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.388 [2024-10-13 01:46:39.875825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.388 qpair failed and we were unable to recover it. 00:35:54.388 [2024-10-13 01:46:39.875970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.388 [2024-10-13 01:46:39.876016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.388 qpair failed and we were unable to recover it. 00:35:54.388 [2024-10-13 01:46:39.876152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.388 [2024-10-13 01:46:39.876200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.388 qpair failed and we were unable to recover it. 00:35:54.388 [2024-10-13 01:46:39.876326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.388 [2024-10-13 01:46:39.876354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.388 qpair failed and we were unable to recover it. 00:35:54.388 [2024-10-13 01:46:39.876497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.388 [2024-10-13 01:46:39.876530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.388 qpair failed and we were unable to recover it. 00:35:54.388 [2024-10-13 01:46:39.876633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.388 [2024-10-13 01:46:39.876660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.388 qpair failed and we were unable to recover it. 00:35:54.388 [2024-10-13 01:46:39.876782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.388 [2024-10-13 01:46:39.876810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.388 qpair failed and we were unable to recover it. 
00:35:54.388 [2024-10-13 01:46:39.876923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.388 [2024-10-13 01:46:39.876952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.388 qpair failed and we were unable to recover it. 00:35:54.388 [2024-10-13 01:46:39.877107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.388 [2024-10-13 01:46:39.877135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.388 qpair failed and we were unable to recover it. 00:35:54.388 [2024-10-13 01:46:39.877274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.388 [2024-10-13 01:46:39.877308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.388 qpair failed and we were unable to recover it. 00:35:54.388 [2024-10-13 01:46:39.877408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.388 [2024-10-13 01:46:39.877436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.388 qpair failed and we were unable to recover it. 00:35:54.388 [2024-10-13 01:46:39.877587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.388 [2024-10-13 01:46:39.877617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.388 qpair failed and we were unable to recover it. 00:35:54.388 [2024-10-13 01:46:39.877714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.388 [2024-10-13 01:46:39.877744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.388 qpair failed and we were unable to recover it. 00:35:54.388 [2024-10-13 01:46:39.877873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.388 [2024-10-13 01:46:39.877906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.388 qpair failed and we were unable to recover it. 00:35:54.388 [2024-10-13 01:46:39.878034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.388 [2024-10-13 01:46:39.878066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.388 qpair failed and we were unable to recover it. 00:35:54.388 [2024-10-13 01:46:39.878191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.388 [2024-10-13 01:46:39.878220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.388 qpair failed and we were unable to recover it. 00:35:54.388 [2024-10-13 01:46:39.878313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.388 [2024-10-13 01:46:39.878357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.388 qpair failed and we were unable to recover it. 
00:35:54.388 [2024-10-13 01:46:39.878441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.388 [2024-10-13 01:46:39.878467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.388 qpair failed and we were unable to recover it. 00:35:54.388 [2024-10-13 01:46:39.878595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.388 [2024-10-13 01:46:39.878622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.388 qpair failed and we were unable to recover it. 00:35:54.388 [2024-10-13 01:46:39.878756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.388 [2024-10-13 01:46:39.878789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.388 qpair failed and we were unable to recover it. 00:35:54.388 [2024-10-13 01:46:39.878897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.388 [2024-10-13 01:46:39.878925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.388 qpair failed and we were unable to recover it. 00:35:54.388 [2024-10-13 01:46:39.879024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.388 [2024-10-13 01:46:39.879052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.388 qpair failed and we were unable to recover it. 00:35:54.388 [2024-10-13 01:46:39.879184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.388 [2024-10-13 01:46:39.879213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.388 qpair failed and we were unable to recover it. 00:35:54.388 [2024-10-13 01:46:39.879338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.388 [2024-10-13 01:46:39.879381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.388 qpair failed and we were unable to recover it. 00:35:54.388 [2024-10-13 01:46:39.879508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.388 [2024-10-13 01:46:39.879536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.388 qpair failed and we were unable to recover it. 00:35:54.388 [2024-10-13 01:46:39.879682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.388 [2024-10-13 01:46:39.879709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.388 qpair failed and we were unable to recover it. 00:35:54.388 [2024-10-13 01:46:39.879798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.388 [2024-10-13 01:46:39.879824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.388 qpair failed and we were unable to recover it. 
00:35:54.388 [2024-10-13 01:46:39.879984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.388 [2024-10-13 01:46:39.880013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.388 qpair failed and we were unable to recover it. 00:35:54.388 [2024-10-13 01:46:39.880141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.388 [2024-10-13 01:46:39.880170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.388 qpair failed and we were unable to recover it. 00:35:54.388 [2024-10-13 01:46:39.880272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.388 [2024-10-13 01:46:39.880301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.388 qpair failed and we were unable to recover it. 00:35:54.388 [2024-10-13 01:46:39.880419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.388 [2024-10-13 01:46:39.880449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.388 qpair failed and we were unable to recover it. 00:35:54.388 [2024-10-13 01:46:39.880567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.388 [2024-10-13 01:46:39.880594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.388 qpair failed and we were unable to recover it. 00:35:54.388 [2024-10-13 01:46:39.880678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.388 [2024-10-13 01:46:39.880702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.388 qpair failed and we were unable to recover it. 00:35:54.388 [2024-10-13 01:46:39.880863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.388 [2024-10-13 01:46:39.880893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.388 qpair failed and we were unable to recover it. 00:35:54.388 [2024-10-13 01:46:39.881007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.388 [2024-10-13 01:46:39.881035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.388 qpair failed and we were unable to recover it. 00:35:54.388 [2024-10-13 01:46:39.881215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.388 [2024-10-13 01:46:39.881244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.388 qpair failed and we were unable to recover it. 00:35:54.388 [2024-10-13 01:46:39.881396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.388 [2024-10-13 01:46:39.881425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.388 qpair failed and we were unable to recover it. 
00:35:54.388 [2024-10-13 01:46:39.881544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.388 [2024-10-13 01:46:39.881571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.388 qpair failed and we were unable to recover it. 00:35:54.389 [2024-10-13 01:46:39.881689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.389 [2024-10-13 01:46:39.881729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.389 qpair failed and we were unable to recover it. 00:35:54.389 [2024-10-13 01:46:39.881860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.389 [2024-10-13 01:46:39.881890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.389 qpair failed and we were unable to recover it. 00:35:54.389 [2024-10-13 01:46:39.882049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.389 [2024-10-13 01:46:39.882080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.389 qpair failed and we were unable to recover it. 00:35:54.389 [2024-10-13 01:46:39.882220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.389 [2024-10-13 01:46:39.882249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.389 qpair failed and we were unable to recover it. 00:35:54.389 [2024-10-13 01:46:39.882379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.389 [2024-10-13 01:46:39.882406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.389 qpair failed and we were unable to recover it. 00:35:54.389 [2024-10-13 01:46:39.882549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.389 [2024-10-13 01:46:39.882591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.389 qpair failed and we were unable to recover it. 00:35:54.389 [2024-10-13 01:46:39.882763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.389 [2024-10-13 01:46:39.882809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.389 qpair failed and we were unable to recover it. 00:35:54.389 [2024-10-13 01:46:39.882929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.389 [2024-10-13 01:46:39.882977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.389 qpair failed and we were unable to recover it. 00:35:54.389 [2024-10-13 01:46:39.883119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.389 [2024-10-13 01:46:39.883166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.389 qpair failed and we were unable to recover it. 
00:35:54.389 [2024-10-13 01:46:39.883284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.389 [2024-10-13 01:46:39.883318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.389 qpair failed and we were unable to recover it. 00:35:54.389 [2024-10-13 01:46:39.883426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.389 [2024-10-13 01:46:39.883453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.389 qpair failed and we were unable to recover it. 00:35:54.389 [2024-10-13 01:46:39.883577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.389 [2024-10-13 01:46:39.883604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.389 qpair failed and we were unable to recover it. 00:35:54.389 [2024-10-13 01:46:39.883744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.389 [2024-10-13 01:46:39.883772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.389 qpair failed and we were unable to recover it. 00:35:54.389 [2024-10-13 01:46:39.883868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.389 [2024-10-13 01:46:39.883896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.389 qpair failed and we were unable to recover it. 00:35:54.389 [2024-10-13 01:46:39.884032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.389 [2024-10-13 01:46:39.884062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.389 qpair failed and we were unable to recover it. 00:35:54.389 [2024-10-13 01:46:39.884200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.389 [2024-10-13 01:46:39.884240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.389 qpair failed and we were unable to recover it. 00:35:54.389 [2024-10-13 01:46:39.884336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.389 [2024-10-13 01:46:39.884365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.389 qpair failed and we were unable to recover it. 00:35:54.389 [2024-10-13 01:46:39.884485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.389 [2024-10-13 01:46:39.884530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.389 qpair failed and we were unable to recover it. 00:35:54.389 [2024-10-13 01:46:39.884683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.389 [2024-10-13 01:46:39.884713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.389 qpair failed and we were unable to recover it. 
00:35:54.389 [2024-10-13 01:46:39.884841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.389 [2024-10-13 01:46:39.884871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.389 qpair failed and we were unable to recover it. 00:35:54.389 [2024-10-13 01:46:39.885006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.389 [2024-10-13 01:46:39.885035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.389 qpair failed and we were unable to recover it. 00:35:54.389 [2024-10-13 01:46:39.885158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.389 [2024-10-13 01:46:39.885187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.389 qpair failed and we were unable to recover it. 00:35:54.389 [2024-10-13 01:46:39.885354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.389 [2024-10-13 01:46:39.885381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.389 qpair failed and we were unable to recover it. 00:35:54.389 [2024-10-13 01:46:39.885501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.389 [2024-10-13 01:46:39.885529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.389 qpair failed and we were unable to recover it. 00:35:54.389 [2024-10-13 01:46:39.885646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.389 [2024-10-13 01:46:39.885672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.389 qpair failed and we were unable to recover it. 00:35:54.389 [2024-10-13 01:46:39.885810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.389 [2024-10-13 01:46:39.885837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.389 qpair failed and we were unable to recover it. 00:35:54.389 [2024-10-13 01:46:39.885959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.389 [2024-10-13 01:46:39.885985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.389 qpair failed and we were unable to recover it. 00:35:54.389 [2024-10-13 01:46:39.886091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.389 [2024-10-13 01:46:39.886120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.389 qpair failed and we were unable to recover it. 00:35:54.389 [2024-10-13 01:46:39.886242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.389 [2024-10-13 01:46:39.886277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.389 qpair failed and we were unable to recover it. 
00:35:54.389 [2024-10-13 01:46:39.886415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.389 [2024-10-13 01:46:39.886447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.389 qpair failed and we were unable to recover it. 00:35:54.389 [2024-10-13 01:46:39.886577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.389 [2024-10-13 01:46:39.886603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.389 qpair failed and we were unable to recover it. 00:35:54.389 [2024-10-13 01:46:39.886714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.389 [2024-10-13 01:46:39.886741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.389 qpair failed and we were unable to recover it. 00:35:54.389 [2024-10-13 01:46:39.886826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.389 [2024-10-13 01:46:39.886854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.389 qpair failed and we were unable to recover it. 00:35:54.389 [2024-10-13 01:46:39.886966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.389 [2024-10-13 01:46:39.886992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.389 qpair failed and we were unable to recover it. 00:35:54.389 [2024-10-13 01:46:39.887132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.389 [2024-10-13 01:46:39.887158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.389 qpair failed and we were unable to recover it. 00:35:54.389 [2024-10-13 01:46:39.887329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.389 [2024-10-13 01:46:39.887373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.389 qpair failed and we were unable to recover it. 00:35:54.389 [2024-10-13 01:46:39.887520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.389 [2024-10-13 01:46:39.887556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.389 qpair failed and we were unable to recover it. 00:35:54.389 [2024-10-13 01:46:39.887684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.389 [2024-10-13 01:46:39.887713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.389 qpair failed and we were unable to recover it. 00:35:54.389 [2024-10-13 01:46:39.887830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.389 [2024-10-13 01:46:39.887857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.389 qpair failed and we were unable to recover it. 
00:35:54.389 [2024-10-13 01:46:39.887980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.389 [2024-10-13 01:46:39.888010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.389 qpair failed and we were unable to recover it. 00:35:54.389 [2024-10-13 01:46:39.888117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.390 [2024-10-13 01:46:39.888146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-10-13 01:46:39.888297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.390 [2024-10-13 01:46:39.888326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-10-13 01:46:39.888431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.390 [2024-10-13 01:46:39.888461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-10-13 01:46:39.888629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.390 [2024-10-13 01:46:39.888656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-10-13 01:46:39.888784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.390 [2024-10-13 01:46:39.888814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-10-13 01:46:39.888943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.390 [2024-10-13 01:46:39.888973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-10-13 01:46:39.889100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.390 [2024-10-13 01:46:39.889129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-10-13 01:46:39.889281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.390 [2024-10-13 01:46:39.889311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-10-13 01:46:39.889414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.390 [2024-10-13 01:46:39.889445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.390 qpair failed and we were unable to recover it. 
00:35:54.390 [2024-10-13 01:46:39.889608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.390 [2024-10-13 01:46:39.889634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-10-13 01:46:39.889723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.390 [2024-10-13 01:46:39.889748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-10-13 01:46:39.889861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.390 [2024-10-13 01:46:39.889887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-10-13 01:46:39.889979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.390 [2024-10-13 01:46:39.890005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-10-13 01:46:39.890098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.390 [2024-10-13 01:46:39.890125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-10-13 01:46:39.890243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.390 [2024-10-13 01:46:39.890270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-10-13 01:46:39.890357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.390 [2024-10-13 01:46:39.890388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-10-13 01:46:39.890476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.390 [2024-10-13 01:46:39.890502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-10-13 01:46:39.890594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.390 [2024-10-13 01:46:39.890625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-10-13 01:46:39.890789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.390 [2024-10-13 01:46:39.890818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.390 qpair failed and we were unable to recover it. 
00:35:54.390 [2024-10-13 01:46:39.890908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.390 [2024-10-13 01:46:39.890938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-10-13 01:46:39.891034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.390 [2024-10-13 01:46:39.891064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-10-13 01:46:39.891191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.390 [2024-10-13 01:46:39.891219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-10-13 01:46:39.891320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.673 [2024-10-13 01:46:39.891350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.673 qpair failed and we were unable to recover it. 00:35:54.673 [2024-10-13 01:46:39.891503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.673 [2024-10-13 01:46:39.891543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.673 qpair failed and we were unable to recover it. 00:35:54.673 [2024-10-13 01:46:39.891637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.673 [2024-10-13 01:46:39.891671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.673 qpair failed and we were unable to recover it. 00:35:54.673 [2024-10-13 01:46:39.891814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.673 [2024-10-13 01:46:39.891860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.673 qpair failed and we were unable to recover it. 00:35:54.673 [2024-10-13 01:46:39.891991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.673 [2024-10-13 01:46:39.892037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.673 qpair failed and we were unable to recover it. 00:35:54.673 [2024-10-13 01:46:39.892131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.673 [2024-10-13 01:46:39.892159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.673 qpair failed and we were unable to recover it. 00:35:54.673 [2024-10-13 01:46:39.892279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.673 [2024-10-13 01:46:39.892307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.673 qpair failed and we were unable to recover it. 
00:35:54.673 [2024-10-13 01:46:39.892406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.673 [2024-10-13 01:46:39.892439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.673 qpair failed and we were unable to recover it. 00:35:54.673 [2024-10-13 01:46:39.892495] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d52ab0 (9): Bad file descriptor 00:35:54.673 [2024-10-13 01:46:39.892651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.673 [2024-10-13 01:46:39.892679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.673 qpair failed and we were unable to recover it. 00:35:54.673 [2024-10-13 01:46:39.892768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.673 [2024-10-13 01:46:39.892795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.673 qpair failed and we were unable to recover it. 00:35:54.673 [2024-10-13 01:46:39.892889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.673 [2024-10-13 01:46:39.892916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.673 qpair failed and we were unable to recover it. 00:35:54.673 [2024-10-13 01:46:39.893043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.673 [2024-10-13 01:46:39.893085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.673 qpair failed and we were unable to recover it. 00:35:54.673 [2024-10-13 01:46:39.893223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.673 [2024-10-13 01:46:39.893253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.673 qpair failed and we were unable to recover it. 00:35:54.673 [2024-10-13 01:46:39.893352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.673 [2024-10-13 01:46:39.893381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.673 qpair failed and we were unable to recover it. 00:35:54.673 [2024-10-13 01:46:39.893512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.673 [2024-10-13 01:46:39.893539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.673 qpair failed and we were unable to recover it. 00:35:54.673 [2024-10-13 01:46:39.893681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.673 [2024-10-13 01:46:39.893710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.673 qpair failed and we were unable to recover it. 
00:35:54.673 [2024-10-13 01:46:39.893844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.673 [2024-10-13 01:46:39.893873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.673 qpair failed and we were unable to recover it. 00:35:54.673 [2024-10-13 01:46:39.893986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.673 [2024-10-13 01:46:39.894015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.673 qpair failed and we were unable to recover it. 00:35:54.673 [2024-10-13 01:46:39.894113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.673 [2024-10-13 01:46:39.894142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.673 qpair failed and we were unable to recover it. 00:35:54.673 [2024-10-13 01:46:39.894233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.673 [2024-10-13 01:46:39.894268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.673 qpair failed and we were unable to recover it. 00:35:54.673 [2024-10-13 01:46:39.894409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.673 [2024-10-13 01:46:39.894439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.673 qpair failed and we were unable to recover it. 00:35:54.673 [2024-10-13 01:46:39.894545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.673 [2024-10-13 01:46:39.894574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.673 qpair failed and we were unable to recover it. 00:35:54.673 [2024-10-13 01:46:39.894682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.673 [2024-10-13 01:46:39.894713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.673 qpair failed and we were unable to recover it. 00:35:54.673 [2024-10-13 01:46:39.894840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.673 [2024-10-13 01:46:39.894884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.673 qpair failed and we were unable to recover it. 00:35:54.673 [2024-10-13 01:46:39.895054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.673 [2024-10-13 01:46:39.895104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.673 qpair failed and we were unable to recover it. 00:35:54.673 [2024-10-13 01:46:39.895192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.673 [2024-10-13 01:46:39.895221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.673 qpair failed and we were unable to recover it. 
00:35:54.673 [2024-10-13 01:46:39.895362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.674 [2024-10-13 01:46:39.895390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.674 qpair failed and we were unable to recover it. 00:35:54.674 [2024-10-13 01:46:39.895484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.674 [2024-10-13 01:46:39.895511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.674 qpair failed and we were unable to recover it. 00:35:54.674 [2024-10-13 01:46:39.895613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.674 [2024-10-13 01:46:39.895648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.674 qpair failed and we were unable to recover it. 00:35:54.674 [2024-10-13 01:46:39.895749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.674 [2024-10-13 01:46:39.895777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.674 qpair failed and we were unable to recover it. 00:35:54.674 [2024-10-13 01:46:39.895860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.674 [2024-10-13 01:46:39.895903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.674 qpair failed and we were unable to recover it. 00:35:54.674 [2024-10-13 01:46:39.896026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.674 [2024-10-13 01:46:39.896056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.674 qpair failed and we were unable to recover it. 00:35:54.674 [2024-10-13 01:46:39.896186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.674 [2024-10-13 01:46:39.896215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.674 qpair failed and we were unable to recover it. 00:35:54.674 [2024-10-13 01:46:39.896346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.674 [2024-10-13 01:46:39.896375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.674 qpair failed and we were unable to recover it. 00:35:54.674 [2024-10-13 01:46:39.896494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.674 [2024-10-13 01:46:39.896540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.674 qpair failed and we were unable to recover it. 00:35:54.674 [2024-10-13 01:46:39.896665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.674 [2024-10-13 01:46:39.896695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.674 qpair failed and we were unable to recover it. 
00:35:54.674 [2024-10-13 01:46:39.896824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.674 [2024-10-13 01:46:39.896853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.674 qpair failed and we were unable to recover it. 00:35:54.674 [2024-10-13 01:46:39.896950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.674 [2024-10-13 01:46:39.896980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.674 qpair failed and we were unable to recover it. 00:35:54.674 [2024-10-13 01:46:39.897076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.674 [2024-10-13 01:46:39.897109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.674 qpair failed and we were unable to recover it. 00:35:54.674 [2024-10-13 01:46:39.897211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.674 [2024-10-13 01:46:39.897241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.674 qpair failed and we were unable to recover it. 00:35:54.674 [2024-10-13 01:46:39.897351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.674 [2024-10-13 01:46:39.897381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.674 qpair failed and we were unable to recover it. 00:35:54.674 [2024-10-13 01:46:39.897495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.674 [2024-10-13 01:46:39.897522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.674 qpair failed and we were unable to recover it. 00:35:54.674 [2024-10-13 01:46:39.897639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.674 [2024-10-13 01:46:39.897666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.674 qpair failed and we were unable to recover it. 00:35:54.674 [2024-10-13 01:46:39.897771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.674 [2024-10-13 01:46:39.897815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.674 qpair failed and we were unable to recover it. 00:35:54.674 [2024-10-13 01:46:39.897922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.674 [2024-10-13 01:46:39.897951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.674 qpair failed and we were unable to recover it. 00:35:54.674 [2024-10-13 01:46:39.898047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.674 [2024-10-13 01:46:39.898076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.674 qpair failed and we were unable to recover it. 
00:35:54.674 [2024-10-13 01:46:39.898172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.674 [2024-10-13 01:46:39.898210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.674 qpair failed and we were unable to recover it. 00:35:54.674 [2024-10-13 01:46:39.898342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.674 [2024-10-13 01:46:39.898373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.674 qpair failed and we were unable to recover it. 00:35:54.674 [2024-10-13 01:46:39.898478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.674 [2024-10-13 01:46:39.898524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.674 qpair failed and we were unable to recover it. 00:35:54.674 [2024-10-13 01:46:39.898606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.674 [2024-10-13 01:46:39.898634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.674 qpair failed and we were unable to recover it. 00:35:54.674 [2024-10-13 01:46:39.898766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.674 [2024-10-13 01:46:39.898795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.674 qpair failed and we were unable to recover it. 00:35:54.674 [2024-10-13 01:46:39.898887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.674 [2024-10-13 01:46:39.898916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.674 qpair failed and we were unable to recover it. 00:35:54.674 [2024-10-13 01:46:39.899039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.674 [2024-10-13 01:46:39.899068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.674 qpair failed and we were unable to recover it. 00:35:54.674 [2024-10-13 01:46:39.899168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.674 [2024-10-13 01:46:39.899198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.674 qpair failed and we were unable to recover it. 00:35:54.674 [2024-10-13 01:46:39.899294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.674 [2024-10-13 01:46:39.899323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.674 qpair failed and we were unable to recover it. 00:35:54.674 [2024-10-13 01:46:39.899458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.674 [2024-10-13 01:46:39.899492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.674 qpair failed and we were unable to recover it. 
00:35:54.674 [2024-10-13 01:46:39.899582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.674 [2024-10-13 01:46:39.899608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.674 qpair failed and we were unable to recover it. 00:35:54.674 [2024-10-13 01:46:39.899694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.674 [2024-10-13 01:46:39.899720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.674 qpair failed and we were unable to recover it. 00:35:54.674 [2024-10-13 01:46:39.899827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.674 [2024-10-13 01:46:39.899856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.674 qpair failed and we were unable to recover it. 00:35:54.674 [2024-10-13 01:46:39.899954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.674 [2024-10-13 01:46:39.899986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.674 qpair failed and we were unable to recover it. 00:35:54.674 [2024-10-13 01:46:39.900111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.674 [2024-10-13 01:46:39.900144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.674 qpair failed and we were unable to recover it. 00:35:54.674 [2024-10-13 01:46:39.900255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.674 [2024-10-13 01:46:39.900299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.674 qpair failed and we were unable to recover it. 00:35:54.674 [2024-10-13 01:46:39.900426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.674 [2024-10-13 01:46:39.900455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.674 qpair failed and we were unable to recover it. 00:35:54.674 [2024-10-13 01:46:39.900633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.674 [2024-10-13 01:46:39.900659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.674 qpair failed and we were unable to recover it. 00:35:54.674 [2024-10-13 01:46:39.900767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.674 [2024-10-13 01:46:39.900797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.674 qpair failed and we were unable to recover it. 00:35:54.674 [2024-10-13 01:46:39.900926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-13 01:46:39.900956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 
00:35:54.675 [2024-10-13 01:46:39.901111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-13 01:46:39.901141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-13 01:46:39.901302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-13 01:46:39.901351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-13 01:46:39.901468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-13 01:46:39.901508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-13 01:46:39.901623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-13 01:46:39.901651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-13 01:46:39.901785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-13 01:46:39.901832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-13 01:46:39.901918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-13 01:46:39.901953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-13 01:46:39.902054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-13 01:46:39.902081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-13 01:46:39.902198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-13 01:46:39.902225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-13 01:46:39.902319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-13 01:46:39.902348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-13 01:46:39.902475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-13 01:46:39.902503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 
00:35:54.675 [2024-10-13 01:46:39.902621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-13 01:46:39.902647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-13 01:46:39.902776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-13 01:46:39.902802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-13 01:46:39.902915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-13 01:46:39.902960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-13 01:46:39.903091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-13 01:46:39.903120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-13 01:46:39.903212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-13 01:46:39.903258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-13 01:46:39.903376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-13 01:46:39.903403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-13 01:46:39.903490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-13 01:46:39.903516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-13 01:46:39.903604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-13 01:46:39.903645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-13 01:46:39.903805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-13 01:46:39.903835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-13 01:46:39.903936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-13 01:46:39.903966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 
00:35:54.675 [2024-10-13 01:46:39.904091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-13 01:46:39.904120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-13 01:46:39.904250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-13 01:46:39.904280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-13 01:46:39.904423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-13 01:46:39.904453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-13 01:46:39.904610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-13 01:46:39.904640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-13 01:46:39.904760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-13 01:46:39.904786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-13 01:46:39.904876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-13 01:46:39.904904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-13 01:46:39.905060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-13 01:46:39.905118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-13 01:46:39.905267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-13 01:46:39.905297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-13 01:46:39.905405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-13 01:46:39.905435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-13 01:46:39.905582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-13 01:46:39.905611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 
00:35:54.675 [2024-10-13 01:46:39.905715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-13 01:46:39.905746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-13 01:46:39.905895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-13 01:46:39.905926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-13 01:46:39.906076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-13 01:46:39.906121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-13 01:46:39.906232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-13 01:46:39.906260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-13 01:46:39.906400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-13 01:46:39.906429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-13 01:46:39.906557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-13 01:46:39.906603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-13 01:46:39.906737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-13 01:46:39.906788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.675 qpair failed and we were unable to recover it. 00:35:54.675 [2024-10-13 01:46:39.906941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.675 [2024-10-13 01:46:39.906968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-13 01:46:39.907087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-13 01:46:39.907120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-13 01:46:39.907241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-13 01:46:39.907270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 
00:35:54.676 [2024-10-13 01:46:39.907384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-13 01:46:39.907412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-13 01:46:39.907512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-13 01:46:39.907541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-13 01:46:39.907656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-13 01:46:39.907683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-13 01:46:39.907758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-13 01:46:39.907804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-13 01:46:39.907890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-13 01:46:39.907920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-13 01:46:39.908021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-13 01:46:39.908050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-13 01:46:39.908200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-13 01:46:39.908228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-13 01:46:39.908358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-13 01:46:39.908396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-13 01:46:39.908540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-13 01:46:39.908568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-13 01:46:39.908690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-13 01:46:39.908719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 
00:35:54.676 [2024-10-13 01:46:39.908818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-13 01:46:39.908848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-13 01:46:39.908972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-13 01:46:39.909002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-13 01:46:39.909131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-13 01:46:39.909161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-13 01:46:39.909289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-13 01:46:39.909320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-13 01:46:39.909462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-13 01:46:39.909495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-13 01:46:39.909614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-13 01:46:39.909640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-13 01:46:39.909728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-13 01:46:39.909753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-13 01:46:39.909895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-13 01:46:39.909925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-13 01:46:39.910053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-13 01:46:39.910082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-13 01:46:39.910201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-13 01:46:39.910229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 
00:35:54.676 [2024-10-13 01:46:39.910335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-13 01:46:39.910369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-13 01:46:39.910506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-13 01:46:39.910533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-13 01:46:39.910648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-13 01:46:39.910675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-13 01:46:39.910759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-13 01:46:39.910804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-13 01:46:39.910931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-13 01:46:39.910960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-13 01:46:39.911048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-13 01:46:39.911078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-13 01:46:39.911201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-13 01:46:39.911230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-13 01:46:39.911359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-13 01:46:39.911385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-13 01:46:39.911503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-13 01:46:39.911530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-13 01:46:39.911641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-13 01:46:39.911668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 
00:35:54.676 [2024-10-13 01:46:39.911809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-13 01:46:39.911837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-13 01:46:39.911939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-13 01:46:39.911970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-13 01:46:39.912122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-13 01:46:39.912151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-13 01:46:39.912303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-13 01:46:39.912332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-13 01:46:39.912454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-13 01:46:39.912516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-13 01:46:39.912632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-13 01:46:39.912658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-13 01:46:39.912800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.676 [2024-10-13 01:46:39.912830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.676 qpair failed and we were unable to recover it. 00:35:54.676 [2024-10-13 01:46:39.912953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-13 01:46:39.912983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.677 qpair failed and we were unable to recover it. 00:35:54.677 [2024-10-13 01:46:39.913072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-13 01:46:39.913102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.677 qpair failed and we were unable to recover it. 00:35:54.677 [2024-10-13 01:46:39.913269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-13 01:46:39.913326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.677 qpair failed and we were unable to recover it. 
00:35:54.677 [2024-10-13 01:46:39.913490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-13 01:46:39.913523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.677 qpair failed and we were unable to recover it. 00:35:54.677 [2024-10-13 01:46:39.913644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-13 01:46:39.913672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.677 qpair failed and we were unable to recover it. 00:35:54.677 [2024-10-13 01:46:39.913834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-13 01:46:39.913865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.677 qpair failed and we were unable to recover it. 00:35:54.677 [2024-10-13 01:46:39.914052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-13 01:46:39.914083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.677 qpair failed and we were unable to recover it. 00:35:54.677 [2024-10-13 01:46:39.914180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-13 01:46:39.914222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.677 qpair failed and we were unable to recover it. 00:35:54.677 [2024-10-13 01:46:39.914358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-13 01:46:39.914398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.677 qpair failed and we were unable to recover it. 00:35:54.677 [2024-10-13 01:46:39.914534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-13 01:46:39.914561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.677 qpair failed and we were unable to recover it. 00:35:54.677 [2024-10-13 01:46:39.914642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-13 01:46:39.914667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.677 qpair failed and we were unable to recover it. 00:35:54.677 [2024-10-13 01:46:39.914832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-13 01:46:39.914862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.677 qpair failed and we were unable to recover it. 00:35:54.677 [2024-10-13 01:46:39.914986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-13 01:46:39.915013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.677 qpair failed and we were unable to recover it. 
00:35:54.677 [2024-10-13 01:46:39.915118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-13 01:46:39.915146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.677 qpair failed and we were unable to recover it. 00:35:54.677 [2024-10-13 01:46:39.915276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-13 01:46:39.915307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.677 qpair failed and we were unable to recover it. 00:35:54.677 [2024-10-13 01:46:39.915439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-13 01:46:39.915466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.677 qpair failed and we were unable to recover it. 00:35:54.677 [2024-10-13 01:46:39.915564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-13 01:46:39.915590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.677 qpair failed and we were unable to recover it. 00:35:54.677 [2024-10-13 01:46:39.915684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-13 01:46:39.915710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.677 qpair failed and we were unable to recover it. 00:35:54.677 [2024-10-13 01:46:39.915873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-13 01:46:39.915903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.677 qpair failed and we were unable to recover it. 00:35:54.677 [2024-10-13 01:46:39.916025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-13 01:46:39.916054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.677 qpair failed and we were unable to recover it. 00:35:54.677 [2024-10-13 01:46:39.916186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-13 01:46:39.916216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.677 qpair failed and we were unable to recover it. 00:35:54.677 [2024-10-13 01:46:39.916333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-13 01:46:39.916378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.677 qpair failed and we were unable to recover it. 00:35:54.677 [2024-10-13 01:46:39.916546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-13 01:46:39.916576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.677 qpair failed and we were unable to recover it. 
00:35:54.677 [2024-10-13 01:46:39.916721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-13 01:46:39.916774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.677 qpair failed and we were unable to recover it. 00:35:54.677 [2024-10-13 01:46:39.916919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-13 01:46:39.916965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.677 qpair failed and we were unable to recover it. 00:35:54.677 [2024-10-13 01:46:39.917052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-13 01:46:39.917079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.677 qpair failed and we were unable to recover it. 00:35:54.677 [2024-10-13 01:46:39.917202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-13 01:46:39.917230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.677 qpair failed and we were unable to recover it. 00:35:54.677 [2024-10-13 01:46:39.917326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-13 01:46:39.917354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.677 qpair failed and we were unable to recover it. 00:35:54.677 [2024-10-13 01:46:39.917486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-13 01:46:39.917513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.677 qpair failed and we were unable to recover it. 00:35:54.677 [2024-10-13 01:46:39.917652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-13 01:46:39.917680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.677 qpair failed and we were unable to recover it. 00:35:54.677 [2024-10-13 01:46:39.917803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-13 01:46:39.917835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.677 qpair failed and we were unable to recover it. 00:35:54.677 [2024-10-13 01:46:39.917939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-13 01:46:39.917968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.677 qpair failed and we were unable to recover it. 00:35:54.677 [2024-10-13 01:46:39.918116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.677 [2024-10-13 01:46:39.918146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.678 qpair failed and we were unable to recover it. 
00:35:54.678 [2024-10-13 01:46:39.918272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.678 [2024-10-13 01:46:39.918302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.678 qpair failed and we were unable to recover it. 00:35:54.678 [2024-10-13 01:46:39.918432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.678 [2024-10-13 01:46:39.918462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.678 qpair failed and we were unable to recover it. 00:35:54.678 [2024-10-13 01:46:39.918603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.678 [2024-10-13 01:46:39.918630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.678 qpair failed and we were unable to recover it. 00:35:54.678 [2024-10-13 01:46:39.918741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.678 [2024-10-13 01:46:39.918767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.678 qpair failed and we were unable to recover it. 00:35:54.678 [2024-10-13 01:46:39.918854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.678 [2024-10-13 01:46:39.918898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.678 qpair failed and we were unable to recover it. 00:35:54.678 [2024-10-13 01:46:39.919033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.678 [2024-10-13 01:46:39.919063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.678 qpair failed and we were unable to recover it. 00:35:54.678 [2024-10-13 01:46:39.919160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.678 [2024-10-13 01:46:39.919191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.678 qpair failed and we were unable to recover it. 00:35:54.678 [2024-10-13 01:46:39.919328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.678 [2024-10-13 01:46:39.919358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.678 qpair failed and we were unable to recover it. 00:35:54.678 [2024-10-13 01:46:39.919481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.678 [2024-10-13 01:46:39.919515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.678 qpair failed and we were unable to recover it. 00:35:54.678 [2024-10-13 01:46:39.919642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.678 [2024-10-13 01:46:39.919669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.678 qpair failed and we were unable to recover it. 
00:35:54.678 [2024-10-13 01:46:39.919757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.678 [2024-10-13 01:46:39.919782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.678 qpair failed and we were unable to recover it. 00:35:54.678 [2024-10-13 01:46:39.919914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.678 [2024-10-13 01:46:39.919944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.678 qpair failed and we were unable to recover it. 00:35:54.678 [2024-10-13 01:46:39.920116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.678 [2024-10-13 01:46:39.920145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.678 qpair failed and we were unable to recover it. 00:35:54.678 [2024-10-13 01:46:39.920297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.678 [2024-10-13 01:46:39.920326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.678 qpair failed and we were unable to recover it. 00:35:54.678 [2024-10-13 01:46:39.920412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.678 [2024-10-13 01:46:39.920441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.678 qpair failed and we were unable to recover it. 00:35:54.678 [2024-10-13 01:46:39.920631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.678 [2024-10-13 01:46:39.920671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.678 qpair failed and we were unable to recover it. 00:35:54.678 [2024-10-13 01:46:39.920818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.678 [2024-10-13 01:46:39.920852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.678 qpair failed and we were unable to recover it. 00:35:54.678 [2024-10-13 01:46:39.921030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.678 [2024-10-13 01:46:39.921077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.678 qpair failed and we were unable to recover it. 00:35:54.678 [2024-10-13 01:46:39.921222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.678 [2024-10-13 01:46:39.921275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.678 qpair failed and we were unable to recover it. 00:35:54.678 [2024-10-13 01:46:39.921399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.678 [2024-10-13 01:46:39.921426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.678 qpair failed and we were unable to recover it. 
00:35:54.678 [2024-10-13 01:46:39.921547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.678 [2024-10-13 01:46:39.921593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.678 qpair failed and we were unable to recover it. 00:35:54.678 [2024-10-13 01:46:39.921725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.678 [2024-10-13 01:46:39.921756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.678 qpair failed and we were unable to recover it. 00:35:54.678 [2024-10-13 01:46:39.921861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.678 [2024-10-13 01:46:39.921888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.678 qpair failed and we were unable to recover it. 00:35:54.678 [2024-10-13 01:46:39.922014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.678 [2024-10-13 01:46:39.922046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.678 qpair failed and we were unable to recover it. 00:35:54.678 [2024-10-13 01:46:39.922173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.678 [2024-10-13 01:46:39.922202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.678 qpair failed and we were unable to recover it. 00:35:54.678 [2024-10-13 01:46:39.922313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.678 [2024-10-13 01:46:39.922338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.678 qpair failed and we were unable to recover it. 00:35:54.678 [2024-10-13 01:46:39.922454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.678 [2024-10-13 01:46:39.922494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.678 qpair failed and we were unable to recover it. 00:35:54.678 [2024-10-13 01:46:39.922588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.678 [2024-10-13 01:46:39.922616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.678 qpair failed and we were unable to recover it. 00:35:54.678 [2024-10-13 01:46:39.922709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.678 [2024-10-13 01:46:39.922736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.678 qpair failed and we were unable to recover it. 00:35:54.678 [2024-10-13 01:46:39.922825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.678 [2024-10-13 01:46:39.922859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.678 qpair failed and we were unable to recover it. 
00:35:54.678 [2024-10-13 01:46:39.923008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.678 [2024-10-13 01:46:39.923035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.678 qpair failed and we were unable to recover it. 00:35:54.678 [2024-10-13 01:46:39.923123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.678 [2024-10-13 01:46:39.923154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.678 qpair failed and we were unable to recover it. 00:35:54.678 [2024-10-13 01:46:39.923269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.678 [2024-10-13 01:46:39.923310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.678 qpair failed and we were unable to recover it. 00:35:54.678 [2024-10-13 01:46:39.923438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.678 [2024-10-13 01:46:39.923466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.678 qpair failed and we were unable to recover it. 00:35:54.678 [2024-10-13 01:46:39.923591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.678 [2024-10-13 01:46:39.923634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.678 qpair failed and we were unable to recover it. 00:35:54.678 [2024-10-13 01:46:39.923759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.678 [2024-10-13 01:46:39.923789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.678 qpair failed and we were unable to recover it. 00:35:54.678 [2024-10-13 01:46:39.923890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.678 [2024-10-13 01:46:39.923920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.678 qpair failed and we were unable to recover it. 00:35:54.678 [2024-10-13 01:46:39.924015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.678 [2024-10-13 01:46:39.924045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.678 qpair failed and we were unable to recover it. 00:35:54.679 [2024-10-13 01:46:39.924147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.679 [2024-10-13 01:46:39.924173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.679 qpair failed and we were unable to recover it. 00:35:54.679 [2024-10-13 01:46:39.924297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.679 [2024-10-13 01:46:39.924328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.679 qpair failed and we were unable to recover it. 
00:35:54.679 [2024-10-13 01:46:39.924418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.679 [2024-10-13 01:46:39.924442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.679 qpair failed and we were unable to recover it. 00:35:54.679 [2024-10-13 01:46:39.924559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.679 [2024-10-13 01:46:39.924608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.679 qpair failed and we were unable to recover it. 00:35:54.679 [2024-10-13 01:46:39.924712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.679 [2024-10-13 01:46:39.924747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.679 qpair failed and we were unable to recover it. 00:35:54.679 [2024-10-13 01:46:39.924898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.679 [2024-10-13 01:46:39.924947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.679 qpair failed and we were unable to recover it. 00:35:54.679 [2024-10-13 01:46:39.925062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.679 [2024-10-13 01:46:39.925106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.679 qpair failed and we were unable to recover it. 00:35:54.679 [2024-10-13 01:46:39.925241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.679 [2024-10-13 01:46:39.925269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.679 qpair failed and we were unable to recover it. 00:35:54.679 [2024-10-13 01:46:39.925389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.679 [2024-10-13 01:46:39.925416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.679 qpair failed and we were unable to recover it. 00:35:54.679 [2024-10-13 01:46:39.925508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.679 [2024-10-13 01:46:39.925536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.679 qpair failed and we were unable to recover it. 00:35:54.679 [2024-10-13 01:46:39.925665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.679 [2024-10-13 01:46:39.925694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.679 qpair failed and we were unable to recover it. 00:35:54.679 [2024-10-13 01:46:39.925831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.679 [2024-10-13 01:46:39.925872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.679 qpair failed and we were unable to recover it. 
00:35:54.679 [2024-10-13 01:46:39.926021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.679 [2024-10-13 01:46:39.926061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.679 qpair failed and we were unable to recover it. 00:35:54.679 [2024-10-13 01:46:39.926217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.679 [2024-10-13 01:46:39.926245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.679 qpair failed and we were unable to recover it. 00:35:54.679 [2024-10-13 01:46:39.926387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.679 [2024-10-13 01:46:39.926415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.679 qpair failed and we were unable to recover it. 00:35:54.679 [2024-10-13 01:46:39.926584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.679 [2024-10-13 01:46:39.926631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.679 qpair failed and we were unable to recover it. 00:35:54.679 [2024-10-13 01:46:39.926768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.679 [2024-10-13 01:46:39.926815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.679 qpair failed and we were unable to recover it. 00:35:54.679 [2024-10-13 01:46:39.926915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.679 [2024-10-13 01:46:39.926943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.679 qpair failed and we were unable to recover it. 00:35:54.679 [2024-10-13 01:46:39.927088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.679 [2024-10-13 01:46:39.927115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.679 qpair failed and we were unable to recover it. 00:35:54.679 [2024-10-13 01:46:39.927235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.679 [2024-10-13 01:46:39.927263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.679 qpair failed and we were unable to recover it. 00:35:54.679 [2024-10-13 01:46:39.927394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.679 [2024-10-13 01:46:39.927422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.679 qpair failed and we were unable to recover it. 00:35:54.679 [2024-10-13 01:46:39.927563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.679 [2024-10-13 01:46:39.927594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.679 qpair failed and we were unable to recover it. 
00:35:54.679 [2024-10-13 01:46:39.927722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.679 [2024-10-13 01:46:39.927751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.679 qpair failed and we were unable to recover it. 00:35:54.679 [2024-10-13 01:46:39.927842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.679 [2024-10-13 01:46:39.927871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.679 qpair failed and we were unable to recover it. 00:35:54.679 [2024-10-13 01:46:39.927959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.679 [2024-10-13 01:46:39.927988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.679 qpair failed and we were unable to recover it. 00:35:54.679 [2024-10-13 01:46:39.928086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.679 [2024-10-13 01:46:39.928116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.679 qpair failed and we were unable to recover it. 00:35:54.679 [2024-10-13 01:46:39.928226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.679 [2024-10-13 01:46:39.928256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.679 qpair failed and we were unable to recover it. 00:35:54.679 [2024-10-13 01:46:39.928373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.679 [2024-10-13 01:46:39.928418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.679 qpair failed and we were unable to recover it. 00:35:54.679 [2024-10-13 01:46:39.928541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.679 [2024-10-13 01:46:39.928585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.679 qpair failed and we were unable to recover it. 00:35:54.679 [2024-10-13 01:46:39.928684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.679 [2024-10-13 01:46:39.928714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.679 qpair failed and we were unable to recover it. 00:35:54.679 [2024-10-13 01:46:39.928864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.679 [2024-10-13 01:46:39.928893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.679 qpair failed and we were unable to recover it. 00:35:54.679 [2024-10-13 01:46:39.929056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.679 [2024-10-13 01:46:39.929108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.679 qpair failed and we were unable to recover it. 
00:35:54.679 [2024-10-13 01:46:39.929322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.679 [2024-10-13 01:46:39.929381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.679 qpair failed and we were unable to recover it. 00:35:54.679 [2024-10-13 01:46:39.929511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.679 [2024-10-13 01:46:39.929544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.679 qpair failed and we were unable to recover it. 00:35:54.679 [2024-10-13 01:46:39.929662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.679 [2024-10-13 01:46:39.929689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.679 qpair failed and we were unable to recover it. 00:35:54.679 [2024-10-13 01:46:39.929776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-13 01:46:39.929820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-13 01:46:39.929982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-13 01:46:39.930036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-13 01:46:39.930244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-13 01:46:39.930296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-13 01:46:39.930422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-13 01:46:39.930451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-13 01:46:39.930566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-13 01:46:39.930593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-13 01:46:39.930708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-13 01:46:39.930745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-13 01:46:39.930838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-13 01:46:39.930881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 
00:35:54.680 [2024-10-13 01:46:39.930970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-13 01:46:39.931000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-13 01:46:39.931145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-13 01:46:39.931175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-13 01:46:39.931314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-13 01:46:39.931347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-13 01:46:39.931464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-13 01:46:39.931504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-13 01:46:39.931660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-13 01:46:39.931689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-13 01:46:39.931828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-13 01:46:39.931873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-13 01:46:39.932017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-13 01:46:39.932048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-13 01:46:39.932228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-13 01:46:39.932286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-13 01:46:39.932371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-13 01:46:39.932404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-13 01:46:39.932550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-13 01:46:39.932579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 
00:35:54.680 [2024-10-13 01:46:39.932711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-13 01:46:39.932756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-13 01:46:39.932909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-13 01:46:39.932941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-13 01:46:39.933077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-13 01:46:39.933121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-13 01:46:39.933225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-13 01:46:39.933270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-13 01:46:39.933386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-13 01:46:39.933415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-13 01:46:39.933539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-13 01:46:39.933566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-13 01:46:39.933652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-13 01:46:39.933678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-13 01:46:39.933781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-13 01:46:39.933810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-13 01:46:39.933900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-13 01:46:39.933936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-13 01:46:39.934031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-13 01:46:39.934061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 
00:35:54.680 [2024-10-13 01:46:39.934187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-13 01:46:39.934222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-13 01:46:39.934352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-13 01:46:39.934383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-13 01:46:39.934478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-13 01:46:39.934523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-13 01:46:39.934611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-13 01:46:39.934638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-13 01:46:39.934776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-13 01:46:39.934805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-13 01:46:39.934959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-13 01:46:39.934989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-10-13 01:46:39.935141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-10-13 01:46:39.935170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-13 01:46:39.935335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-13 01:46:39.935366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-13 01:46:39.935501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-13 01:46:39.935528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-13 01:46:39.935628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-13 01:46:39.935657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 
00:35:54.681 [2024-10-13 01:46:39.935753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-13 01:46:39.935783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-13 01:46:39.935895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-13 01:46:39.935922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-13 01:46:39.936115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-13 01:46:39.936164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-13 01:46:39.936249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-13 01:46:39.936290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-13 01:46:39.936398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-13 01:46:39.936424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-13 01:46:39.936528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-13 01:46:39.936555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-13 01:46:39.936676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-13 01:46:39.936709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-13 01:46:39.936817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-13 01:46:39.936846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-13 01:46:39.937021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-13 01:46:39.937051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-13 01:46:39.937163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-13 01:46:39.937204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 
00:35:54.681 [2024-10-13 01:46:39.937360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-13 01:46:39.937389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-13 01:46:39.937496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-13 01:46:39.937522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-13 01:46:39.937610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-13 01:46:39.937637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-13 01:46:39.937757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-13 01:46:39.937787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-13 01:46:39.937926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-13 01:46:39.937955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-13 01:46:39.938081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-13 01:46:39.938124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-13 01:46:39.938255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-13 01:46:39.938284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-13 01:46:39.938447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-13 01:46:39.938482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-13 01:46:39.938581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-13 01:46:39.938607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-13 01:46:39.938730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-13 01:46:39.938756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 
00:35:54.681 [2024-10-13 01:46:39.938904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-13 01:46:39.938930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-13 01:46:39.939048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-13 01:46:39.939092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-13 01:46:39.939214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-13 01:46:39.939243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-13 01:46:39.939337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-13 01:46:39.939382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-13 01:46:39.939501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-13 01:46:39.939529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-13 01:46:39.939637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-13 01:46:39.939663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-13 01:46:39.939821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-13 01:46:39.939850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-13 01:46:39.939985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-13 01:46:39.940014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-13 01:46:39.940117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-13 01:46:39.940146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-13 01:46:39.940264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-13 01:46:39.940294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 
00:35:54.681 [2024-10-13 01:46:39.940390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-13 01:46:39.940433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-13 01:46:39.940563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-13 01:46:39.940590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-13 01:46:39.940705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-10-13 01:46:39.940732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-10-13 01:46:39.940872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-10-13 01:46:39.940899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-10-13 01:46:39.940983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-10-13 01:46:39.941010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-10-13 01:46:39.941157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-10-13 01:46:39.941219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-10-13 01:46:39.941349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-10-13 01:46:39.941377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-10-13 01:46:39.941495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-10-13 01:46:39.941522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-10-13 01:46:39.941610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-10-13 01:46:39.941635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-10-13 01:46:39.941728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-10-13 01:46:39.941756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 
00:35:54.682 [2024-10-13 01:46:39.941853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-10-13 01:46:39.941882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-10-13 01:46:39.942008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-10-13 01:46:39.942038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-10-13 01:46:39.942164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-10-13 01:46:39.942195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-10-13 01:46:39.942286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-10-13 01:46:39.942315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-10-13 01:46:39.942411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-10-13 01:46:39.942445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-10-13 01:46:39.942597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-10-13 01:46:39.942631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-10-13 01:46:39.942756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-10-13 01:46:39.942786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-10-13 01:46:39.942909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-10-13 01:46:39.942940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-10-13 01:46:39.943038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-10-13 01:46:39.943068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-10-13 01:46:39.943192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-10-13 01:46:39.943221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 
00:35:54.682 [2024-10-13 01:46:39.943314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-10-13 01:46:39.943357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-10-13 01:46:39.943505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-10-13 01:46:39.943532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-10-13 01:46:39.943648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-10-13 01:46:39.943674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-10-13 01:46:39.943794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-10-13 01:46:39.943840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-10-13 01:46:39.943993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-10-13 01:46:39.944022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-10-13 01:46:39.944148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-10-13 01:46:39.944182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-10-13 01:46:39.944312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-10-13 01:46:39.944343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-10-13 01:46:39.944492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-10-13 01:46:39.944522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-10-13 01:46:39.944685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-10-13 01:46:39.944736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-10-13 01:46:39.944872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-10-13 01:46:39.944902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 
00:35:54.682 [2024-10-13 01:46:39.944999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-10-13 01:46:39.945028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-10-13 01:46:39.945157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-10-13 01:46:39.945186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-10-13 01:46:39.945308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-10-13 01:46:39.945338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-10-13 01:46:39.945431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-10-13 01:46:39.945486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-10-13 01:46:39.945617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-10-13 01:46:39.945643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-10-13 01:46:39.945745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-10-13 01:46:39.945772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-10-13 01:46:39.945924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-10-13 01:46:39.945954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-10-13 01:46:39.946075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-10-13 01:46:39.946104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-10-13 01:46:39.946256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-10-13 01:46:39.946305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-10-13 01:46:39.946452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-10-13 01:46:39.946503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 
00:35:54.683 [2024-10-13 01:46:39.946654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-10-13 01:46:39.946682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-10-13 01:46:39.946844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-10-13 01:46:39.946874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-10-13 01:46:39.946975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-10-13 01:46:39.947004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-10-13 01:46:39.947130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-10-13 01:46:39.947163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-10-13 01:46:39.947284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-10-13 01:46:39.947313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-10-13 01:46:39.947458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-10-13 01:46:39.947496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-10-13 01:46:39.947592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-10-13 01:46:39.947620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-10-13 01:46:39.947752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-10-13 01:46:39.947782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-10-13 01:46:39.947939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-10-13 01:46:39.947987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-10-13 01:46:39.948157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-10-13 01:46:39.948203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 
00:35:54.683 [2024-10-13 01:46:39.948289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-10-13 01:46:39.948317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-10-13 01:46:39.948467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-10-13 01:46:39.948501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-10-13 01:46:39.948647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-10-13 01:46:39.948696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-10-13 01:46:39.948876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-10-13 01:46:39.948917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-10-13 01:46:39.949114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-10-13 01:46:39.949172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-10-13 01:46:39.949338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-10-13 01:46:39.949392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-10-13 01:46:39.949535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-10-13 01:46:39.949564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-10-13 01:46:39.949653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-10-13 01:46:39.949681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-10-13 01:46:39.949851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-10-13 01:46:39.949893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-10-13 01:46:39.950045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-10-13 01:46:39.950085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 
00:35:54.683 [2024-10-13 01:46:39.950240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-10-13 01:46:39.950281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-10-13 01:46:39.950461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-10-13 01:46:39.950527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-10-13 01:46:39.950653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-10-13 01:46:39.950681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-10-13 01:46:39.950817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-10-13 01:46:39.950847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-10-13 01:46:39.951008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-10-13 01:46:39.951059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-10-13 01:46:39.951233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-10-13 01:46:39.951262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-10-13 01:46:39.951383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-10-13 01:46:39.951413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-10-13 01:46:39.951540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-10-13 01:46:39.951572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-10-13 01:46:39.951673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-10-13 01:46:39.951727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-10-13 01:46:39.951856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-10-13 01:46:39.951898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 
00:35:54.683 [2024-10-13 01:46:39.952137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-10-13 01:46:39.952179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-10-13 01:46:39.952364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-10-13 01:46:39.952405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-10-13 01:46:39.952570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-10-13 01:46:39.952598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.684 [2024-10-13 01:46:39.952741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-10-13 01:46:39.952770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.684 [2024-10-13 01:46:39.952891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-10-13 01:46:39.952918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.684 [2024-10-13 01:46:39.953063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-10-13 01:46:39.953115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.684 [2024-10-13 01:46:39.953278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-10-13 01:46:39.953324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.684 [2024-10-13 01:46:39.953492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-10-13 01:46:39.953549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.684 [2024-10-13 01:46:39.953673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-10-13 01:46:39.953701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.684 [2024-10-13 01:46:39.953835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-10-13 01:46:39.953862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 
00:35:54.684 [2024-10-13 01:46:39.953980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-10-13 01:46:39.954006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.684 [2024-10-13 01:46:39.954089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-10-13 01:46:39.954133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.684 [2024-10-13 01:46:39.954262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-10-13 01:46:39.954292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.684 [2024-10-13 01:46:39.954417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-10-13 01:46:39.954447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.684 [2024-10-13 01:46:39.954626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-10-13 01:46:39.954654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.684 [2024-10-13 01:46:39.954772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-10-13 01:46:39.954798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.684 [2024-10-13 01:46:39.954929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-10-13 01:46:39.954958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.684 [2024-10-13 01:46:39.955075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-10-13 01:46:39.955104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.684 [2024-10-13 01:46:39.955235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-10-13 01:46:39.955265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.684 [2024-10-13 01:46:39.955373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-10-13 01:46:39.955403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 
00:35:54.684 [2024-10-13 01:46:39.955521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-10-13 01:46:39.955548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.684 [2024-10-13 01:46:39.955664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-10-13 01:46:39.955691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.684 [2024-10-13 01:46:39.955855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-10-13 01:46:39.955892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.684 [2024-10-13 01:46:39.956048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-10-13 01:46:39.956078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.684 [2024-10-13 01:46:39.956198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-10-13 01:46:39.956227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.684 [2024-10-13 01:46:39.956353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-10-13 01:46:39.956396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.684 [2024-10-13 01:46:39.956496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-10-13 01:46:39.956523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.684 [2024-10-13 01:46:39.956609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-10-13 01:46:39.956635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.684 [2024-10-13 01:46:39.956751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-10-13 01:46:39.956778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.684 [2024-10-13 01:46:39.956895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-10-13 01:46:39.956921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 
00:35:54.684 [2024-10-13 01:46:39.957021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-10-13 01:46:39.957050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.684 [2024-10-13 01:46:39.957148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-10-13 01:46:39.957177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.684 [2024-10-13 01:46:39.957308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-10-13 01:46:39.957337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.684 [2024-10-13 01:46:39.957439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-10-13 01:46:39.957467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.684 [2024-10-13 01:46:39.957621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-10-13 01:46:39.957648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.684 [2024-10-13 01:46:39.957740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-10-13 01:46:39.957767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.684 [2024-10-13 01:46:39.957887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-10-13 01:46:39.957916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.684 [2024-10-13 01:46:39.958006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-10-13 01:46:39.958037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.684 [2024-10-13 01:46:39.958170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-10-13 01:46:39.958199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.684 [2024-10-13 01:46:39.958299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-10-13 01:46:39.958328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 
00:35:54.684 [2024-10-13 01:46:39.958460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-10-13 01:46:39.958498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-13 01:46:39.958605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-13 01:46:39.958632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-13 01:46:39.958746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-13 01:46:39.958772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-13 01:46:39.958882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-13 01:46:39.958925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-13 01:46:39.959050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-13 01:46:39.959079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-13 01:46:39.959233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-13 01:46:39.959262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-13 01:46:39.959393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-13 01:46:39.959423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-13 01:46:39.959540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-13 01:46:39.959567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-13 01:46:39.959674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-13 01:46:39.959704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-13 01:46:39.959863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-13 01:46:39.959898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 
00:35:54.685 [2024-10-13 01:46:39.960027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-13 01:46:39.960057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-13 01:46:39.960171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-13 01:46:39.960201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-13 01:46:39.960285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-13 01:46:39.960311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-13 01:46:39.960438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-13 01:46:39.960466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-13 01:46:39.960593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-13 01:46:39.960639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-13 01:46:39.960720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-13 01:46:39.960752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-13 01:46:39.960885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-13 01:46:39.960913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-13 01:46:39.960997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-13 01:46:39.961024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-13 01:46:39.961156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-13 01:46:39.961196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-13 01:46:39.961297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-13 01:46:39.961336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 
00:35:54.685 [2024-10-13 01:46:39.961458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-13 01:46:39.961493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-13 01:46:39.961580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-13 01:46:39.961608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-13 01:46:39.961713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-13 01:46:39.961743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-13 01:46:39.961878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-13 01:46:39.961907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-13 01:46:39.962018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-13 01:46:39.962070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-13 01:46:39.962177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-13 01:46:39.962205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-13 01:46:39.962329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-13 01:46:39.962359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-13 01:46:39.962498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-13 01:46:39.962548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-13 01:46:39.962689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-13 01:46:39.962718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 00:35:54.685 [2024-10-13 01:46:39.962816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.685 [2024-10-13 01:46:39.962845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.685 qpair failed and we were unable to recover it. 
00:35:54.685 [2024-10-13 01:46:39.962971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.685 [2024-10-13 01:46:39.962999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420
00:35:54.685 qpair failed and we were unable to recover it.
00:35:54.685 [... the same three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; qpair failed and we were unable to recover it) repeats continuously through 01:46:39.995 for tqpair values 0x1d44b60, 0x7f4830000b90, 0x7f4834000b90 and 0x7f483c000b90, all with addr=10.0.0.2, port=4420 ...]
00:35:54.691 [2024-10-13 01:46:39.995829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-13 01:46:39.995856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 00:35:54.691 [2024-10-13 01:46:39.995950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-13 01:46:39.995977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 00:35:54.691 [2024-10-13 01:46:39.996061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-13 01:46:39.996088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 00:35:54.691 [2024-10-13 01:46:39.996191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-13 01:46:39.996222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 00:35:54.691 [2024-10-13 01:46:39.996347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-13 01:46:39.996376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 00:35:54.691 [2024-10-13 01:46:39.996495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-13 01:46:39.996538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 00:35:54.691 [2024-10-13 01:46:39.996674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-13 01:46:39.996700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 00:35:54.691 [2024-10-13 01:46:39.996821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-13 01:46:39.996847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 00:35:54.691 [2024-10-13 01:46:39.996958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-13 01:46:39.996984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 00:35:54.691 [2024-10-13 01:46:39.997125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-13 01:46:39.997155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 
00:35:54.691 [2024-10-13 01:46:39.997258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-13 01:46:39.997287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 00:35:54.691 [2024-10-13 01:46:39.997417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-13 01:46:39.997447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 00:35:54.691 [2024-10-13 01:46:39.997565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-13 01:46:39.997595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 00:35:54.691 [2024-10-13 01:46:39.997713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-13 01:46:39.997741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 00:35:54.691 [2024-10-13 01:46:39.997825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-13 01:46:39.997852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 00:35:54.691 [2024-10-13 01:46:39.997968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-13 01:46:39.997998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 00:35:54.691 [2024-10-13 01:46:39.998096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.691 [2024-10-13 01:46:39.998123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.691 qpair failed and we were unable to recover it. 00:35:54.691 [2024-10-13 01:46:39.998217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-13 01:46:39.998247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-13 01:46:39.998408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-13 01:46:39.998438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-13 01:46:39.998593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-13 01:46:39.998624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 
00:35:54.692 [2024-10-13 01:46:39.998760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-13 01:46:39.998800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-13 01:46:39.998891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-13 01:46:39.998921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-13 01:46:39.999039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-13 01:46:39.999067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-13 01:46:39.999197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-13 01:46:39.999228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-13 01:46:39.999361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-13 01:46:39.999397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-13 01:46:39.999548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-13 01:46:39.999576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-13 01:46:39.999689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-13 01:46:39.999716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-13 01:46:39.999807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-13 01:46:39.999834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-13 01:46:39.999955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-13 01:46:39.999982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-13 01:46:40.000098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-13 01:46:40.000151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 
00:35:54.692 [2024-10-13 01:46:40.000270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-13 01:46:40.000300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-13 01:46:40.000396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-13 01:46:40.000429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-13 01:46:40.000607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-13 01:46:40.000638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-13 01:46:40.000727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-13 01:46:40.000754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-13 01:46:40.000885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-13 01:46:40.000915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-13 01:46:40.001052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-13 01:46:40.001097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-13 01:46:40.001192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-13 01:46:40.001217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-13 01:46:40.001312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-13 01:46:40.001338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-13 01:46:40.001432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-13 01:46:40.001457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-13 01:46:40.001557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-13 01:46:40.001584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 
00:35:54.692 [2024-10-13 01:46:40.001676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-13 01:46:40.001701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-13 01:46:40.001790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-13 01:46:40.001815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-13 01:46:40.001905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-13 01:46:40.001929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-13 01:46:40.002043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-13 01:46:40.002068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-13 01:46:40.002143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-13 01:46:40.002168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-13 01:46:40.002247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-13 01:46:40.002272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-13 01:46:40.002386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-13 01:46:40.002411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-13 01:46:40.002495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-13 01:46:40.002530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-13 01:46:40.002612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-13 01:46:40.002653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-13 01:46:40.002753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-13 01:46:40.002781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 
00:35:54.692 [2024-10-13 01:46:40.002897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-13 01:46:40.002939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-13 01:46:40.003077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-13 01:46:40.003109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-13 01:46:40.003209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-13 01:46:40.003256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-13 01:46:40.003346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-13 01:46:40.003373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-13 01:46:40.003489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-13 01:46:40.003530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-13 01:46:40.003625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-13 01:46:40.003655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-10-13 01:46:40.003777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-10-13 01:46:40.003823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-13 01:46:40.003962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-13 01:46:40.003992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-13 01:46:40.004176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-13 01:46:40.004206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-13 01:46:40.004335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-13 01:46:40.004366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 
00:35:54.693 [2024-10-13 01:46:40.004496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-13 01:46:40.004541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-13 01:46:40.004662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-13 01:46:40.004689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-13 01:46:40.004817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-13 01:46:40.004865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-13 01:46:40.004960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-13 01:46:40.004989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-13 01:46:40.005123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-13 01:46:40.005149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-13 01:46:40.005259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-13 01:46:40.005303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-13 01:46:40.005431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-13 01:46:40.005478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-13 01:46:40.005582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-13 01:46:40.005610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-13 01:46:40.005703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-13 01:46:40.005733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-13 01:46:40.005853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-13 01:46:40.005881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 
00:35:54.693 [2024-10-13 01:46:40.005981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-13 01:46:40.006011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-13 01:46:40.006136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-13 01:46:40.006167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-13 01:46:40.006281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-13 01:46:40.006312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-13 01:46:40.006421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-13 01:46:40.006450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-13 01:46:40.006549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-13 01:46:40.006576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-13 01:46:40.006668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-13 01:46:40.006693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-13 01:46:40.006784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-13 01:46:40.006811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-13 01:46:40.006926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-13 01:46:40.006953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-13 01:46:40.007052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-13 01:46:40.007082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-13 01:46:40.007219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-13 01:46:40.007249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 
00:35:54.693 [2024-10-13 01:46:40.007373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-13 01:46:40.007418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-13 01:46:40.007538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-13 01:46:40.007566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-13 01:46:40.007660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-13 01:46:40.007688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-13 01:46:40.007791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-13 01:46:40.007821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-13 01:46:40.007958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-13 01:46:40.007990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-13 01:46:40.008129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-13 01:46:40.008177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-13 01:46:40.008330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-13 01:46:40.008359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-13 01:46:40.008483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-13 01:46:40.008528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-13 01:46:40.008624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-13 01:46:40.008651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-13 01:46:40.008736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-13 01:46:40.008762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 
00:35:54.693 [2024-10-13 01:46:40.008847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-13 01:46:40.008875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-13 01:46:40.008997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-13 01:46:40.009034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-13 01:46:40.009153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-13 01:46:40.009181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-13 01:46:40.009357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-10-13 01:46:40.009388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-10-13 01:46:40.009501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-13 01:46:40.009529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-10-13 01:46:40.009641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-13 01:46:40.009668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-10-13 01:46:40.009750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-13 01:46:40.009776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-10-13 01:46:40.009908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-13 01:46:40.009938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-10-13 01:46:40.010026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-13 01:46:40.010056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-10-13 01:46:40.010206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-13 01:46:40.010250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 
00:35:54.694 [2024-10-13 01:46:40.010393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-13 01:46:40.010422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-10-13 01:46:40.010518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-13 01:46:40.010545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-10-13 01:46:40.010636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-13 01:46:40.010663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-10-13 01:46:40.010758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-13 01:46:40.010785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-10-13 01:46:40.010896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-13 01:46:40.010923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-10-13 01:46:40.011024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-13 01:46:40.011053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-10-13 01:46:40.011167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-13 01:46:40.011200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-10-13 01:46:40.011363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-13 01:46:40.011391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-10-13 01:46:40.011508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-13 01:46:40.011537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-10-13 01:46:40.011637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-13 01:46:40.011667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 
00:35:54.694 [2024-10-13 01:46:40.011845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-13 01:46:40.011874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-10-13 01:46:40.011959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-13 01:46:40.011987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-10-13 01:46:40.012090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-13 01:46:40.012117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-10-13 01:46:40.012244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-13 01:46:40.012271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-10-13 01:46:40.012354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-13 01:46:40.012381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-10-13 01:46:40.012477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-13 01:46:40.012505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-10-13 01:46:40.012596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-13 01:46:40.012622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-10-13 01:46:40.012698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-13 01:46:40.012723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-10-13 01:46:40.012814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-13 01:46:40.012848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-10-13 01:46:40.012979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-13 01:46:40.013009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 
00:35:54.694 [2024-10-13 01:46:40.013140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-13 01:46:40.013169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-10-13 01:46:40.013257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-13 01:46:40.013290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-10-13 01:46:40.013397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-13 01:46:40.013425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-10-13 01:46:40.013546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-13 01:46:40.013573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-10-13 01:46:40.013665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-13 01:46:40.013691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-10-13 01:46:40.013772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-13 01:46:40.013814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-10-13 01:46:40.013941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-13 01:46:40.013970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-10-13 01:46:40.014104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-13 01:46:40.014134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-10-13 01:46:40.014247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-13 01:46:40.014276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-10-13 01:46:40.014384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-13 01:46:40.014410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 
00:35:54.694 [2024-10-13 01:46:40.014525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-13 01:46:40.014566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-10-13 01:46:40.014659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-13 01:46:40.014688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-10-13 01:46:40.014818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-10-13 01:46:40.014845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.695 [2024-10-13 01:46:40.014927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-10-13 01:46:40.014954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-10-13 01:46:40.015084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-10-13 01:46:40.015131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-10-13 01:46:40.015232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-10-13 01:46:40.015263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-10-13 01:46:40.015399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-10-13 01:46:40.015427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-10-13 01:46:40.015553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-10-13 01:46:40.015580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-10-13 01:46:40.015708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-10-13 01:46:40.015736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-10-13 01:46:40.015843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-10-13 01:46:40.015885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 
00:35:54.695 [2024-10-13 01:46:40.016022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-10-13 01:46:40.016051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-10-13 01:46:40.016151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-10-13 01:46:40.016181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-10-13 01:46:40.016278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-10-13 01:46:40.016307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-10-13 01:46:40.016405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-10-13 01:46:40.016434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-10-13 01:46:40.016554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-10-13 01:46:40.016581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-10-13 01:46:40.016686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-10-13 01:46:40.016720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-10-13 01:46:40.016839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-10-13 01:46:40.016871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-10-13 01:46:40.016970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-10-13 01:46:40.017000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-10-13 01:46:40.017117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-10-13 01:46:40.017145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-10-13 01:46:40.017276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-10-13 01:46:40.017308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 
00:35:54.695 [2024-10-13 01:46:40.017421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-10-13 01:46:40.017466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-10-13 01:46:40.017591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-10-13 01:46:40.017621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-10-13 01:46:40.017734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-10-13 01:46:40.017784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-10-13 01:46:40.017945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-10-13 01:46:40.017989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-10-13 01:46:40.018151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-10-13 01:46:40.018197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-10-13 01:46:40.018324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-10-13 01:46:40.018352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-10-13 01:46:40.018450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-10-13 01:46:40.018485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-10-13 01:46:40.018590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-10-13 01:46:40.018617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-10-13 01:46:40.018732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-10-13 01:46:40.018762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-10-13 01:46:40.018886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-10-13 01:46:40.018942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 
00:35:54.695 [2024-10-13 01:46:40.019122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-10-13 01:46:40.019172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-10-13 01:46:40.019305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-10-13 01:46:40.019333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-10-13 01:46:40.019441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-10-13 01:46:40.019478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-10-13 01:46:40.019592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-10-13 01:46:40.019618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-10-13 01:46:40.019728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-10-13 01:46:40.019756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-10-13 01:46:40.019866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-10-13 01:46:40.019925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-10-13 01:46:40.020045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-10-13 01:46:40.020082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-10-13 01:46:40.020200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-10-13 01:46:40.020233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-10-13 01:46:40.020341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-10-13 01:46:40.020377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-10-13 01:46:40.020504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-10-13 01:46:40.020546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 
00:35:54.695 [2024-10-13 01:46:40.020678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-10-13 01:46:40.020724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.696 [2024-10-13 01:46:40.020855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-10-13 01:46:40.020888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-10-13 01:46:40.021015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-10-13 01:46:40.021051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-10-13 01:46:40.021151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-10-13 01:46:40.021180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-10-13 01:46:40.021312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-10-13 01:46:40.021338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-10-13 01:46:40.021419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-10-13 01:46:40.021444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-10-13 01:46:40.021548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-10-13 01:46:40.021574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-10-13 01:46:40.021676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-10-13 01:46:40.021704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-10-13 01:46:40.021854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-10-13 01:46:40.021882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-10-13 01:46:40.021987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-10-13 01:46:40.022015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 
00:35:54.696 [2024-10-13 01:46:40.022109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-10-13 01:46:40.022137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-10-13 01:46:40.022224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-10-13 01:46:40.022252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-10-13 01:46:40.022404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-10-13 01:46:40.022432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-10-13 01:46:40.022579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-10-13 01:46:40.022609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-10-13 01:46:40.022727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-10-13 01:46:40.022771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-10-13 01:46:40.022862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-10-13 01:46:40.022888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-10-13 01:46:40.022986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-10-13 01:46:40.023012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-10-13 01:46:40.023130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-10-13 01:46:40.023155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-10-13 01:46:40.023245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-10-13 01:46:40.023271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-10-13 01:46:40.023365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-10-13 01:46:40.023390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 
00:35:54.696 [2024-10-13 01:46:40.023538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-10-13 01:46:40.023570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-10-13 01:46:40.023685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-10-13 01:46:40.023711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-10-13 01:46:40.023829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-10-13 01:46:40.023854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-10-13 01:46:40.023954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-10-13 01:46:40.023979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-10-13 01:46:40.024059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-10-13 01:46:40.024083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-10-13 01:46:40.024170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-10-13 01:46:40.024200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-10-13 01:46:40.024297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-10-13 01:46:40.024324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-10-13 01:46:40.024418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-10-13 01:46:40.024445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-10-13 01:46:40.024557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-10-13 01:46:40.024583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-10-13 01:46:40.024680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-10-13 01:46:40.024706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 
00:35:54.696 [2024-10-13 01:46:40.024843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-10-13 01:46:40.024883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-10-13 01:46:40.025008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-10-13 01:46:40.025035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-10-13 01:46:40.025202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-10-13 01:46:40.025245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-10-13 01:46:40.025379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-10-13 01:46:40.025408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-10-13 01:46:40.025545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-10-13 01:46:40.025571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-10-13 01:46:40.025698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-13 01:46:40.025723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-13 01:46:40.025811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-13 01:46:40.025836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-13 01:46:40.025976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-13 01:46:40.026001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-13 01:46:40.026085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-13 01:46:40.026112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-13 01:46:40.026286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-13 01:46:40.026328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 
00:35:54.697 [2024-10-13 01:46:40.026447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-13 01:46:40.026483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-13 01:46:40.026605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-13 01:46:40.026629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-13 01:46:40.026810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-13 01:46:40.026865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-13 01:46:40.027014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-13 01:46:40.027064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-13 01:46:40.027165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-13 01:46:40.027215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-13 01:46:40.027352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-13 01:46:40.027379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-13 01:46:40.027493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-13 01:46:40.027520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-13 01:46:40.027653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-13 01:46:40.027697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-13 01:46:40.027841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-13 01:46:40.027884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-13 01:46:40.028004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-13 01:46:40.028031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 
00:35:54.697 [2024-10-13 01:46:40.028196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-13 01:46:40.028249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-13 01:46:40.028374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-13 01:46:40.028417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-13 01:46:40.028532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-13 01:46:40.028556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-13 01:46:40.028650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-13 01:46:40.028678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-13 01:46:40.028780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-13 01:46:40.028807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-13 01:46:40.028953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-13 01:46:40.029002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-13 01:46:40.029171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-13 01:46:40.029224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-13 01:46:40.029313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-13 01:46:40.029339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-13 01:46:40.029464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-13 01:46:40.029509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-13 01:46:40.029634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-13 01:46:40.029661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 
00:35:54.697 [2024-10-13 01:46:40.029775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-13 01:46:40.029803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-13 01:46:40.029948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-13 01:46:40.029998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-13 01:46:40.030112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-13 01:46:40.030164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-13 01:46:40.030302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-13 01:46:40.030329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-13 01:46:40.030426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-13 01:46:40.030452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-13 01:46:40.030604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-13 01:46:40.030634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-13 01:46:40.030762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-13 01:46:40.030790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-13 01:46:40.030899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-13 01:46:40.030927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-13 01:46:40.031077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-13 01:46:40.031105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-13 01:46:40.031248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-13 01:46:40.031281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 
00:35:54.697 [2024-10-13 01:46:40.031393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-13 01:46:40.031420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-13 01:46:40.031566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-13 01:46:40.031592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-13 01:46:40.031685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-13 01:46:40.031709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-13 01:46:40.031872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-13 01:46:40.031922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.697 [2024-10-13 01:46:40.032037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.697 [2024-10-13 01:46:40.032064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.697 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-13 01:46:40.032179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-13 01:46:40.032208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-13 01:46:40.032340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-13 01:46:40.032365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-13 01:46:40.032450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-13 01:46:40.032482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-13 01:46:40.032580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-13 01:46:40.032606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-13 01:46:40.032716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-13 01:46:40.032741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 
00:35:54.698 [2024-10-13 01:46:40.032832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-13 01:46:40.032857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-13 01:46:40.032957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-13 01:46:40.032984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-13 01:46:40.033111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-13 01:46:40.033152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-13 01:46:40.033285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-13 01:46:40.033313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-13 01:46:40.033439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-13 01:46:40.033479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-13 01:46:40.033589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-13 01:46:40.033614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-13 01:46:40.033727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-13 01:46:40.033754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-13 01:46:40.033857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-13 01:46:40.033884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-13 01:46:40.034000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-13 01:46:40.034028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-13 01:46:40.034140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-13 01:46:40.034182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 
00:35:54.698 [2024-10-13 01:46:40.034341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-13 01:46:40.034369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-13 01:46:40.034484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-13 01:46:40.034509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-13 01:46:40.034626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-13 01:46:40.034668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-13 01:46:40.034783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-13 01:46:40.034811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-13 01:46:40.034895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-13 01:46:40.034923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-13 01:46:40.035019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-13 01:46:40.035047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-13 01:46:40.035158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-13 01:46:40.035190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-13 01:46:40.035310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-13 01:46:40.035348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-13 01:46:40.035442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-13 01:46:40.035468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-13 01:46:40.035599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-13 01:46:40.035628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 
00:35:54.698 [2024-10-13 01:46:40.035757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-13 01:46:40.035786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-13 01:46:40.035898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-13 01:46:40.035926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-13 01:46:40.036055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-13 01:46:40.036096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-13 01:46:40.036232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-13 01:46:40.036259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-13 01:46:40.036358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-13 01:46:40.036382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-13 01:46:40.036462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-13 01:46:40.036505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-13 01:46:40.036584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-13 01:46:40.036609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-13 01:46:40.036739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-13 01:46:40.036767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-13 01:46:40.036871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-13 01:46:40.036895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-13 01:46:40.037064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-13 01:46:40.037094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 
00:35:54.698 [2024-10-13 01:46:40.037203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-13 01:46:40.037245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-13 01:46:40.037356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-13 01:46:40.037386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-13 01:46:40.037517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-13 01:46:40.037564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-13 01:46:40.037664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-13 01:46:40.037690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.698 [2024-10-13 01:46:40.037777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.698 [2024-10-13 01:46:40.037803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.698 qpair failed and we were unable to recover it. 00:35:54.699 [2024-10-13 01:46:40.037883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.699 [2024-10-13 01:46:40.037909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.699 qpair failed and we were unable to recover it. 00:35:54.699 [2024-10-13 01:46:40.038021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.699 [2024-10-13 01:46:40.038054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.699 qpair failed and we were unable to recover it. 00:35:54.699 [2024-10-13 01:46:40.038206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.699 [2024-10-13 01:46:40.038236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.699 qpair failed and we were unable to recover it. 00:35:54.699 [2024-10-13 01:46:40.038339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.699 [2024-10-13 01:46:40.038366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.699 qpair failed and we were unable to recover it. 00:35:54.699 [2024-10-13 01:46:40.038455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.699 [2024-10-13 01:46:40.038488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.699 qpair failed and we were unable to recover it. 
00:35:54.699 [2024-10-13 01:46:40.038611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.699 [2024-10-13 01:46:40.038637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.699 qpair failed and we were unable to recover it. 00:35:54.699 [2024-10-13 01:46:40.038723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.699 [2024-10-13 01:46:40.038748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.699 qpair failed and we were unable to recover it. 00:35:54.699 [2024-10-13 01:46:40.038858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.699 [2024-10-13 01:46:40.038883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.699 qpair failed and we were unable to recover it. 00:35:54.699 [2024-10-13 01:46:40.039009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.699 [2024-10-13 01:46:40.039045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.699 qpair failed and we were unable to recover it. 00:35:54.699 [2024-10-13 01:46:40.039144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.699 [2024-10-13 01:46:40.039171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.699 qpair failed and we were unable to recover it. 00:35:54.699 [2024-10-13 01:46:40.039297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.699 [2024-10-13 01:46:40.039323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.699 qpair failed and we were unable to recover it. 00:35:54.699 [2024-10-13 01:46:40.039406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.699 [2024-10-13 01:46:40.039431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.699 qpair failed and we were unable to recover it. 00:35:54.699 [2024-10-13 01:46:40.039582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.699 [2024-10-13 01:46:40.039607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.699 qpair failed and we were unable to recover it. 00:35:54.699 [2024-10-13 01:46:40.039721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.699 [2024-10-13 01:46:40.039747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.699 qpair failed and we were unable to recover it. 00:35:54.699 [2024-10-13 01:46:40.039848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.699 [2024-10-13 01:46:40.039876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.699 qpair failed and we were unable to recover it. 
00:35:54.699 [2024-10-13 01:46:40.039995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.699 [2024-10-13 01:46:40.040044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.699 qpair failed and we were unable to recover it. 00:35:54.699 [2024-10-13 01:46:40.040167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.699 [2024-10-13 01:46:40.040196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.699 qpair failed and we were unable to recover it. 00:35:54.699 [2024-10-13 01:46:40.040340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.699 [2024-10-13 01:46:40.040365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.699 qpair failed and we were unable to recover it. 00:35:54.699 [2024-10-13 01:46:40.040449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.699 [2024-10-13 01:46:40.040479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.699 qpair failed and we were unable to recover it. 00:35:54.699 [2024-10-13 01:46:40.040568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.699 [2024-10-13 01:46:40.040593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.699 qpair failed and we were unable to recover it. 00:35:54.699 [2024-10-13 01:46:40.040674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.699 [2024-10-13 01:46:40.040699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.699 qpair failed and we were unable to recover it. 00:35:54.699 [2024-10-13 01:46:40.040810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.699 [2024-10-13 01:46:40.040839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.699 qpair failed and we were unable to recover it. 00:35:54.699 [2024-10-13 01:46:40.040974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.699 [2024-10-13 01:46:40.041015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.699 qpair failed and we were unable to recover it. 00:35:54.699 [2024-10-13 01:46:40.041198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.699 [2024-10-13 01:46:40.041226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.699 qpair failed and we were unable to recover it. 00:35:54.699 [2024-10-13 01:46:40.041350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.699 [2024-10-13 01:46:40.041377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.699 qpair failed and we were unable to recover it. 
00:35:54.699 [2024-10-13 01:46:40.041555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.699 [2024-10-13 01:46:40.041580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.699 qpair failed and we were unable to recover it. 00:35:54.699 [2024-10-13 01:46:40.041713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.699 [2024-10-13 01:46:40.041742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.699 qpair failed and we were unable to recover it. 00:35:54.699 [2024-10-13 01:46:40.041841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.699 [2024-10-13 01:46:40.041869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.699 qpair failed and we were unable to recover it. 00:35:54.699 [2024-10-13 01:46:40.041990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.699 [2024-10-13 01:46:40.042018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.699 qpair failed and we were unable to recover it. 00:35:54.699 [2024-10-13 01:46:40.042193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.699 [2024-10-13 01:46:40.042251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.699 qpair failed and we were unable to recover it. 00:35:54.699 [2024-10-13 01:46:40.042379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.699 [2024-10-13 01:46:40.042408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.699 qpair failed and we were unable to recover it. 00:35:54.699 [2024-10-13 01:46:40.042557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.699 [2024-10-13 01:46:40.042585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.699 qpair failed and we were unable to recover it. 00:35:54.699 [2024-10-13 01:46:40.042687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.699 [2024-10-13 01:46:40.042715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.699 qpair failed and we were unable to recover it. 00:35:54.699 [2024-10-13 01:46:40.042810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.699 [2024-10-13 01:46:40.042836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.699 qpair failed and we were unable to recover it. 00:35:54.699 [2024-10-13 01:46:40.042977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.699 [2024-10-13 01:46:40.043004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.699 qpair failed and we were unable to recover it. 
00:35:54.699 [2024-10-13 01:46:40.043120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.699 [2024-10-13 01:46:40.043148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.699 qpair failed and we were unable to recover it. 00:35:54.699 [2024-10-13 01:46:40.043242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.699 [2024-10-13 01:46:40.043269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.699 qpair failed and we were unable to recover it. 00:35:54.699 [2024-10-13 01:46:40.043383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.699 [2024-10-13 01:46:40.043411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.699 qpair failed and we were unable to recover it. 00:35:54.699 [2024-10-13 01:46:40.043566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.700 [2024-10-13 01:46:40.043593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.700 qpair failed and we were unable to recover it. 00:35:54.700 [2024-10-13 01:46:40.043711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.700 [2024-10-13 01:46:40.043737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.700 qpair failed and we were unable to recover it. 00:35:54.700 [2024-10-13 01:46:40.043823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.700 [2024-10-13 01:46:40.043867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.700 qpair failed and we were unable to recover it. 00:35:54.700 [2024-10-13 01:46:40.044016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.700 [2024-10-13 01:46:40.044045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.700 qpair failed and we were unable to recover it. 00:35:54.700 [2024-10-13 01:46:40.044172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.700 [2024-10-13 01:46:40.044209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.700 qpair failed and we were unable to recover it. 00:35:54.700 [2024-10-13 01:46:40.044311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.700 [2024-10-13 01:46:40.044339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.700 qpair failed and we were unable to recover it. 00:35:54.700 [2024-10-13 01:46:40.044448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.700 [2024-10-13 01:46:40.044482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.700 qpair failed and we were unable to recover it. 
00:35:54.700 [2024-10-13 01:46:40.044569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.700 [2024-10-13 01:46:40.044596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.700 qpair failed and we were unable to recover it. 00:35:54.700 [2024-10-13 01:46:40.044701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.700 [2024-10-13 01:46:40.044731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.700 qpair failed and we were unable to recover it. 00:35:54.700 [2024-10-13 01:46:40.044856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.700 [2024-10-13 01:46:40.044900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.700 qpair failed and we were unable to recover it. 00:35:54.700 [2024-10-13 01:46:40.045062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.700 [2024-10-13 01:46:40.045113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.700 qpair failed and we were unable to recover it. 00:35:54.700 [2024-10-13 01:46:40.045243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.700 [2024-10-13 01:46:40.045272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.700 qpair failed and we were unable to recover it. 00:35:54.700 [2024-10-13 01:46:40.045374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.700 [2024-10-13 01:46:40.045401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.700 qpair failed and we were unable to recover it. 00:35:54.700 [2024-10-13 01:46:40.045513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.700 [2024-10-13 01:46:40.045539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.700 qpair failed and we were unable to recover it. 00:35:54.700 [2024-10-13 01:46:40.045621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.700 [2024-10-13 01:46:40.045646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.700 qpair failed and we were unable to recover it. 00:35:54.700 [2024-10-13 01:46:40.045790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.700 [2024-10-13 01:46:40.045816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.700 qpair failed and we were unable to recover it. 00:35:54.700 [2024-10-13 01:46:40.045945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.700 [2024-10-13 01:46:40.045974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.700 qpair failed and we were unable to recover it. 
00:35:54.700 [2024-10-13 01:46:40.046136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.700 [2024-10-13 01:46:40.046165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.700 qpair failed and we were unable to recover it. 00:35:54.700 [2024-10-13 01:46:40.046264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.700 [2024-10-13 01:46:40.046291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.700 qpair failed and we were unable to recover it. 00:35:54.700 [2024-10-13 01:46:40.046432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.700 [2024-10-13 01:46:40.046460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.700 qpair failed and we were unable to recover it. 00:35:54.700 [2024-10-13 01:46:40.046584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.700 [2024-10-13 01:46:40.046609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.700 qpair failed and we were unable to recover it. 00:35:54.700 [2024-10-13 01:46:40.046723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.700 [2024-10-13 01:46:40.046751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.700 qpair failed and we were unable to recover it. 00:35:54.700 [2024-10-13 01:46:40.046909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.700 [2024-10-13 01:46:40.046955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.700 qpair failed and we were unable to recover it. 00:35:54.700 [2024-10-13 01:46:40.047090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.700 [2024-10-13 01:46:40.047142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.700 qpair failed and we were unable to recover it. 00:35:54.700 [2024-10-13 01:46:40.047234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.700 [2024-10-13 01:46:40.047260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.700 qpair failed and we were unable to recover it. 00:35:54.700 [2024-10-13 01:46:40.047348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.700 [2024-10-13 01:46:40.047375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.700 qpair failed and we were unable to recover it. 00:35:54.700 [2024-10-13 01:46:40.047488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.700 [2024-10-13 01:46:40.047529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.700 qpair failed and we were unable to recover it. 
00:35:54.700 [2024-10-13 01:46:40.047631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.700 [2024-10-13 01:46:40.047659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.700 qpair failed and we were unable to recover it. 00:35:54.700 [2024-10-13 01:46:40.047806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.700 [2024-10-13 01:46:40.047836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.700 qpair failed and we were unable to recover it. 00:35:54.700 [2024-10-13 01:46:40.047948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.700 [2024-10-13 01:46:40.047976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.700 qpair failed and we were unable to recover it. 00:35:54.700 [2024-10-13 01:46:40.048149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.700 [2024-10-13 01:46:40.048179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.700 qpair failed and we were unable to recover it. 00:35:54.700 [2024-10-13 01:46:40.048313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.701 [2024-10-13 01:46:40.048341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.701 qpair failed and we were unable to recover it. 00:35:54.701 [2024-10-13 01:46:40.048484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.701 [2024-10-13 01:46:40.048515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.701 qpair failed and we were unable to recover it. 00:35:54.701 [2024-10-13 01:46:40.048645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.701 [2024-10-13 01:46:40.048674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.701 qpair failed and we were unable to recover it. 00:35:54.701 [2024-10-13 01:46:40.048770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.701 [2024-10-13 01:46:40.048797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.701 qpair failed and we were unable to recover it. 00:35:54.701 [2024-10-13 01:46:40.048923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.701 [2024-10-13 01:46:40.048953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.701 qpair failed and we were unable to recover it. 00:35:54.701 [2024-10-13 01:46:40.049094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.701 [2024-10-13 01:46:40.049137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.701 qpair failed and we were unable to recover it. 
00:35:54.701 [2024-10-13 01:46:40.049268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.701 [2024-10-13 01:46:40.049297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.701 qpair failed and we were unable to recover it. 00:35:54.701 [2024-10-13 01:46:40.049453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.701 [2024-10-13 01:46:40.049506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.701 qpair failed and we were unable to recover it. 00:35:54.701 [2024-10-13 01:46:40.049685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.701 [2024-10-13 01:46:40.049729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.701 qpair failed and we were unable to recover it. 00:35:54.701 [2024-10-13 01:46:40.049861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.701 [2024-10-13 01:46:40.049917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.701 qpair failed and we were unable to recover it. 00:35:54.701 [2024-10-13 01:46:40.050023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.701 [2024-10-13 01:46:40.050053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.701 qpair failed and we were unable to recover it. 00:35:54.701 [2024-10-13 01:46:40.050189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.701 [2024-10-13 01:46:40.050238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.701 qpair failed and we were unable to recover it. 00:35:54.701 [2024-10-13 01:46:40.050384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.701 [2024-10-13 01:46:40.050429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.701 qpair failed and we were unable to recover it. 00:35:54.701 [2024-10-13 01:46:40.050563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.701 [2024-10-13 01:46:40.050594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.701 qpair failed and we were unable to recover it. 00:35:54.701 [2024-10-13 01:46:40.050762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.701 [2024-10-13 01:46:40.050813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.701 qpair failed and we were unable to recover it. 00:35:54.701 [2024-10-13 01:46:40.050952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.701 [2024-10-13 01:46:40.050996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.701 qpair failed and we were unable to recover it. 
00:35:54.701 [2024-10-13 01:46:40.051194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.701 [2024-10-13 01:46:40.051224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.701 qpair failed and we were unable to recover it. 00:35:54.701 [2024-10-13 01:46:40.051353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.701 [2024-10-13 01:46:40.051380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.701 qpair failed and we were unable to recover it. 00:35:54.701 [2024-10-13 01:46:40.051542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.701 [2024-10-13 01:46:40.051572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.701 qpair failed and we were unable to recover it. 00:35:54.701 [2024-10-13 01:46:40.051744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.701 [2024-10-13 01:46:40.051794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.701 qpair failed and we were unable to recover it. 00:35:54.701 [2024-10-13 01:46:40.051930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.701 [2024-10-13 01:46:40.051961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.701 qpair failed and we were unable to recover it. 00:35:54.701 [2024-10-13 01:46:40.052114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.701 [2024-10-13 01:46:40.052160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.701 qpair failed and we were unable to recover it. 00:35:54.701 [2024-10-13 01:46:40.052251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.701 [2024-10-13 01:46:40.052280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.701 qpair failed and we were unable to recover it. 00:35:54.701 [2024-10-13 01:46:40.052448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.701 [2024-10-13 01:46:40.052487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.701 qpair failed and we were unable to recover it. 00:35:54.701 [2024-10-13 01:46:40.052633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.701 [2024-10-13 01:46:40.052660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.701 qpair failed and we were unable to recover it. 00:35:54.701 [2024-10-13 01:46:40.052755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.701 [2024-10-13 01:46:40.052783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.701 qpair failed and we were unable to recover it. 
00:35:54.701 [2024-10-13 01:46:40.052872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.701 [2024-10-13 01:46:40.052900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.701 qpair failed and we were unable to recover it. 00:35:54.701 [2024-10-13 01:46:40.053021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.701 [2024-10-13 01:46:40.053051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.701 qpair failed and we were unable to recover it. 00:35:54.701 [2024-10-13 01:46:40.053177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.701 [2024-10-13 01:46:40.053207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.701 qpair failed and we were unable to recover it. 00:35:54.701 [2024-10-13 01:46:40.053357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.701 [2024-10-13 01:46:40.053387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.701 qpair failed and we were unable to recover it. 00:35:54.701 [2024-10-13 01:46:40.053541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.701 [2024-10-13 01:46:40.053581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.701 qpair failed and we were unable to recover it. 00:35:54.701 [2024-10-13 01:46:40.053726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.701 [2024-10-13 01:46:40.053772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.701 qpair failed and we were unable to recover it. 00:35:54.701 [2024-10-13 01:46:40.053860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.701 [2024-10-13 01:46:40.053886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.701 qpair failed and we were unable to recover it. 00:35:54.701 [2024-10-13 01:46:40.054033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.701 [2024-10-13 01:46:40.054084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.701 qpair failed and we were unable to recover it. 00:35:54.701 [2024-10-13 01:46:40.054195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.701 [2024-10-13 01:46:40.054222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.701 qpair failed and we were unable to recover it. 00:35:54.701 [2024-10-13 01:46:40.054308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.701 [2024-10-13 01:46:40.054335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.701 qpair failed and we were unable to recover it. 
00:35:54.701 [2024-10-13 01:46:40.054447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.701 [2024-10-13 01:46:40.054479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.702 qpair failed and we were unable to recover it. 00:35:54.702 [2024-10-13 01:46:40.054599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.702 [2024-10-13 01:46:40.054626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.702 qpair failed and we were unable to recover it. 00:35:54.702 [2024-10-13 01:46:40.054734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.702 [2024-10-13 01:46:40.054761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.702 qpair failed and we were unable to recover it. 00:35:54.702 [2024-10-13 01:46:40.054870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.702 [2024-10-13 01:46:40.054898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.702 qpair failed and we were unable to recover it. 00:35:54.702 [2024-10-13 01:46:40.055014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.702 [2024-10-13 01:46:40.055041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.702 qpair failed and we were unable to recover it. 00:35:54.702 [2024-10-13 01:46:40.055155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.702 [2024-10-13 01:46:40.055183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.702 qpair failed and we were unable to recover it. 00:35:54.702 [2024-10-13 01:46:40.055280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.702 [2024-10-13 01:46:40.055305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.702 qpair failed and we were unable to recover it. 00:35:54.702 [2024-10-13 01:46:40.055388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.702 [2024-10-13 01:46:40.055413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.702 qpair failed and we were unable to recover it. 00:35:54.702 [2024-10-13 01:46:40.055495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.702 [2024-10-13 01:46:40.055521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.702 qpair failed and we were unable to recover it. 00:35:54.702 [2024-10-13 01:46:40.055635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.702 [2024-10-13 01:46:40.055662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.702 qpair failed and we were unable to recover it. 
00:35:54.702 [2024-10-13 01:46:40.055760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.702 [2024-10-13 01:46:40.055785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.702 qpair failed and we were unable to recover it. 00:35:54.702 [2024-10-13 01:46:40.055898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.702 [2024-10-13 01:46:40.055925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.702 qpair failed and we were unable to recover it. 00:35:54.702 [2024-10-13 01:46:40.056038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.702 [2024-10-13 01:46:40.056065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.702 qpair failed and we were unable to recover it. 00:35:54.702 [2024-10-13 01:46:40.056169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.702 [2024-10-13 01:46:40.056209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.702 qpair failed and we were unable to recover it. 00:35:54.702 [2024-10-13 01:46:40.056335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.702 [2024-10-13 01:46:40.056362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.702 qpair failed and we were unable to recover it. 00:35:54.702 [2024-10-13 01:46:40.056512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.702 [2024-10-13 01:46:40.056540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.702 qpair failed and we were unable to recover it. 00:35:54.702 [2024-10-13 01:46:40.056628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.702 [2024-10-13 01:46:40.056652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.702 qpair failed and we were unable to recover it. 00:35:54.702 [2024-10-13 01:46:40.056770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.702 [2024-10-13 01:46:40.056796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.702 qpair failed and we were unable to recover it. 00:35:54.702 [2024-10-13 01:46:40.056934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.702 [2024-10-13 01:46:40.056961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.702 qpair failed and we were unable to recover it. 00:35:54.702 [2024-10-13 01:46:40.057086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.702 [2024-10-13 01:46:40.057115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.702 qpair failed and we were unable to recover it. 
00:35:54.702 [2024-10-13 01:46:40.057210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.702 [2024-10-13 01:46:40.057235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.702 qpair failed and we were unable to recover it. 00:35:54.702 [2024-10-13 01:46:40.057353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.702 [2024-10-13 01:46:40.057380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.702 qpair failed and we were unable to recover it. 00:35:54.702 [2024-10-13 01:46:40.057481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.702 [2024-10-13 01:46:40.057507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.702 qpair failed and we were unable to recover it. 00:35:54.702 [2024-10-13 01:46:40.057640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.702 [2024-10-13 01:46:40.057690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.702 qpair failed and we were unable to recover it. 00:35:54.702 [2024-10-13 01:46:40.057818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.702 [2024-10-13 01:46:40.057847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.702 qpair failed and we were unable to recover it. 00:35:54.702 [2024-10-13 01:46:40.057979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.702 [2024-10-13 01:46:40.058007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.702 qpair failed and we were unable to recover it. 00:35:54.702 [2024-10-13 01:46:40.058126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.702 [2024-10-13 01:46:40.058153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.702 qpair failed and we were unable to recover it. 00:35:54.702 [2024-10-13 01:46:40.058233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.702 [2024-10-13 01:46:40.058258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.702 qpair failed and we were unable to recover it. 00:35:54.702 [2024-10-13 01:46:40.058353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.702 [2024-10-13 01:46:40.058378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.702 qpair failed and we were unable to recover it. 00:35:54.702 [2024-10-13 01:46:40.058459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.702 [2024-10-13 01:46:40.058494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.702 qpair failed and we were unable to recover it. 
00:35:54.702 [2024-10-13 01:46:40.058604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.702 [2024-10-13 01:46:40.058630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.702 qpair failed and we were unable to recover it. 00:35:54.702 [2024-10-13 01:46:40.058739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.702 [2024-10-13 01:46:40.058769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.702 qpair failed and we were unable to recover it. 00:35:54.702 [2024-10-13 01:46:40.058895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.702 [2024-10-13 01:46:40.058924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.702 qpair failed and we were unable to recover it. 00:35:54.702 [2024-10-13 01:46:40.059014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.702 [2024-10-13 01:46:40.059042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.702 qpair failed and we were unable to recover it. 00:35:54.702 [2024-10-13 01:46:40.059189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.702 [2024-10-13 01:46:40.059217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.702 qpair failed and we were unable to recover it. 00:35:54.702 [2024-10-13 01:46:40.059345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.702 [2024-10-13 01:46:40.059374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.702 qpair failed and we were unable to recover it. 00:35:54.702 [2024-10-13 01:46:40.059477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.702 [2024-10-13 01:46:40.059505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.702 qpair failed and we were unable to recover it. 00:35:54.702 [2024-10-13 01:46:40.059648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.702 [2024-10-13 01:46:40.059674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.702 qpair failed and we were unable to recover it. 00:35:54.702 [2024-10-13 01:46:40.059756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.702 [2024-10-13 01:46:40.059797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.702 qpair failed and we were unable to recover it. 00:35:54.702 [2024-10-13 01:46:40.059889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.702 [2024-10-13 01:46:40.059916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.702 qpair failed and we were unable to recover it. 
00:35:54.702 [2024-10-13 01:46:40.060012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.702 [2024-10-13 01:46:40.060039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.703 qpair failed and we were unable to recover it. 00:35:54.703 [2024-10-13 01:46:40.060145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.703 [2024-10-13 01:46:40.060171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.703 qpair failed and we were unable to recover it. 00:35:54.703 [2024-10-13 01:46:40.060342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.703 [2024-10-13 01:46:40.060372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.703 qpair failed and we were unable to recover it. 00:35:54.703 [2024-10-13 01:46:40.060511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.703 [2024-10-13 01:46:40.060539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.703 qpair failed and we were unable to recover it. 00:35:54.703 [2024-10-13 01:46:40.060648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.703 [2024-10-13 01:46:40.060678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.703 qpair failed and we were unable to recover it. 00:35:54.703 [2024-10-13 01:46:40.060822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.703 [2024-10-13 01:46:40.060868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.703 qpair failed and we were unable to recover it. 00:35:54.703 [2024-10-13 01:46:40.061027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.703 [2024-10-13 01:46:40.061072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.703 qpair failed and we were unable to recover it. 00:35:54.703 [2024-10-13 01:46:40.061199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.703 [2024-10-13 01:46:40.061247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.703 qpair failed and we were unable to recover it. 00:35:54.703 [2024-10-13 01:46:40.061379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.703 [2024-10-13 01:46:40.061418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.703 qpair failed and we were unable to recover it. 00:35:54.703 [2024-10-13 01:46:40.061554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.703 [2024-10-13 01:46:40.061582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.703 qpair failed and we were unable to recover it. 
00:35:54.703 [2024-10-13 01:46:40.061675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.703 [2024-10-13 01:46:40.061704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.703 qpair failed and we were unable to recover it. 00:35:54.703 [2024-10-13 01:46:40.061864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.703 [2024-10-13 01:46:40.061893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.703 qpair failed and we were unable to recover it. 00:35:54.703 [2024-10-13 01:46:40.061985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.703 [2024-10-13 01:46:40.062012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.703 qpair failed and we were unable to recover it. 00:35:54.703 [2024-10-13 01:46:40.062131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.703 [2024-10-13 01:46:40.062160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.703 qpair failed and we were unable to recover it. 00:35:54.703 [2024-10-13 01:46:40.062270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.703 [2024-10-13 01:46:40.062316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.703 qpair failed and we were unable to recover it. 00:35:54.703 [2024-10-13 01:46:40.062439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.703 [2024-10-13 01:46:40.062487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.703 qpair failed and we were unable to recover it. 00:35:54.703 [2024-10-13 01:46:40.062614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.703 [2024-10-13 01:46:40.062644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.703 qpair failed and we were unable to recover it. 00:35:54.703 [2024-10-13 01:46:40.062729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.703 [2024-10-13 01:46:40.062772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.703 qpair failed and we were unable to recover it. 00:35:54.703 [2024-10-13 01:46:40.062873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.703 [2024-10-13 01:46:40.062903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.703 qpair failed and we were unable to recover it. 00:35:54.703 [2024-10-13 01:46:40.063058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.703 [2024-10-13 01:46:40.063088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.703 qpair failed and we were unable to recover it. 
00:35:54.703 [2024-10-13 01:46:40.063215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.703 [2024-10-13 01:46:40.063246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.703 qpair failed and we were unable to recover it. 00:35:54.703 [2024-10-13 01:46:40.063383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.703 [2024-10-13 01:46:40.063410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.703 qpair failed and we were unable to recover it. 00:35:54.703 [2024-10-13 01:46:40.063519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.703 [2024-10-13 01:46:40.063546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.703 qpair failed and we were unable to recover it. 00:35:54.703 [2024-10-13 01:46:40.063671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.703 [2024-10-13 01:46:40.063716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.703 qpair failed and we were unable to recover it. 00:35:54.703 [2024-10-13 01:46:40.063881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.703 [2024-10-13 01:46:40.063925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.703 qpair failed and we were unable to recover it. 00:35:54.703 [2024-10-13 01:46:40.064066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.703 [2024-10-13 01:46:40.064093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.703 qpair failed and we were unable to recover it. 00:35:54.703 [2024-10-13 01:46:40.064206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.703 [2024-10-13 01:46:40.064232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.703 qpair failed and we were unable to recover it. 00:35:54.703 [2024-10-13 01:46:40.064321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.703 [2024-10-13 01:46:40.064348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.703 qpair failed and we were unable to recover it. 00:35:54.703 [2024-10-13 01:46:40.064496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.703 [2024-10-13 01:46:40.064523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.703 qpair failed and we were unable to recover it. 00:35:54.703 [2024-10-13 01:46:40.064645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.703 [2024-10-13 01:46:40.064674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.703 qpair failed and we were unable to recover it. 
00:35:54.703 [2024-10-13 01:46:40.064763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.703 [2024-10-13 01:46:40.064791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.703 qpair failed and we were unable to recover it. 00:35:54.703 [2024-10-13 01:46:40.064895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.703 [2024-10-13 01:46:40.064921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.703 qpair failed and we were unable to recover it. 00:35:54.703 [2024-10-13 01:46:40.065062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.703 [2024-10-13 01:46:40.065105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.703 qpair failed and we were unable to recover it. 00:35:54.703 [2024-10-13 01:46:40.065228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.703 [2024-10-13 01:46:40.065260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.703 qpair failed and we were unable to recover it. 00:35:54.703 [2024-10-13 01:46:40.065396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.703 [2024-10-13 01:46:40.065426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.703 qpair failed and we were unable to recover it. 00:35:54.703 [2024-10-13 01:46:40.065564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.703 [2024-10-13 01:46:40.065591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.703 qpair failed and we were unable to recover it. 00:35:54.703 [2024-10-13 01:46:40.065669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.703 [2024-10-13 01:46:40.065694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.703 qpair failed and we were unable to recover it. 00:35:54.703 [2024-10-13 01:46:40.065858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.703 [2024-10-13 01:46:40.065908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.703 qpair failed and we were unable to recover it. 00:35:54.703 [2024-10-13 01:46:40.066118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.703 [2024-10-13 01:46:40.066157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.703 qpair failed and we were unable to recover it. 00:35:54.703 [2024-10-13 01:46:40.066299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.703 [2024-10-13 01:46:40.066330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.703 qpair failed and we were unable to recover it. 
00:35:54.704 [2024-10-13 01:46:40.066463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-10-13 01:46:40.066498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-10-13 01:46:40.066603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-10-13 01:46:40.066631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-10-13 01:46:40.066758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-10-13 01:46:40.066787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-10-13 01:46:40.066908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-10-13 01:46:40.066937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-10-13 01:46:40.067024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-10-13 01:46:40.067051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-10-13 01:46:40.067182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-10-13 01:46:40.067222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-10-13 01:46:40.067378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-10-13 01:46:40.067407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-10-13 01:46:40.067562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-10-13 01:46:40.067589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-10-13 01:46:40.067701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-10-13 01:46:40.067727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-10-13 01:46:40.067834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-10-13 01:46:40.067863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 
00:35:54.704 [2024-10-13 01:46:40.067976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-10-13 01:46:40.068017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-10-13 01:46:40.068186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-10-13 01:46:40.068215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-10-13 01:46:40.068344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-10-13 01:46:40.068373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-10-13 01:46:40.068523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-10-13 01:46:40.068564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-10-13 01:46:40.068705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-10-13 01:46:40.068736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-10-13 01:46:40.068890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-10-13 01:46:40.068919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-10-13 01:46:40.069046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-10-13 01:46:40.069075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-10-13 01:46:40.069285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-10-13 01:46:40.069314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-10-13 01:46:40.069406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-10-13 01:46:40.069433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-10-13 01:46:40.069551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-10-13 01:46:40.069584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 
00:35:54.704 [2024-10-13 01:46:40.069678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-10-13 01:46:40.069708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-10-13 01:46:40.069876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-10-13 01:46:40.069923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-10-13 01:46:40.070115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-10-13 01:46:40.070165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-10-13 01:46:40.070308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-10-13 01:46:40.070335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-10-13 01:46:40.070450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-10-13 01:46:40.070483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-10-13 01:46:40.070599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-10-13 01:46:40.070629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-10-13 01:46:40.070729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-10-13 01:46:40.070761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-10-13 01:46:40.070882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-10-13 01:46:40.070911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-10-13 01:46:40.071006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-10-13 01:46:40.071033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-10-13 01:46:40.071180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-10-13 01:46:40.071230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 
00:35:54.704 [2024-10-13 01:46:40.071331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-10-13 01:46:40.071359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-10-13 01:46:40.071465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-10-13 01:46:40.071497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-10-13 01:46:40.071585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-10-13 01:46:40.071611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-10-13 01:46:40.071755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-10-13 01:46:40.071781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-10-13 01:46:40.071906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-10-13 01:46:40.071935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-10-13 01:46:40.072058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-10-13 01:46:40.072088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-10-13 01:46:40.072203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-10-13 01:46:40.072232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-10-13 01:46:40.072373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-10-13 01:46:40.072404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 00:35:54.705 [2024-10-13 01:46:40.072523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-10-13 01:46:40.072550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 00:35:54.705 [2024-10-13 01:46:40.072632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-10-13 01:46:40.072657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 
00:35:54.705 [2024-10-13 01:46:40.072783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-10-13 01:46:40.072813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 00:35:54.705 [2024-10-13 01:46:40.072929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-10-13 01:46:40.072959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 00:35:54.705 [2024-10-13 01:46:40.073084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-10-13 01:46:40.073114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 00:35:54.705 [2024-10-13 01:46:40.073266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-10-13 01:46:40.073298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 00:35:54.705 [2024-10-13 01:46:40.073425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-10-13 01:46:40.073452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 00:35:54.705 [2024-10-13 01:46:40.073555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-10-13 01:46:40.073581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 00:35:54.705 [2024-10-13 01:46:40.073679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-10-13 01:46:40.073709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 00:35:54.705 [2024-10-13 01:46:40.073892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-10-13 01:46:40.073922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 00:35:54.705 [2024-10-13 01:46:40.074100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-10-13 01:46:40.074145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 00:35:54.705 [2024-10-13 01:46:40.074269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-10-13 01:46:40.074297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 
00:35:54.705 [2024-10-13 01:46:40.074425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-10-13 01:46:40.074453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 00:35:54.705 [2024-10-13 01:46:40.074610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-10-13 01:46:40.074636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 00:35:54.705 [2024-10-13 01:46:40.074765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-10-13 01:46:40.074794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 00:35:54.705 [2024-10-13 01:46:40.074922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-10-13 01:46:40.074964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 00:35:54.705 [2024-10-13 01:46:40.075104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-10-13 01:46:40.075134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 00:35:54.705 [2024-10-13 01:46:40.075220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-10-13 01:46:40.075265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 00:35:54.705 [2024-10-13 01:46:40.075353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-10-13 01:46:40.075379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 00:35:54.705 [2024-10-13 01:46:40.075458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-10-13 01:46:40.075488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 00:35:54.705 [2024-10-13 01:46:40.075603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-10-13 01:46:40.075628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 00:35:54.705 [2024-10-13 01:46:40.075767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-10-13 01:46:40.075796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 
00:35:54.705 [2024-10-13 01:46:40.075932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-10-13 01:46:40.075961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 00:35:54.705 [2024-10-13 01:46:40.076121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-10-13 01:46:40.076152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 00:35:54.705 [2024-10-13 01:46:40.076312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-10-13 01:46:40.076370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 00:35:54.705 [2024-10-13 01:46:40.076502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-10-13 01:46:40.076531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 00:35:54.705 [2024-10-13 01:46:40.076641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-10-13 01:46:40.076679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 00:35:54.705 [2024-10-13 01:46:40.076838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-10-13 01:46:40.076868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 00:35:54.705 [2024-10-13 01:46:40.077014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-10-13 01:46:40.077058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 00:35:54.705 [2024-10-13 01:46:40.077167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-10-13 01:46:40.077197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 00:35:54.705 [2024-10-13 01:46:40.077340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-10-13 01:46:40.077367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 00:35:54.705 [2024-10-13 01:46:40.077487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-10-13 01:46:40.077514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 
00:35:54.705 [2024-10-13 01:46:40.077652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-10-13 01:46:40.077680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 00:35:54.705 [2024-10-13 01:46:40.077839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-10-13 01:46:40.077868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 00:35:54.706 [2024-10-13 01:46:40.077995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-10-13 01:46:40.078024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-10-13 01:46:40.078172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-10-13 01:46:40.078201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-10-13 01:46:40.078350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-10-13 01:46:40.078379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-10-13 01:46:40.078545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-10-13 01:46:40.078571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-10-13 01:46:40.078686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-10-13 01:46:40.078713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-10-13 01:46:40.078850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-10-13 01:46:40.078878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-10-13 01:46:40.078982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-10-13 01:46:40.079010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-10-13 01:46:40.079120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-10-13 01:46:40.079149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 
00:35:54.706 [2024-10-13 01:46:40.079261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-10-13 01:46:40.079287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-10-13 01:46:40.079453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-10-13 01:46:40.079491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-10-13 01:46:40.079650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-10-13 01:46:40.079676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-10-13 01:46:40.079780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-10-13 01:46:40.079808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-10-13 01:46:40.079937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-10-13 01:46:40.079966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-10-13 01:46:40.080085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-10-13 01:46:40.080112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-10-13 01:46:40.080229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-10-13 01:46:40.080258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-10-13 01:46:40.080379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-10-13 01:46:40.080408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-10-13 01:46:40.080526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-10-13 01:46:40.080560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-10-13 01:46:40.080697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-10-13 01:46:40.080731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 
00:35:54.706 [2024-10-13 01:46:40.080865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-10-13 01:46:40.080900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-10-13 01:46:40.081020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-10-13 01:46:40.081062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-10-13 01:46:40.081195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-10-13 01:46:40.081238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-10-13 01:46:40.081362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-10-13 01:46:40.081405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-10-13 01:46:40.081565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-10-13 01:46:40.081620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-10-13 01:46:40.081758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-10-13 01:46:40.081796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-10-13 01:46:40.081958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-10-13 01:46:40.081990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-10-13 01:46:40.082120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-10-13 01:46:40.082150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-10-13 01:46:40.082251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-10-13 01:46:40.082280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-10-13 01:46:40.082415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-10-13 01:46:40.082440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 
00:35:54.706 [2024-10-13 01:46:40.082547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-10-13 01:46:40.082574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-10-13 01:46:40.082665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-10-13 01:46:40.082691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-10-13 01:46:40.082824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-10-13 01:46:40.082852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-10-13 01:46:40.082944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-10-13 01:46:40.082973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-10-13 01:46:40.083050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-10-13 01:46:40.083079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-10-13 01:46:40.083201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-10-13 01:46:40.083230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-10-13 01:46:40.083345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-10-13 01:46:40.083374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-10-13 01:46:40.083500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-10-13 01:46:40.083542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-10-13 01:46:40.083655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-10-13 01:46:40.083681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-10-13 01:46:40.083822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-10-13 01:46:40.083850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 
00:35:54.707 [2024-10-13 01:46:40.083941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-10-13 01:46:40.083970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-10-13 01:46:40.084097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-10-13 01:46:40.084126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-10-13 01:46:40.084277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-10-13 01:46:40.084306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-10-13 01:46:40.084401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-10-13 01:46:40.084430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-10-13 01:46:40.084557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-10-13 01:46:40.084585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-10-13 01:46:40.084703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-10-13 01:46:40.084729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-10-13 01:46:40.084863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-10-13 01:46:40.084892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-10-13 01:46:40.085006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-10-13 01:46:40.085035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-10-13 01:46:40.085130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-10-13 01:46:40.085165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-10-13 01:46:40.085262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-10-13 01:46:40.085290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 
00:35:54.707 [2024-10-13 01:46:40.085414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-10-13 01:46:40.085443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-10-13 01:46:40.085557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-10-13 01:46:40.085584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-10-13 01:46:40.085699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-10-13 01:46:40.085725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-10-13 01:46:40.085833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-10-13 01:46:40.085859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-10-13 01:46:40.085936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-10-13 01:46:40.085962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-10-13 01:46:40.086085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-10-13 01:46:40.086114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-10-13 01:46:40.086242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-10-13 01:46:40.086268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-10-13 01:46:40.086370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-10-13 01:46:40.086409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-10-13 01:46:40.086582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-10-13 01:46:40.086610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-10-13 01:46:40.086729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-10-13 01:46:40.086755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 
00:35:54.707 [2024-10-13 01:46:40.086848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-10-13 01:46:40.086875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-10-13 01:46:40.086963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-10-13 01:46:40.086990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-10-13 01:46:40.087141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-10-13 01:46:40.087167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-10-13 01:46:40.087253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-10-13 01:46:40.087281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-10-13 01:46:40.087404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-10-13 01:46:40.087429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-10-13 01:46:40.087557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-10-13 01:46:40.087583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-10-13 01:46:40.087671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-10-13 01:46:40.087697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-10-13 01:46:40.087832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-10-13 01:46:40.087857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-10-13 01:46:40.087974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-10-13 01:46:40.088001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-10-13 01:46:40.088110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-10-13 01:46:40.088137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 
00:35:54.707 [2024-10-13 01:46:40.088312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-10-13 01:46:40.088338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-10-13 01:46:40.088441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-10-13 01:46:40.088485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-10-13 01:46:40.088616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-10-13 01:46:40.088642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-10-13 01:46:40.088734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-10-13 01:46:40.088775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-10-13 01:46:40.088920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-10-13 01:46:40.088946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-10-13 01:46:40.089040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-10-13 01:46:40.089072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-10-13 01:46:40.089217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-10-13 01:46:40.089246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-10-13 01:46:40.089395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-10-13 01:46:40.089422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.708 [2024-10-13 01:46:40.089541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.708 [2024-10-13 01:46:40.089568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.708 qpair failed and we were unable to recover it. 00:35:54.708 [2024-10-13 01:46:40.089682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.708 [2024-10-13 01:46:40.089709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.708 qpair failed and we were unable to recover it. 
00:35:54.708 [2024-10-13 01:46:40.089812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.708 [2024-10-13 01:46:40.089855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.708 qpair failed and we were unable to recover it. 00:35:54.708 [2024-10-13 01:46:40.089992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.708 [2024-10-13 01:46:40.090018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.708 qpair failed and we were unable to recover it. 00:35:54.708 [2024-10-13 01:46:40.090187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.708 [2024-10-13 01:46:40.090213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.708 qpair failed and we were unable to recover it. 00:35:54.708 [2024-10-13 01:46:40.090350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.708 [2024-10-13 01:46:40.090376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.708 qpair failed and we were unable to recover it. 00:35:54.708 [2024-10-13 01:46:40.090493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.708 [2024-10-13 01:46:40.090520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.708 qpair failed and we were unable to recover it. 00:35:54.708 [2024-10-13 01:46:40.090614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.708 [2024-10-13 01:46:40.090641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.708 qpair failed and we were unable to recover it. 00:35:54.708 [2024-10-13 01:46:40.090758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.708 [2024-10-13 01:46:40.090784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.708 qpair failed and we were unable to recover it. 00:35:54.708 [2024-10-13 01:46:40.090901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.708 [2024-10-13 01:46:40.090927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.708 qpair failed and we were unable to recover it. 00:35:54.708 [2024-10-13 01:46:40.091038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.708 [2024-10-13 01:46:40.091065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.708 qpair failed and we were unable to recover it. 00:35:54.708 [2024-10-13 01:46:40.091184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.708 [2024-10-13 01:46:40.091212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.708 qpair failed and we were unable to recover it. 
00:35:54.708 [2024-10-13 01:46:40.091333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.708 [2024-10-13 01:46:40.091376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.708 qpair failed and we were unable to recover it. 00:35:54.708 [2024-10-13 01:46:40.091543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.708 [2024-10-13 01:46:40.091587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.708 qpair failed and we were unable to recover it. 00:35:54.708 [2024-10-13 01:46:40.091704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.708 [2024-10-13 01:46:40.091731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.708 qpair failed and we were unable to recover it. 00:35:54.708 [2024-10-13 01:46:40.091851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.708 [2024-10-13 01:46:40.091878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.708 qpair failed and we were unable to recover it. 00:35:54.708 [2024-10-13 01:46:40.091994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.708 [2024-10-13 01:46:40.092021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.708 qpair failed and we were unable to recover it. 00:35:54.708 [2024-10-13 01:46:40.092135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.708 [2024-10-13 01:46:40.092162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.708 qpair failed and we were unable to recover it. 00:35:54.708 [2024-10-13 01:46:40.092273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.708 [2024-10-13 01:46:40.092300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.708 qpair failed and we were unable to recover it. 00:35:54.708 [2024-10-13 01:46:40.092445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.708 [2024-10-13 01:46:40.092480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.708 qpair failed and we were unable to recover it. 00:35:54.708 [2024-10-13 01:46:40.092614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.708 [2024-10-13 01:46:40.092641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.708 qpair failed and we were unable to recover it. 00:35:54.708 [2024-10-13 01:46:40.092725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.708 [2024-10-13 01:46:40.092751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.708 qpair failed and we were unable to recover it. 
00:35:54.708 [2024-10-13 01:46:40.092855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.708 [2024-10-13 01:46:40.092884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420
00:35:54.708 qpair failed and we were unable to recover it.
00:35:54.708 [2024-10-13 01:46:40.093163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.708 [2024-10-13 01:46:40.093195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420
00:35:54.708 qpair failed and we were unable to recover it.
00:35:54.712 [2024-10-13 01:46:40.113934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.712 [2024-10-13 01:46:40.113991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420
00:35:54.712 qpair failed and we were unable to recover it.
[Identical connect() failures (errno = 111) and unrecoverable qpair errors against 10.0.0.2:4420 repeat for tqpairs 0x1d44b60, 0x7f483c000b90, and 0x7f4834000b90 throughout 01:46:40.092–01:46:40.123; duplicate log entries omitted.]
00:35:54.714 [2024-10-13 01:46:40.123309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.714 [2024-10-13 01:46:40.123343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.714 qpair failed and we were unable to recover it. 00:35:54.714 [2024-10-13 01:46:40.123491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.714 [2024-10-13 01:46:40.123529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.714 qpair failed and we were unable to recover it. 00:35:54.714 [2024-10-13 01:46:40.123665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.714 [2024-10-13 01:46:40.123705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.714 qpair failed and we were unable to recover it. 00:35:54.714 [2024-10-13 01:46:40.123880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.714 [2024-10-13 01:46:40.123925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.714 qpair failed and we were unable to recover it. 00:35:54.714 [2024-10-13 01:46:40.124034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.714 [2024-10-13 01:46:40.124065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.714 qpair failed and we were unable to recover it. 00:35:54.714 [2024-10-13 01:46:40.124192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.714 [2024-10-13 01:46:40.124226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.714 qpair failed and we were unable to recover it. 00:35:54.714 [2024-10-13 01:46:40.124328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.714 [2024-10-13 01:46:40.124355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.714 qpair failed and we were unable to recover it. 00:35:54.714 [2024-10-13 01:46:40.124475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.714 [2024-10-13 01:46:40.124503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.714 qpair failed and we were unable to recover it. 00:35:54.714 [2024-10-13 01:46:40.124603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.714 [2024-10-13 01:46:40.124630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.714 qpair failed and we were unable to recover it. 00:35:54.714 [2024-10-13 01:46:40.124719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.714 [2024-10-13 01:46:40.124764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.714 qpair failed and we were unable to recover it. 
00:35:54.714 [2024-10-13 01:46:40.124887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.714 [2024-10-13 01:46:40.124916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.714 qpair failed and we were unable to recover it. 00:35:54.714 [2024-10-13 01:46:40.125040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.714 [2024-10-13 01:46:40.125069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.714 qpair failed and we were unable to recover it. 00:35:54.714 [2024-10-13 01:46:40.125165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.714 [2024-10-13 01:46:40.125194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.714 qpair failed and we were unable to recover it. 00:35:54.714 [2024-10-13 01:46:40.125286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.714 [2024-10-13 01:46:40.125315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.714 qpair failed and we were unable to recover it. 00:35:54.714 [2024-10-13 01:46:40.125438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.714 [2024-10-13 01:46:40.125466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.714 qpair failed and we were unable to recover it. 00:35:54.714 [2024-10-13 01:46:40.125603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.714 [2024-10-13 01:46:40.125629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.714 qpair failed and we were unable to recover it. 00:35:54.714 [2024-10-13 01:46:40.125743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.714 [2024-10-13 01:46:40.125770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.714 qpair failed and we were unable to recover it. 00:35:54.714 [2024-10-13 01:46:40.125858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.714 [2024-10-13 01:46:40.125884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.714 qpair failed and we were unable to recover it. 00:35:54.714 [2024-10-13 01:46:40.125988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.714 [2024-10-13 01:46:40.126017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.714 qpair failed and we were unable to recover it. 00:35:54.714 [2024-10-13 01:46:40.126120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.714 [2024-10-13 01:46:40.126163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.714 qpair failed and we were unable to recover it. 
00:35:54.714 [2024-10-13 01:46:40.126270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.714 [2024-10-13 01:46:40.126302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.714 qpair failed and we were unable to recover it. 00:35:54.714 [2024-10-13 01:46:40.126415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.714 [2024-10-13 01:46:40.126441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.714 qpair failed and we were unable to recover it. 00:35:54.714 [2024-10-13 01:46:40.126532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.714 [2024-10-13 01:46:40.126559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.714 qpair failed and we were unable to recover it. 00:35:54.714 [2024-10-13 01:46:40.126644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.714 [2024-10-13 01:46:40.126670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.714 qpair failed and we were unable to recover it. 00:35:54.714 [2024-10-13 01:46:40.126762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.714 [2024-10-13 01:46:40.126790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.714 qpair failed and we were unable to recover it. 00:35:54.714 [2024-10-13 01:46:40.126901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.714 [2024-10-13 01:46:40.126927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.714 qpair failed and we were unable to recover it. 00:35:54.714 [2024-10-13 01:46:40.127093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.714 [2024-10-13 01:46:40.127121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.714 qpair failed and we were unable to recover it. 00:35:54.714 [2024-10-13 01:46:40.127238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.714 [2024-10-13 01:46:40.127266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.714 qpair failed and we were unable to recover it. 00:35:54.714 [2024-10-13 01:46:40.127357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.714 [2024-10-13 01:46:40.127386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.714 qpair failed and we were unable to recover it. 00:35:54.715 [2024-10-13 01:46:40.127494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.715 [2024-10-13 01:46:40.127521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.715 qpair failed and we were unable to recover it. 
00:35:54.715 [2024-10-13 01:46:40.127614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.715 [2024-10-13 01:46:40.127640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.715 qpair failed and we were unable to recover it. 00:35:54.715 [2024-10-13 01:46:40.127726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.715 [2024-10-13 01:46:40.127752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.715 qpair failed and we were unable to recover it. 00:35:54.715 [2024-10-13 01:46:40.127847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.715 [2024-10-13 01:46:40.127875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.715 qpair failed and we were unable to recover it. 00:35:54.715 [2024-10-13 01:46:40.128026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.715 [2024-10-13 01:46:40.128055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.715 qpair failed and we were unable to recover it. 00:35:54.715 [2024-10-13 01:46:40.128145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.715 [2024-10-13 01:46:40.128181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.715 qpair failed and we were unable to recover it. 00:35:54.715 [2024-10-13 01:46:40.128282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.715 [2024-10-13 01:46:40.128326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.715 qpair failed and we were unable to recover it. 00:35:54.715 [2024-10-13 01:46:40.128414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.715 [2024-10-13 01:46:40.128440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.715 qpair failed and we were unable to recover it. 00:35:54.715 [2024-10-13 01:46:40.128562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.715 [2024-10-13 01:46:40.128591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.715 qpair failed and we were unable to recover it. 00:35:54.715 [2024-10-13 01:46:40.128685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.715 [2024-10-13 01:46:40.128713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.715 qpair failed and we were unable to recover it. 00:35:54.715 [2024-10-13 01:46:40.128819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.715 [2024-10-13 01:46:40.128849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.715 qpair failed and we were unable to recover it. 
00:35:54.715 [2024-10-13 01:46:40.128937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.715 [2024-10-13 01:46:40.128965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.715 qpair failed and we were unable to recover it. 00:35:54.715 [2024-10-13 01:46:40.129066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.715 [2024-10-13 01:46:40.129095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.715 qpair failed and we were unable to recover it. 00:35:54.715 [2024-10-13 01:46:40.129218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.715 [2024-10-13 01:46:40.129247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.715 qpair failed and we were unable to recover it. 00:35:54.715 [2024-10-13 01:46:40.129398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.715 [2024-10-13 01:46:40.129428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.715 qpair failed and we were unable to recover it. 00:35:54.715 [2024-10-13 01:46:40.129536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.715 [2024-10-13 01:46:40.129563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.715 qpair failed and we were unable to recover it. 00:35:54.715 [2024-10-13 01:46:40.129683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.715 [2024-10-13 01:46:40.129709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.715 qpair failed and we were unable to recover it. 00:35:54.715 [2024-10-13 01:46:40.129789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.715 [2024-10-13 01:46:40.129815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.715 qpair failed and we were unable to recover it. 00:35:54.715 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1763301 Killed "${NVMF_APP[@]}" "$@" 00:35:54.715 [2024-10-13 01:46:40.129921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.715 [2024-10-13 01:46:40.129950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.715 qpair failed and we were unable to recover it. 00:35:54.715 [2024-10-13 01:46:40.130060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.715 [2024-10-13 01:46:40.130085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.715 qpair failed and we were unable to recover it. 
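Note on the errno = 111 spam above: 111 is ECONNREFUSED on Linux. Once target_disconnect.sh reports the old nvmf_tgt process as Killed (line 36 of the script), nothing is listening on 10.0.0.2:4420, so every connect() attempt from the host's TCP transport is refused until the target is brought back. A minimal standalone sketch of the same failure mode, illustrative only and not SPDK's posix_sock_create (address and port taken from the log):

    /* connect_refused.c - sketch: connect() to a dead listener returns ECONNREFUSED (111 on Linux). */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                      /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);   /* target address from the log */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no nvmf_tgt listening, this prints errno 111 (Connection refused),
             * the same errno reported by posix_sock_create in the log above. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }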
00:35:54.715 [2024-10-13 01:46:40.130223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.715 01:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:35:54.715 [2024-10-13 01:46:40.130252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.715 qpair failed and we were unable to recover it. 00:35:54.715 [2024-10-13 01:46:40.130380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.715 [2024-10-13 01:46:40.130409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.715 qpair failed and we were unable to recover it. 00:35:54.715 01:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:35:54.715 [2024-10-13 01:46:40.130525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.715 [2024-10-13 01:46:40.130552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.715 qpair failed and we were unable to recover it. 00:35:54.715 01:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:54.715 [2024-10-13 01:46:40.130685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.715 [2024-10-13 01:46:40.130712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.715 qpair failed and we were unable to recover it. 00:35:54.715 01:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:54.715 [2024-10-13 01:46:40.130825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.715 [2024-10-13 01:46:40.130854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.715 01:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:54.715 qpair failed and we were unable to recover it. 00:35:54.715 [2024-10-13 01:46:40.130976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.715 [2024-10-13 01:46:40.131019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.715 qpair failed and we were unable to recover it. 00:35:54.715 [2024-10-13 01:46:40.131141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.715 [2024-10-13 01:46:40.131170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.715 qpair failed and we were unable to recover it. 00:35:54.715 [2024-10-13 01:46:40.131276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.715 [2024-10-13 01:46:40.131306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.715 qpair failed and we were unable to recover it. 
00:35:54.715 [2024-10-13 01:46:40.131419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.715 [2024-10-13 01:46:40.131446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.715 qpair failed and we were unable to recover it. 00:35:54.715 [2024-10-13 01:46:40.131552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.715 [2024-10-13 01:46:40.131583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.715 qpair failed and we were unable to recover it. 00:35:54.715 [2024-10-13 01:46:40.131695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.715 [2024-10-13 01:46:40.131721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.715 qpair failed and we were unable to recover it. 00:35:54.715 [2024-10-13 01:46:40.131874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.715 [2024-10-13 01:46:40.131899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.715 qpair failed and we were unable to recover it. 00:35:54.715 [2024-10-13 01:46:40.132027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.715 [2024-10-13 01:46:40.132055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.715 qpair failed and we were unable to recover it. 00:35:54.715 [2024-10-13 01:46:40.132141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.715 [2024-10-13 01:46:40.132169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.715 qpair failed and we were unable to recover it. 00:35:54.715 [2024-10-13 01:46:40.132282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.715 [2024-10-13 01:46:40.132308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.715 qpair failed and we were unable to recover it. 00:35:54.715 [2024-10-13 01:46:40.132442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.715 [2024-10-13 01:46:40.132478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.715 qpair failed and we were unable to recover it. 00:35:54.715 [2024-10-13 01:46:40.132599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.715 [2024-10-13 01:46:40.132625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it. 00:35:54.716 [2024-10-13 01:46:40.132742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.716 [2024-10-13 01:46:40.132768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it. 
00:35:54.716 [2024-10-13 01:46:40.132862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.716 [2024-10-13 01:46:40.132887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it. 00:35:54.716 [2024-10-13 01:46:40.132993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.716 [2024-10-13 01:46:40.133036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it. 00:35:54.716 [2024-10-13 01:46:40.133176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.716 [2024-10-13 01:46:40.133205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it. 00:35:54.716 [2024-10-13 01:46:40.133329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.716 [2024-10-13 01:46:40.133357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it. 00:35:54.716 [2024-10-13 01:46:40.133498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.716 [2024-10-13 01:46:40.133558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it. 00:35:54.716 [2024-10-13 01:46:40.133682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.716 [2024-10-13 01:46:40.133725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it. 00:35:54.716 [2024-10-13 01:46:40.133894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.716 [2024-10-13 01:46:40.133941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it. 00:35:54.716 [2024-10-13 01:46:40.134079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.716 [2024-10-13 01:46:40.134124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it. 00:35:54.716 [2024-10-13 01:46:40.134249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.716 [2024-10-13 01:46:40.134294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it. 00:35:54.716 [2024-10-13 01:46:40.134375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.716 [2024-10-13 01:46:40.134402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it. 
00:35:54.716 [2024-10-13 01:46:40.134550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.716 [2024-10-13 01:46:40.134581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it. 00:35:54.716 [2024-10-13 01:46:40.134737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.716 [2024-10-13 01:46:40.134781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it. 00:35:54.716 [2024-10-13 01:46:40.134881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.716 [2024-10-13 01:46:40.134907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it. 00:35:54.716 [2024-10-13 01:46:40.134996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.716 [2024-10-13 01:46:40.135023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it. 00:35:54.716 01:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=1763772 00:35:54.716 01:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:35:54.716 [2024-10-13 01:46:40.135110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.716 [2024-10-13 01:46:40.135137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it. 00:35:54.716 01:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 1763772 00:35:54.716 [2024-10-13 01:46:40.135230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.716 [2024-10-13 01:46:40.135257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it. 00:35:54.716 [2024-10-13 01:46:40.135353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.716 [2024-10-13 01:46:40.135386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it. 00:35:54.716 01:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1763772 ']' 00:35:54.716 [2024-10-13 01:46:40.135503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.716 [2024-10-13 01:46:40.135531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it. 
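The trace above relaunches nvmf_tgt inside the cvl_0_0_ns_spdk network namespace with -m 0xF0, the SPDK application's CPU core mask; 0xF0 has bits 4-7 set, so the restarted target's reactors are pinned to cores 4, 5, 6 and 7. A quick way to read such a mask (plain bit arithmetic, not SPDK code):

    /* cpumask.c - decode a hex core mask like the -m 0xF0 passed to nvmf_tgt above. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long mask = 0xF0;   /* core mask from the relaunch command */

        printf("cores selected by 0x%lX:", mask);
        for (int core = 0; core < 64; core++) {
            if (mask & (1UL << core))
                printf(" %d", core);   /* prints: 4 5 6 7 */
        }
        printf("\n");
        return 0;
    }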
00:35:54.716 01:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:54.716 [2024-10-13 01:46:40.135613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.716 [2024-10-13 01:46:40.135640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it. 00:35:54.716 01:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:54.716 [2024-10-13 01:46:40.135719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.716 [2024-10-13 01:46:40.135746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it. 00:35:54.716 [2024-10-13 01:46:40.135826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.716 01:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:54.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:54.716 [2024-10-13 01:46:40.135853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it. 00:35:54.716 01:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:54.716 [2024-10-13 01:46:40.135985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.716 [2024-10-13 01:46:40.136012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it. 00:35:54.716 01:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:54.716 [2024-10-13 01:46:40.136095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.716 [2024-10-13 01:46:40.136125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it. 00:35:54.716 [2024-10-13 01:46:40.136228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.716 [2024-10-13 01:46:40.136254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it. 00:35:54.716 [2024-10-13 01:46:40.136339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.716 [2024-10-13 01:46:40.136365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it. 
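While the qpair connect() retries keep failing, the harness has already started a fresh target (nvmfpid=1763772) and is now blocked in waitforlisten, polling the RPC socket rpc_addr=/var/tmp/spdk.sock up to max_retries=100 times. A rough standalone sketch of what such a wait-for-listen loop amounts to; this is illustrative only, not the actual waitforlisten helper from autotest_common.sh, and the 100 ms retry interval is an arbitrary choice:

    /* wait_for_rpc_sock.c - hedged sketch of a "wait until the app listens on its
     * UNIX-domain RPC socket" loop; not the real waitforlisten implementation. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    static int rpc_sock_ready(const char *path)
    {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return 0;

        struct sockaddr_un addr = {0};
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        int ok = connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0;
        close(fd);
        return ok;
    }

    int main(void)
    {
        const char *path = "/var/tmp/spdk.sock";   /* rpc_addr from the log */
        int max_retries = 100;                     /* max_retries from the log */

        for (int i = 0; i < max_retries; i++) {
            if (rpc_sock_ready(path)) {
                printf("target is listening on %s\n", path);
                return 0;
            }
            usleep(100 * 1000);   /* retry every 100 ms until the socket accepts */
        }

        fprintf(stderr, "timed out waiting for %s\n", path);
        return 1;
    }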
00:35:54.717 [2024-10-13 01:46:40.136461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.717 [2024-10-13 01:46:40.136494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.717 qpair failed and we were unable to recover it. 00:35:54.717 [2024-10-13 01:46:40.136585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.717 [2024-10-13 01:46:40.136611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.717 qpair failed and we were unable to recover it. 00:35:54.717 [2024-10-13 01:46:40.136733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.717 [2024-10-13 01:46:40.136759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.717 qpair failed and we were unable to recover it. 00:35:54.717 [2024-10-13 01:46:40.136888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.717 [2024-10-13 01:46:40.136914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.717 qpair failed and we were unable to recover it. 00:35:54.717 [2024-10-13 01:46:40.137025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.717 [2024-10-13 01:46:40.137052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.717 qpair failed and we were unable to recover it. 00:35:54.717 [2024-10-13 01:46:40.137131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.717 [2024-10-13 01:46:40.137159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.717 qpair failed and we were unable to recover it. 00:35:54.717 [2024-10-13 01:46:40.137247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.717 [2024-10-13 01:46:40.137274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.717 qpair failed and we were unable to recover it. 00:35:54.717 [2024-10-13 01:46:40.137375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.717 [2024-10-13 01:46:40.137402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.717 qpair failed and we were unable to recover it. 00:35:54.717 [2024-10-13 01:46:40.137495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.717 [2024-10-13 01:46:40.137521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.717 qpair failed and we were unable to recover it. 00:35:54.717 [2024-10-13 01:46:40.137613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.717 [2024-10-13 01:46:40.137639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.717 qpair failed and we were unable to recover it. 
00:35:54.717 [2024-10-13 01:46:40.137723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.717 [2024-10-13 01:46:40.137750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.717 qpair failed and we were unable to recover it. 00:35:54.717 [2024-10-13 01:46:40.137864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.717 [2024-10-13 01:46:40.137890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.717 qpair failed and we were unable to recover it. 00:35:54.717 [2024-10-13 01:46:40.137979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.717 [2024-10-13 01:46:40.138005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.717 qpair failed and we were unable to recover it. 00:35:54.717 [2024-10-13 01:46:40.138090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.717 [2024-10-13 01:46:40.138117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.717 qpair failed and we were unable to recover it. 00:35:54.717 [2024-10-13 01:46:40.138205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.717 [2024-10-13 01:46:40.138231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.717 qpair failed and we were unable to recover it. 00:35:54.717 [2024-10-13 01:46:40.138349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.717 [2024-10-13 01:46:40.138375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.717 qpair failed and we were unable to recover it. 00:35:54.717 [2024-10-13 01:46:40.138467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.717 [2024-10-13 01:46:40.138501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.717 qpair failed and we were unable to recover it. 00:35:54.717 [2024-10-13 01:46:40.138596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.717 [2024-10-13 01:46:40.138622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.717 qpair failed and we were unable to recover it. 00:35:54.717 [2024-10-13 01:46:40.138706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.717 [2024-10-13 01:46:40.138742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.717 qpair failed and we were unable to recover it. 00:35:54.717 [2024-10-13 01:46:40.138857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.717 [2024-10-13 01:46:40.138883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.717 qpair failed and we were unable to recover it. 
00:35:54.717 [2024-10-13 01:46:40.138960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.717 [2024-10-13 01:46:40.138987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.717 qpair failed and we were unable to recover it. 00:35:54.717 [2024-10-13 01:46:40.139094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.717 [2024-10-13 01:46:40.139120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.717 qpair failed and we were unable to recover it. 00:35:54.717 [2024-10-13 01:46:40.139210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.717 [2024-10-13 01:46:40.139236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.717 qpair failed and we were unable to recover it. 00:35:54.717 [2024-10-13 01:46:40.139377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.717 [2024-10-13 01:46:40.139404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.717 qpair failed and we were unable to recover it. 00:35:54.717 [2024-10-13 01:46:40.139498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.717 [2024-10-13 01:46:40.139525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.717 qpair failed and we were unable to recover it. 00:35:54.717 [2024-10-13 01:46:40.139613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.717 [2024-10-13 01:46:40.139639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.717 qpair failed and we were unable to recover it. 00:35:54.717 [2024-10-13 01:46:40.139720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.717 [2024-10-13 01:46:40.139747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.717 qpair failed and we were unable to recover it. 00:35:54.717 [2024-10-13 01:46:40.139841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.717 [2024-10-13 01:46:40.139868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.717 qpair failed and we were unable to recover it. 00:35:54.717 [2024-10-13 01:46:40.140062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.717 [2024-10-13 01:46:40.140092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.717 qpair failed and we were unable to recover it. 00:35:54.717 [2024-10-13 01:46:40.140211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.717 [2024-10-13 01:46:40.140237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.717 qpair failed and we were unable to recover it. 
00:35:54.717 [2024-10-13 01:46:40.140354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.717 [2024-10-13 01:46:40.140380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.717 qpair failed and we were unable to recover it. 00:35:54.717 [2024-10-13 01:46:40.140572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.717 [2024-10-13 01:46:40.140599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.717 qpair failed and we were unable to recover it. 00:35:54.717 [2024-10-13 01:46:40.140686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.717 [2024-10-13 01:46:40.140712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.717 qpair failed and we were unable to recover it. 00:35:54.717 [2024-10-13 01:46:40.140804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.717 [2024-10-13 01:46:40.140830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.717 qpair failed and we were unable to recover it. 00:35:54.717 [2024-10-13 01:46:40.140940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.717 [2024-10-13 01:46:40.140966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.717 qpair failed and we were unable to recover it. 00:35:54.717 [2024-10-13 01:46:40.141082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.717 [2024-10-13 01:46:40.141109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.717 qpair failed and we were unable to recover it. 00:35:54.717 [2024-10-13 01:46:40.141199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.717 [2024-10-13 01:46:40.141226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.717 qpair failed and we were unable to recover it. 00:35:54.717 [2024-10-13 01:46:40.141311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.717 [2024-10-13 01:46:40.141337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.717 qpair failed and we were unable to recover it. 00:35:54.718 [2024-10-13 01:46:40.141434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.718 [2024-10-13 01:46:40.141460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.718 qpair failed and we were unable to recover it. 00:35:54.718 [2024-10-13 01:46:40.141567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.718 [2024-10-13 01:46:40.141594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.718 qpair failed and we were unable to recover it. 
00:35:54.718 [2024-10-13 01:46:40.142617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.718 [2024-10-13 01:46:40.142650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.718 qpair failed and we were unable to recover it. 00:35:54.718 [2024-10-13 01:46:40.142753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.718 [2024-10-13 01:46:40.142781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.718 qpair failed and we were unable to recover it. 00:35:54.718 [2024-10-13 01:46:40.142907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.718 [2024-10-13 01:46:40.142934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.718 qpair failed and we were unable to recover it. 00:35:54.718 [2024-10-13 01:46:40.143076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.718 [2024-10-13 01:46:40.143102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.718 qpair failed and we were unable to recover it. 00:35:54.718 [2024-10-13 01:46:40.143200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.718 [2024-10-13 01:46:40.143228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.718 qpair failed and we were unable to recover it. 00:35:54.718 [2024-10-13 01:46:40.143427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.718 [2024-10-13 01:46:40.143454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.718 qpair failed and we were unable to recover it. 00:35:54.718 [2024-10-13 01:46:40.143557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.718 [2024-10-13 01:46:40.143583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.718 qpair failed and we were unable to recover it. 00:35:54.718 [2024-10-13 01:46:40.143702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.718 [2024-10-13 01:46:40.143728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.718 qpair failed and we were unable to recover it. 00:35:54.718 [2024-10-13 01:46:40.143817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.718 [2024-10-13 01:46:40.143843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.718 qpair failed and we were unable to recover it. 00:35:54.718 [2024-10-13 01:46:40.143951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.718 [2024-10-13 01:46:40.143977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.718 qpair failed and we were unable to recover it. 
00:35:54.718 [2024-10-13 01:46:40.144061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.718 [2024-10-13 01:46:40.144087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.718 qpair failed and we were unable to recover it. 00:35:54.718 [2024-10-13 01:46:40.144175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.718 [2024-10-13 01:46:40.144202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.718 qpair failed and we were unable to recover it. 00:35:54.718 [2024-10-13 01:46:40.144292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.718 [2024-10-13 01:46:40.144319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.718 qpair failed and we were unable to recover it. 00:35:54.718 [2024-10-13 01:46:40.144435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.718 [2024-10-13 01:46:40.144461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.718 qpair failed and we were unable to recover it. 00:35:54.718 [2024-10-13 01:46:40.144564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.718 [2024-10-13 01:46:40.144592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.718 qpair failed and we were unable to recover it. 00:35:54.718 [2024-10-13 01:46:40.144694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.718 [2024-10-13 01:46:40.144720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.718 qpair failed and we were unable to recover it. 00:35:54.718 [2024-10-13 01:46:40.144805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.718 [2024-10-13 01:46:40.144832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.718 qpair failed and we were unable to recover it. 00:35:54.718 [2024-10-13 01:46:40.144948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.718 [2024-10-13 01:46:40.144975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.718 qpair failed and we were unable to recover it. 00:35:54.718 [2024-10-13 01:46:40.145096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.718 [2024-10-13 01:46:40.145122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.718 qpair failed and we were unable to recover it. 00:35:54.718 [2024-10-13 01:46:40.145235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.718 [2024-10-13 01:46:40.145261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.718 qpair failed and we were unable to recover it. 
00:35:54.718 [2024-10-13 01:46:40.145352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.718 [2024-10-13 01:46:40.145380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.718 qpair failed and we were unable to recover it. 00:35:54.718 [2024-10-13 01:46:40.145464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.718 [2024-10-13 01:46:40.145499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.718 qpair failed and we were unable to recover it. 00:35:54.718 [2024-10-13 01:46:40.145589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.718 [2024-10-13 01:46:40.145615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.718 qpair failed and we were unable to recover it. 00:35:54.718 [2024-10-13 01:46:40.145691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.718 [2024-10-13 01:46:40.145717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.718 qpair failed and we were unable to recover it. 00:35:54.718 [2024-10-13 01:46:40.145825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.718 [2024-10-13 01:46:40.145851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.718 qpair failed and we were unable to recover it. 00:35:54.718 [2024-10-13 01:46:40.145974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.718 [2024-10-13 01:46:40.146000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.718 qpair failed and we were unable to recover it. 00:35:54.718 [2024-10-13 01:46:40.146089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.718 [2024-10-13 01:46:40.146115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.718 qpair failed and we were unable to recover it. 00:35:54.718 [2024-10-13 01:46:40.146201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.718 [2024-10-13 01:46:40.146229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.718 qpair failed and we were unable to recover it. 00:35:54.718 [2024-10-13 01:46:40.146337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.718 [2024-10-13 01:46:40.146380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.718 qpair failed and we were unable to recover it. 00:35:54.718 [2024-10-13 01:46:40.146507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.718 [2024-10-13 01:46:40.146536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.718 qpair failed and we were unable to recover it. 
00:35:54.718 [2024-10-13 01:46:40.146624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.718 [2024-10-13 01:46:40.146650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.718 qpair failed and we were unable to recover it. 00:35:54.718 [2024-10-13 01:46:40.146742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.718 [2024-10-13 01:46:40.146768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.718 qpair failed and we were unable to recover it. 00:35:54.718 [2024-10-13 01:46:40.146869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.718 [2024-10-13 01:46:40.146913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.718 qpair failed and we were unable to recover it. 00:35:54.718 [2024-10-13 01:46:40.147064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.718 [2024-10-13 01:46:40.147093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.718 qpair failed and we were unable to recover it. 00:35:54.718 [2024-10-13 01:46:40.147198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.718 [2024-10-13 01:46:40.147225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.718 qpair failed and we were unable to recover it. 00:35:54.718 [2024-10-13 01:46:40.147315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.718 [2024-10-13 01:46:40.147341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.718 qpair failed and we were unable to recover it. 00:35:54.718 [2024-10-13 01:46:40.147429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.718 [2024-10-13 01:46:40.147455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.718 qpair failed and we were unable to recover it. 00:35:54.718 [2024-10-13 01:46:40.147579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.719 [2024-10-13 01:46:40.147607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.719 qpair failed and we were unable to recover it. 00:35:54.719 [2024-10-13 01:46:40.147702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.719 [2024-10-13 01:46:40.147731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.719 qpair failed and we were unable to recover it. 00:35:54.719 [2024-10-13 01:46:40.147824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.719 [2024-10-13 01:46:40.147852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.719 qpair failed and we were unable to recover it. 
00:35:54.719 [2024-10-13 01:46:40.147972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.719 [2024-10-13 01:46:40.148000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.719 qpair failed and we were unable to recover it. 00:35:54.719 [2024-10-13 01:46:40.148124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.719 [2024-10-13 01:46:40.148152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.719 qpair failed and we were unable to recover it. 00:35:54.719 [2024-10-13 01:46:40.148256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.719 [2024-10-13 01:46:40.148284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.719 qpair failed and we were unable to recover it. 00:35:54.719 [2024-10-13 01:46:40.148419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.719 [2024-10-13 01:46:40.148448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.719 qpair failed and we were unable to recover it. 00:35:54.719 [2024-10-13 01:46:40.148540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.719 [2024-10-13 01:46:40.148567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.719 qpair failed and we were unable to recover it. 00:35:54.719 [2024-10-13 01:46:40.148673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.719 [2024-10-13 01:46:40.148703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.719 qpair failed and we were unable to recover it. 00:35:54.719 [2024-10-13 01:46:40.148843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.719 [2024-10-13 01:46:40.148873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.719 qpair failed and we were unable to recover it. 00:35:54.719 [2024-10-13 01:46:40.148999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.719 [2024-10-13 01:46:40.149043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.719 qpair failed and we were unable to recover it. 00:35:54.719 [2024-10-13 01:46:40.149160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.719 [2024-10-13 01:46:40.149186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.719 qpair failed and we were unable to recover it. 00:35:54.719 [2024-10-13 01:46:40.149272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.719 [2024-10-13 01:46:40.149299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.719 qpair failed and we were unable to recover it. 
00:35:54.719 [2024-10-13 01:46:40.149387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.719 [2024-10-13 01:46:40.149413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.719 qpair failed and we were unable to recover it. 00:35:54.719 [2024-10-13 01:46:40.149507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.719 [2024-10-13 01:46:40.149534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.719 qpair failed and we were unable to recover it. 00:35:54.719 [2024-10-13 01:46:40.149622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.719 [2024-10-13 01:46:40.149648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.719 qpair failed and we were unable to recover it. 00:35:54.719 [2024-10-13 01:46:40.149739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.719 [2024-10-13 01:46:40.149766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.719 qpair failed and we were unable to recover it. 00:35:54.719 [2024-10-13 01:46:40.149854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.719 [2024-10-13 01:46:40.149880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.719 qpair failed and we were unable to recover it. 00:35:54.719 [2024-10-13 01:46:40.149966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.719 [2024-10-13 01:46:40.149992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.719 qpair failed and we were unable to recover it. 00:35:54.719 [2024-10-13 01:46:40.150119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.719 [2024-10-13 01:46:40.150146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.719 qpair failed and we were unable to recover it. 00:35:54.719 [2024-10-13 01:46:40.150260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.719 [2024-10-13 01:46:40.150287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.719 qpair failed and we were unable to recover it. 00:35:54.719 [2024-10-13 01:46:40.150376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.719 [2024-10-13 01:46:40.150402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.719 qpair failed and we were unable to recover it. 00:35:54.719 [2024-10-13 01:46:40.150488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.719 [2024-10-13 01:46:40.150516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.719 qpair failed and we were unable to recover it. 
00:35:54.719 [2024-10-13 01:46:40.150604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.719 [2024-10-13 01:46:40.150630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.719 qpair failed and we were unable to recover it. 00:35:54.719 [2024-10-13 01:46:40.150714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.719 [2024-10-13 01:46:40.150740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.719 qpair failed and we were unable to recover it. 00:35:54.720 [2024-10-13 01:46:40.150824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.720 [2024-10-13 01:46:40.150850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.720 qpair failed and we were unable to recover it. 00:35:54.720 [2024-10-13 01:46:40.150959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.720 [2024-10-13 01:46:40.150986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.720 qpair failed and we were unable to recover it. 00:35:54.720 [2024-10-13 01:46:40.151079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.720 [2024-10-13 01:46:40.151105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.720 qpair failed and we were unable to recover it. 00:35:54.720 [2024-10-13 01:46:40.151181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.720 [2024-10-13 01:46:40.151207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.720 qpair failed and we were unable to recover it. 00:35:54.720 [2024-10-13 01:46:40.151313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.720 [2024-10-13 01:46:40.151339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.720 qpair failed and we were unable to recover it. 00:35:54.720 [2024-10-13 01:46:40.151428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.720 [2024-10-13 01:46:40.151454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.720 qpair failed and we were unable to recover it. 00:35:54.720 [2024-10-13 01:46:40.151547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.720 [2024-10-13 01:46:40.151579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.720 qpair failed and we were unable to recover it. 00:35:54.720 [2024-10-13 01:46:40.151668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.720 [2024-10-13 01:46:40.151694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.720 qpair failed and we were unable to recover it. 
00:35:54.720 [2024-10-13 01:46:40.151808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.720 [2024-10-13 01:46:40.151835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.720 qpair failed and we were unable to recover it. 00:35:54.720 [2024-10-13 01:46:40.151981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.720 [2024-10-13 01:46:40.152007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.720 qpair failed and we were unable to recover it. 00:35:54.720 [2024-10-13 01:46:40.152092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.720 [2024-10-13 01:46:40.152120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.720 qpair failed and we were unable to recover it. 00:35:54.720 [2024-10-13 01:46:40.152234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.720 [2024-10-13 01:46:40.152259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.720 qpair failed and we were unable to recover it. 00:35:54.720 [2024-10-13 01:46:40.152348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.720 [2024-10-13 01:46:40.152374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.720 qpair failed and we were unable to recover it. 00:35:54.720 [2024-10-13 01:46:40.152496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.720 [2024-10-13 01:46:40.152523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.720 qpair failed and we were unable to recover it. 00:35:54.720 [2024-10-13 01:46:40.152612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.720 [2024-10-13 01:46:40.152640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.720 qpair failed and we were unable to recover it. 00:35:54.720 [2024-10-13 01:46:40.152731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.720 [2024-10-13 01:46:40.152756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.720 qpair failed and we were unable to recover it. 00:35:54.720 [2024-10-13 01:46:40.152870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.720 [2024-10-13 01:46:40.152895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.720 qpair failed and we were unable to recover it. 00:35:54.720 [2024-10-13 01:46:40.152977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.720 [2024-10-13 01:46:40.153004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.720 qpair failed and we were unable to recover it. 
00:35:54.720 [2024-10-13 01:46:40.153095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.720 [2024-10-13 01:46:40.153121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.720 qpair failed and we were unable to recover it. 00:35:54.720 [2024-10-13 01:46:40.153209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.720 [2024-10-13 01:46:40.153235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.720 qpair failed and we were unable to recover it. 00:35:54.720 [2024-10-13 01:46:40.153332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.720 [2024-10-13 01:46:40.153357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.720 qpair failed and we were unable to recover it. 00:35:54.720 [2024-10-13 01:46:40.153451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.720 [2024-10-13 01:46:40.153482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.720 qpair failed and we were unable to recover it. 00:35:54.720 [2024-10-13 01:46:40.153593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.720 [2024-10-13 01:46:40.153619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.720 qpair failed and we were unable to recover it. 00:35:54.720 [2024-10-13 01:46:40.153728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.720 [2024-10-13 01:46:40.153755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.720 qpair failed and we were unable to recover it. 00:35:54.720 [2024-10-13 01:46:40.153884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.720 [2024-10-13 01:46:40.153912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.720 qpair failed and we were unable to recover it. 00:35:54.720 [2024-10-13 01:46:40.154064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.720 [2024-10-13 01:46:40.154109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.720 qpair failed and we were unable to recover it. 00:35:54.720 [2024-10-13 01:46:40.154223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.720 [2024-10-13 01:46:40.154249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.720 qpair failed and we were unable to recover it. 00:35:54.720 [2024-10-13 01:46:40.154333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.720 [2024-10-13 01:46:40.154360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.720 qpair failed and we were unable to recover it. 
00:35:54.720 [2024-10-13 01:46:40.154445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.720 [2024-10-13 01:46:40.154500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.720 qpair failed and we were unable to recover it. 00:35:54.720 [2024-10-13 01:46:40.154588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.720 [2024-10-13 01:46:40.154614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.720 qpair failed and we were unable to recover it. 00:35:54.720 [2024-10-13 01:46:40.154705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.720 [2024-10-13 01:46:40.154733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.720 qpair failed and we were unable to recover it. 00:35:54.720 [2024-10-13 01:46:40.154853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.720 [2024-10-13 01:46:40.154880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.720 qpair failed and we were unable to recover it. 00:35:54.720 [2024-10-13 01:46:40.154964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.720 [2024-10-13 01:46:40.154990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.720 qpair failed and we were unable to recover it. 00:35:54.720 [2024-10-13 01:46:40.155081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.720 [2024-10-13 01:46:40.155107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.720 qpair failed and we were unable to recover it. 00:35:54.720 [2024-10-13 01:46:40.155223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.720 [2024-10-13 01:46:40.155250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.720 qpair failed and we were unable to recover it. 00:35:54.720 [2024-10-13 01:46:40.155342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.720 [2024-10-13 01:46:40.155367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.720 qpair failed and we were unable to recover it. 00:35:54.720 [2024-10-13 01:46:40.155463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.720 [2024-10-13 01:46:40.155494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.720 qpair failed and we were unable to recover it. 00:35:54.720 [2024-10-13 01:46:40.155585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.720 [2024-10-13 01:46:40.155610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.720 qpair failed and we were unable to recover it. 
00:35:54.721 [2024-10-13 01:46:40.155688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.721 [2024-10-13 01:46:40.155713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.721 qpair failed and we were unable to recover it. 00:35:54.721 [2024-10-13 01:46:40.155818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.721 [2024-10-13 01:46:40.155845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.721 qpair failed and we were unable to recover it. 00:35:54.721 [2024-10-13 01:46:40.155926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.721 [2024-10-13 01:46:40.155950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.721 qpair failed and we were unable to recover it. 00:35:54.721 [2024-10-13 01:46:40.156061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.721 [2024-10-13 01:46:40.156087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.721 qpair failed and we were unable to recover it. 00:35:54.721 [2024-10-13 01:46:40.156202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.721 [2024-10-13 01:46:40.156227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.721 qpair failed and we were unable to recover it. 00:35:54.721 [2024-10-13 01:46:40.156313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.721 [2024-10-13 01:46:40.156339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.721 qpair failed and we were unable to recover it. 00:35:54.721 [2024-10-13 01:46:40.156446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.721 [2024-10-13 01:46:40.156482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.721 qpair failed and we were unable to recover it. 00:35:54.721 [2024-10-13 01:46:40.156576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.721 [2024-10-13 01:46:40.156602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.721 qpair failed and we were unable to recover it. 00:35:54.721 [2024-10-13 01:46:40.156714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.721 [2024-10-13 01:46:40.156745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.721 qpair failed and we were unable to recover it. 00:35:54.721 [2024-10-13 01:46:40.156856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.721 [2024-10-13 01:46:40.156882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.721 qpair failed and we were unable to recover it. 
00:35:54.721 [2024-10-13 01:46:40.156976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.721 [2024-10-13 01:46:40.157002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.721 qpair failed and we were unable to recover it. 00:35:54.721 [2024-10-13 01:46:40.157127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.721 [2024-10-13 01:46:40.157154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.721 qpair failed and we were unable to recover it. 00:35:54.721 [2024-10-13 01:46:40.157273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.721 [2024-10-13 01:46:40.157300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.721 qpair failed and we were unable to recover it. 00:35:54.721 [2024-10-13 01:46:40.157391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.721 [2024-10-13 01:46:40.157418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.721 qpair failed and we were unable to recover it. 00:35:54.721 [2024-10-13 01:46:40.157517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.721 [2024-10-13 01:46:40.157543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.721 qpair failed and we were unable to recover it. 00:35:54.721 [2024-10-13 01:46:40.157628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.721 [2024-10-13 01:46:40.157655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.721 qpair failed and we were unable to recover it. 00:35:54.721 [2024-10-13 01:46:40.157736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.721 [2024-10-13 01:46:40.157762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.721 qpair failed and we were unable to recover it. 00:35:54.721 [2024-10-13 01:46:40.157854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.721 [2024-10-13 01:46:40.157880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.721 qpair failed and we were unable to recover it. 00:35:54.721 [2024-10-13 01:46:40.157960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.721 [2024-10-13 01:46:40.157986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.721 qpair failed and we were unable to recover it. 00:35:54.721 [2024-10-13 01:46:40.158096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.721 [2024-10-13 01:46:40.158125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.721 qpair failed and we were unable to recover it. 
00:35:54.721 [2024-10-13 01:46:40.158233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.721 [2024-10-13 01:46:40.158262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.721 qpair failed and we were unable to recover it. 00:35:54.721 [2024-10-13 01:46:40.158353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.721 [2024-10-13 01:46:40.158379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.721 qpair failed and we were unable to recover it. 00:35:54.721 [2024-10-13 01:46:40.158500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.721 [2024-10-13 01:46:40.158527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.721 qpair failed and we were unable to recover it. 00:35:54.721 [2024-10-13 01:46:40.158638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.721 [2024-10-13 01:46:40.158665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.721 qpair failed and we were unable to recover it. 00:35:54.721 [2024-10-13 01:46:40.158742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.721 [2024-10-13 01:46:40.158767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.721 qpair failed and we were unable to recover it. 00:35:54.721 [2024-10-13 01:46:40.158854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.721 [2024-10-13 01:46:40.158879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.721 qpair failed and we were unable to recover it. 00:35:54.721 [2024-10-13 01:46:40.159003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.721 [2024-10-13 01:46:40.159030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.721 qpair failed and we were unable to recover it. 00:35:54.721 [2024-10-13 01:46:40.159115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.721 [2024-10-13 01:46:40.159141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.721 qpair failed and we were unable to recover it. 00:35:54.721 [2024-10-13 01:46:40.159259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.721 [2024-10-13 01:46:40.159286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.721 qpair failed and we were unable to recover it. 00:35:54.721 [2024-10-13 01:46:40.159407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.721 [2024-10-13 01:46:40.159434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.721 qpair failed and we were unable to recover it. 
00:35:54.721 [2024-10-13 01:46:40.159532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.721 [2024-10-13 01:46:40.159558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.721 qpair failed and we were unable to recover it. 00:35:54.721 [2024-10-13 01:46:40.159653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.721 [2024-10-13 01:46:40.159679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.721 qpair failed and we were unable to recover it. 00:35:54.721 [2024-10-13 01:46:40.159769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.721 [2024-10-13 01:46:40.159795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.721 qpair failed and we were unable to recover it. 00:35:54.721 [2024-10-13 01:46:40.159911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.721 [2024-10-13 01:46:40.159936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.721 qpair failed and we were unable to recover it. 00:35:54.721 [2024-10-13 01:46:40.160078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.721 [2024-10-13 01:46:40.160104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.721 qpair failed and we were unable to recover it. 00:35:54.721 [2024-10-13 01:46:40.160226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.721 [2024-10-13 01:46:40.160254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.721 qpair failed and we were unable to recover it. 00:35:54.721 [2024-10-13 01:46:40.160346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.721 [2024-10-13 01:46:40.160371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.721 qpair failed and we were unable to recover it. 00:35:54.721 [2024-10-13 01:46:40.160459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.721 [2024-10-13 01:46:40.160492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.721 qpair failed and we were unable to recover it. 00:35:54.721 [2024-10-13 01:46:40.160576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.721 [2024-10-13 01:46:40.160602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.721 qpair failed and we were unable to recover it. 00:35:54.721 [2024-10-13 01:46:40.160720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.722 [2024-10-13 01:46:40.160746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.722 qpair failed and we were unable to recover it. 
00:35:54.722 [2024-10-13 01:46:40.160829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.722 [2024-10-13 01:46:40.160864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.722 qpair failed and we were unable to recover it. 00:35:54.722 [2024-10-13 01:46:40.161003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.722 [2024-10-13 01:46:40.161030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.722 qpair failed and we were unable to recover it. 00:35:54.722 [2024-10-13 01:46:40.161123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.722 [2024-10-13 01:46:40.161150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.722 qpair failed and we were unable to recover it. 00:35:54.722 [2024-10-13 01:46:40.161235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.722 [2024-10-13 01:46:40.161266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.722 qpair failed and we were unable to recover it. 00:35:54.722 [2024-10-13 01:46:40.161384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.722 [2024-10-13 01:46:40.161410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.722 qpair failed and we were unable to recover it. 00:35:54.722 [2024-10-13 01:46:40.161532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.722 [2024-10-13 01:46:40.161558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.722 qpair failed and we were unable to recover it. 00:35:54.722 [2024-10-13 01:46:40.161651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.722 [2024-10-13 01:46:40.161676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.722 qpair failed and we were unable to recover it. 00:35:54.722 [2024-10-13 01:46:40.161767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.722 [2024-10-13 01:46:40.161796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.722 qpair failed and we were unable to recover it. 00:35:54.722 [2024-10-13 01:46:40.161894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.722 [2024-10-13 01:46:40.161924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.722 qpair failed and we were unable to recover it. 00:35:54.722 [2024-10-13 01:46:40.162021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.722 [2024-10-13 01:46:40.162047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.722 qpair failed and we were unable to recover it. 
00:35:54.722 [2024-10-13 01:46:40.162139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.722 [2024-10-13 01:46:40.162165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.722 qpair failed and we were unable to recover it. 00:35:54.722 [2024-10-13 01:46:40.162254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.722 [2024-10-13 01:46:40.162281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.722 qpair failed and we were unable to recover it. 00:35:54.722 [2024-10-13 01:46:40.162378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.722 [2024-10-13 01:46:40.162404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.722 qpair failed and we were unable to recover it. 00:35:54.722 [2024-10-13 01:46:40.162505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.722 [2024-10-13 01:46:40.162531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.722 qpair failed and we were unable to recover it. 00:35:54.722 [2024-10-13 01:46:40.162613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.722 [2024-10-13 01:46:40.162638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.722 qpair failed and we were unable to recover it. 00:35:54.722 [2024-10-13 01:46:40.162724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.722 [2024-10-13 01:46:40.162750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.722 qpair failed and we were unable to recover it. 00:35:54.722 [2024-10-13 01:46:40.162895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.722 [2024-10-13 01:46:40.162921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.722 qpair failed and we were unable to recover it. 00:35:54.722 [2024-10-13 01:46:40.163033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.722 [2024-10-13 01:46:40.163059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.722 qpair failed and we were unable to recover it. 00:35:54.722 [2024-10-13 01:46:40.163184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.722 [2024-10-13 01:46:40.163211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.722 qpair failed and we were unable to recover it. 00:35:54.722 [2024-10-13 01:46:40.163339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.722 [2024-10-13 01:46:40.163365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.722 qpair failed and we were unable to recover it. 
00:35:54.722 [2024-10-13 01:46:40.163447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.722 [2024-10-13 01:46:40.163479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.722 qpair failed and we were unable to recover it. 00:35:54.722 [2024-10-13 01:46:40.163597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.722 [2024-10-13 01:46:40.163624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.722 qpair failed and we were unable to recover it. 00:35:54.722 [2024-10-13 01:46:40.163828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.722 [2024-10-13 01:46:40.163855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.722 qpair failed and we were unable to recover it. 00:35:54.722 [2024-10-13 01:46:40.163947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.722 [2024-10-13 01:46:40.163972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.722 qpair failed and we were unable to recover it. 00:35:54.722 [2024-10-13 01:46:40.164098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.722 [2024-10-13 01:46:40.164124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.722 qpair failed and we were unable to recover it. 00:35:54.722 [2024-10-13 01:46:40.164222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.722 [2024-10-13 01:46:40.164252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.722 qpair failed and we were unable to recover it. 00:35:54.722 [2024-10-13 01:46:40.164402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.722 [2024-10-13 01:46:40.164429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.722 qpair failed and we were unable to recover it. 00:35:54.722 [2024-10-13 01:46:40.164527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.722 [2024-10-13 01:46:40.164554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.722 qpair failed and we were unable to recover it. 00:35:54.722 [2024-10-13 01:46:40.164645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.722 [2024-10-13 01:46:40.164670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.722 qpair failed and we were unable to recover it. 00:35:54.722 [2024-10-13 01:46:40.164760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.722 [2024-10-13 01:46:40.164786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.722 qpair failed and we were unable to recover it. 
00:35:54.722 [2024-10-13 01:46:40.164869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.722 [2024-10-13 01:46:40.164896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.722 qpair failed and we were unable to recover it. 00:35:54.722 [2024-10-13 01:46:40.165022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.722 [2024-10-13 01:46:40.165047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.722 qpair failed and we were unable to recover it. 00:35:54.722 [2024-10-13 01:46:40.165160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.722 [2024-10-13 01:46:40.165187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.722 qpair failed and we were unable to recover it. 00:35:54.722 [2024-10-13 01:46:40.165301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.722 [2024-10-13 01:46:40.165327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.722 qpair failed and we were unable to recover it. 00:35:54.722 [2024-10-13 01:46:40.165438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.722 [2024-10-13 01:46:40.165463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.722 qpair failed and we were unable to recover it. 00:35:54.722 [2024-10-13 01:46:40.165591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.722 [2024-10-13 01:46:40.165621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.722 qpair failed and we were unable to recover it. 00:35:54.722 [2024-10-13 01:46:40.165714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.722 [2024-10-13 01:46:40.165741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.722 qpair failed and we were unable to recover it. 00:35:54.722 [2024-10-13 01:46:40.165853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.722 [2024-10-13 01:46:40.165880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.722 qpair failed and we were unable to recover it. 00:35:54.722 [2024-10-13 01:46:40.166002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.722 [2024-10-13 01:46:40.166037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.722 qpair failed and we were unable to recover it. 00:35:54.722 [2024-10-13 01:46:40.166172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.723 [2024-10-13 01:46:40.166208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.723 qpair failed and we were unable to recover it. 
00:35:54.723 [2024-10-13 01:46:40.166325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.723 [2024-10-13 01:46:40.166360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.723 qpair failed and we were unable to recover it. 00:35:54.723 [2024-10-13 01:46:40.166466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.723 [2024-10-13 01:46:40.166505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.723 qpair failed and we were unable to recover it. 00:35:54.723 [2024-10-13 01:46:40.166594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.723 [2024-10-13 01:46:40.166620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.723 qpair failed and we were unable to recover it. 00:35:54.723 [2024-10-13 01:46:40.166715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.723 [2024-10-13 01:46:40.166740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.723 qpair failed and we were unable to recover it. 00:35:54.723 [2024-10-13 01:46:40.166900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.723 [2024-10-13 01:46:40.166926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.723 qpair failed and we were unable to recover it. 00:35:54.723 [2024-10-13 01:46:40.167037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.723 [2024-10-13 01:46:40.167078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.723 qpair failed and we were unable to recover it. 00:35:54.723 [2024-10-13 01:46:40.167170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.723 [2024-10-13 01:46:40.167199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.723 qpair failed and we were unable to recover it. 00:35:54.723 [2024-10-13 01:46:40.167293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.723 [2024-10-13 01:46:40.167319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.723 qpair failed and we were unable to recover it. 00:35:54.723 [2024-10-13 01:46:40.167408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.723 [2024-10-13 01:46:40.167439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.723 qpair failed and we were unable to recover it. 00:35:54.723 [2024-10-13 01:46:40.167551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.723 [2024-10-13 01:46:40.167583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.723 qpair failed and we were unable to recover it. 
00:35:54.723 [2024-10-13 01:46:40.167673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.723 [2024-10-13 01:46:40.167701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.723 qpair failed and we were unable to recover it. 00:35:54.723 [2024-10-13 01:46:40.167785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.723 [2024-10-13 01:46:40.167811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.723 qpair failed and we were unable to recover it. 00:35:54.723 [2024-10-13 01:46:40.167902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.723 [2024-10-13 01:46:40.167928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.723 qpair failed and we were unable to recover it. 00:35:54.723 [2024-10-13 01:46:40.168043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.723 [2024-10-13 01:46:40.168068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.723 qpair failed and we were unable to recover it. 00:35:54.723 [2024-10-13 01:46:40.168178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.723 [2024-10-13 01:46:40.168203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.723 qpair failed and we were unable to recover it. 00:35:54.723 [2024-10-13 01:46:40.168304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.723 [2024-10-13 01:46:40.168329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.723 qpair failed and we were unable to recover it. 00:35:54.723 [2024-10-13 01:46:40.168408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.723 [2024-10-13 01:46:40.168433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.723 qpair failed and we were unable to recover it. 00:35:54.723 [2024-10-13 01:46:40.168539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.723 [2024-10-13 01:46:40.168565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.723 qpair failed and we were unable to recover it. 00:35:54.723 [2024-10-13 01:46:40.168654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.723 [2024-10-13 01:46:40.168679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.723 qpair failed and we were unable to recover it. 00:35:54.723 [2024-10-13 01:46:40.168807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.723 [2024-10-13 01:46:40.168832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.723 qpair failed and we were unable to recover it. 
00:35:54.723 [2024-10-13 01:46:40.168929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.723 [2024-10-13 01:46:40.168955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.723 qpair failed and we were unable to recover it. 00:35:54.723 [2024-10-13 01:46:40.169079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.723 [2024-10-13 01:46:40.169105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.723 qpair failed and we were unable to recover it. 00:35:54.723 [2024-10-13 01:46:40.169200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.723 [2024-10-13 01:46:40.169225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.723 qpair failed and we were unable to recover it. 00:35:54.723 [2024-10-13 01:46:40.169315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.723 [2024-10-13 01:46:40.169343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.723 qpair failed and we were unable to recover it. 00:35:54.723 [2024-10-13 01:46:40.169422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.723 [2024-10-13 01:46:40.169447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.723 qpair failed and we were unable to recover it. 00:35:54.723 [2024-10-13 01:46:40.169556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.723 [2024-10-13 01:46:40.169590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.723 qpair failed and we were unable to recover it. 00:35:54.723 [2024-10-13 01:46:40.169688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.723 [2024-10-13 01:46:40.169715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.723 qpair failed and we were unable to recover it. 00:35:54.723 [2024-10-13 01:46:40.169806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.723 [2024-10-13 01:46:40.169832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.723 qpair failed and we were unable to recover it. 00:35:54.723 [2024-10-13 01:46:40.169929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.723 [2024-10-13 01:46:40.169956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.723 qpair failed and we were unable to recover it. 00:35:54.723 [2024-10-13 01:46:40.170038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.723 [2024-10-13 01:46:40.170074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.723 qpair failed and we were unable to recover it. 
00:35:54.723 [2024-10-13 01:46:40.170217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.723 [2024-10-13 01:46:40.170252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.723 qpair failed and we were unable to recover it. 00:35:54.723 [2024-10-13 01:46:40.170374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.723 [2024-10-13 01:46:40.170400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.723 qpair failed and we were unable to recover it. 00:35:54.723 [2024-10-13 01:46:40.170515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.723 [2024-10-13 01:46:40.170544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.723 qpair failed and we were unable to recover it. 00:35:54.723 [2024-10-13 01:46:40.170640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.723 [2024-10-13 01:46:40.170665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.723 qpair failed and we were unable to recover it. 00:35:54.724 [2024-10-13 01:46:40.170795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.724 [2024-10-13 01:46:40.170822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.724 qpair failed and we were unable to recover it. 00:35:54.724 [2024-10-13 01:46:40.170934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.724 [2024-10-13 01:46:40.170964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.724 qpair failed and we were unable to recover it. 00:35:54.724 [2024-10-13 01:46:40.171053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.724 [2024-10-13 01:46:40.171081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.724 qpair failed and we were unable to recover it. 00:35:54.724 [2024-10-13 01:46:40.171195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.724 [2024-10-13 01:46:40.171221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.724 qpair failed and we were unable to recover it. 00:35:54.724 [2024-10-13 01:46:40.171305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.724 [2024-10-13 01:46:40.171330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.724 qpair failed and we were unable to recover it. 00:35:54.724 [2024-10-13 01:46:40.171416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.724 [2024-10-13 01:46:40.171442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.724 qpair failed and we were unable to recover it. 
00:35:54.724 [2024-10-13 01:46:40.171589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.724 [2024-10-13 01:46:40.171616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.724 qpair failed and we were unable to recover it. 00:35:54.724 [2024-10-13 01:46:40.171697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.724 [2024-10-13 01:46:40.171722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.724 qpair failed and we were unable to recover it. 00:35:54.724 [2024-10-13 01:46:40.171825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.724 [2024-10-13 01:46:40.171851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.724 qpair failed and we were unable to recover it. 00:35:54.724 [2024-10-13 01:46:40.171952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.724 [2024-10-13 01:46:40.171977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.724 qpair failed and we were unable to recover it. 00:35:54.724 [2024-10-13 01:46:40.172124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.724 [2024-10-13 01:46:40.172149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.724 qpair failed and we were unable to recover it. 00:35:54.724 [2024-10-13 01:46:40.172243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.724 [2024-10-13 01:46:40.172268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.724 qpair failed and we were unable to recover it. 00:35:54.724 [2024-10-13 01:46:40.172351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.724 [2024-10-13 01:46:40.172381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.724 qpair failed and we were unable to recover it. 00:35:54.724 [2024-10-13 01:46:40.172514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.724 [2024-10-13 01:46:40.172540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.724 qpair failed and we were unable to recover it. 00:35:54.724 [2024-10-13 01:46:40.172624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.724 [2024-10-13 01:46:40.172652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.724 qpair failed and we were unable to recover it. 00:35:54.724 [2024-10-13 01:46:40.172763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.724 [2024-10-13 01:46:40.172789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.724 qpair failed and we were unable to recover it. 
00:35:54.724 [2024-10-13 01:46:40.172874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.724 [2024-10-13 01:46:40.172900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.724 qpair failed and we were unable to recover it. 00:35:54.724 [2024-10-13 01:46:40.172985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.724 [2024-10-13 01:46:40.173010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.724 qpair failed and we were unable to recover it. 00:35:54.724 [2024-10-13 01:46:40.173105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.724 [2024-10-13 01:46:40.173130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.724 qpair failed and we were unable to recover it. 00:35:54.724 [2024-10-13 01:46:40.173216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.724 [2024-10-13 01:46:40.173242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.724 qpair failed and we were unable to recover it. 00:35:54.724 [2024-10-13 01:46:40.173390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.724 [2024-10-13 01:46:40.173415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.724 qpair failed and we were unable to recover it. 00:35:54.724 [2024-10-13 01:46:40.173520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.724 [2024-10-13 01:46:40.173547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.724 qpair failed and we were unable to recover it. 00:35:54.724 [2024-10-13 01:46:40.173630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.724 [2024-10-13 01:46:40.173655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.724 qpair failed and we were unable to recover it. 00:35:54.724 [2024-10-13 01:46:40.173784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.724 [2024-10-13 01:46:40.173809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.724 qpair failed and we were unable to recover it. 00:35:54.724 [2024-10-13 01:46:40.173924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.724 [2024-10-13 01:46:40.173949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.724 qpair failed and we were unable to recover it. 00:35:54.724 [2024-10-13 01:46:40.174034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.724 [2024-10-13 01:46:40.174060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.724 qpair failed and we were unable to recover it. 
00:35:54.724 [2024-10-13 01:46:40.174150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.724 [2024-10-13 01:46:40.174175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.724 qpair failed and we were unable to recover it. 00:35:54.724 [2024-10-13 01:46:40.174257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.724 [2024-10-13 01:46:40.174282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.724 qpair failed and we were unable to recover it. 00:35:54.724 [2024-10-13 01:46:40.174410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.724 [2024-10-13 01:46:40.174436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.724 qpair failed and we were unable to recover it. 00:35:54.724 [2024-10-13 01:46:40.174539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.724 [2024-10-13 01:46:40.174567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.724 qpair failed and we were unable to recover it. 00:35:54.724 [2024-10-13 01:46:40.174655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.724 [2024-10-13 01:46:40.174680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.724 qpair failed and we were unable to recover it. 00:35:54.724 [2024-10-13 01:46:40.174766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.724 [2024-10-13 01:46:40.174792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.724 qpair failed and we were unable to recover it. 00:35:54.724 [2024-10-13 01:46:40.174924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.724 [2024-10-13 01:46:40.174949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.724 qpair failed and we were unable to recover it. 00:35:54.724 [2024-10-13 01:46:40.175052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.724 [2024-10-13 01:46:40.175077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.724 qpair failed and we were unable to recover it. 00:35:54.724 [2024-10-13 01:46:40.175204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.724 [2024-10-13 01:46:40.175242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.724 qpair failed and we were unable to recover it. 00:35:54.724 [2024-10-13 01:46:40.175378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.724 [2024-10-13 01:46:40.175405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.724 qpair failed and we were unable to recover it. 
00:35:54.724 [2024-10-13 01:46:40.175513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.724 [2024-10-13 01:46:40.175543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.724 qpair failed and we were unable to recover it. 00:35:54.724 [2024-10-13 01:46:40.175670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.724 [2024-10-13 01:46:40.175697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.724 qpair failed and we were unable to recover it. 00:35:54.724 [2024-10-13 01:46:40.175827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.724 [2024-10-13 01:46:40.175852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.724 qpair failed and we were unable to recover it. 00:35:54.724 [2024-10-13 01:46:40.175976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.724 [2024-10-13 01:46:40.176003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.724 qpair failed and we were unable to recover it. 00:35:54.724 [2024-10-13 01:46:40.176119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.725 [2024-10-13 01:46:40.176144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.725 qpair failed and we were unable to recover it. 00:35:54.725 [2024-10-13 01:46:40.176264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.725 [2024-10-13 01:46:40.176289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.725 qpair failed and we were unable to recover it. 00:35:54.725 [2024-10-13 01:46:40.176422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.725 [2024-10-13 01:46:40.176475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.725 qpair failed and we were unable to recover it. 00:35:54.725 [2024-10-13 01:46:40.176576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.725 [2024-10-13 01:46:40.176603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.725 qpair failed and we were unable to recover it. 00:35:54.725 [2024-10-13 01:46:40.176699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.725 [2024-10-13 01:46:40.176724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.725 qpair failed and we were unable to recover it. 00:35:54.725 [2024-10-13 01:46:40.176817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.725 [2024-10-13 01:46:40.176841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.725 qpair failed and we were unable to recover it. 
00:35:54.725 [2024-10-13 01:46:40.176939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.725 [2024-10-13 01:46:40.176965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.725 qpair failed and we were unable to recover it. 00:35:54.725 [2024-10-13 01:46:40.177052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.725 [2024-10-13 01:46:40.177078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.725 qpair failed and we were unable to recover it. 00:35:54.725 [2024-10-13 01:46:40.177164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.725 [2024-10-13 01:46:40.177189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.725 qpair failed and we were unable to recover it. 00:35:54.725 [2024-10-13 01:46:40.177274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.725 [2024-10-13 01:46:40.177300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.725 qpair failed and we were unable to recover it. 00:35:54.725 [2024-10-13 01:46:40.177393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.725 [2024-10-13 01:46:40.177420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.725 qpair failed and we were unable to recover it. 00:35:54.725 [2024-10-13 01:46:40.177523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.725 [2024-10-13 01:46:40.177550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.725 qpair failed and we were unable to recover it. 00:35:54.725 [2024-10-13 01:46:40.177678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.725 [2024-10-13 01:46:40.177704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.725 qpair failed and we were unable to recover it. 00:35:54.725 [2024-10-13 01:46:40.177782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.725 [2024-10-13 01:46:40.177807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.725 qpair failed and we were unable to recover it. 00:35:54.725 [2024-10-13 01:46:40.177954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.725 [2024-10-13 01:46:40.177979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.725 qpair failed and we were unable to recover it. 00:35:54.725 [2024-10-13 01:46:40.178070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.725 [2024-10-13 01:46:40.178100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.725 qpair failed and we were unable to recover it. 
00:35:54.725 [2024-10-13 01:46:40.178224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.725 [2024-10-13 01:46:40.178250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.725 qpair failed and we were unable to recover it. 00:35:54.725 [2024-10-13 01:46:40.178379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.725 [2024-10-13 01:46:40.178405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.725 qpair failed and we were unable to recover it. 00:35:54.725 [2024-10-13 01:46:40.178523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.725 [2024-10-13 01:46:40.178550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.725 qpair failed and we were unable to recover it. 00:35:54.725 [2024-10-13 01:46:40.178637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.725 [2024-10-13 01:46:40.178663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.725 qpair failed and we were unable to recover it. 00:35:54.725 [2024-10-13 01:46:40.178753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.725 [2024-10-13 01:46:40.178778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.725 qpair failed and we were unable to recover it. 00:35:54.725 [2024-10-13 01:46:40.178871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.725 [2024-10-13 01:46:40.178898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.725 qpair failed and we were unable to recover it. 00:35:54.725 [2024-10-13 01:46:40.179002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.725 [2024-10-13 01:46:40.179029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.725 qpair failed and we were unable to recover it. 00:35:54.725 [2024-10-13 01:46:40.179172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.725 [2024-10-13 01:46:40.179198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.725 qpair failed and we were unable to recover it. 00:35:54.725 [2024-10-13 01:46:40.179294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.725 [2024-10-13 01:46:40.179319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.725 qpair failed and we were unable to recover it. 00:35:54.725 [2024-10-13 01:46:40.179407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.725 [2024-10-13 01:46:40.179432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.725 qpair failed and we were unable to recover it. 
00:35:54.725 [2024-10-13 01:46:40.179540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.725 [2024-10-13 01:46:40.179567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.725 qpair failed and we were unable to recover it. 00:35:54.725 [2024-10-13 01:46:40.179659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.725 [2024-10-13 01:46:40.179685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.725 qpair failed and we were unable to recover it. 00:35:54.725 [2024-10-13 01:46:40.179768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.725 [2024-10-13 01:46:40.179793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.725 qpair failed and we were unable to recover it. 00:35:54.725 [2024-10-13 01:46:40.179915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.725 [2024-10-13 01:46:40.179941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.725 qpair failed and we were unable to recover it. 00:35:54.725 [2024-10-13 01:46:40.180021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.725 [2024-10-13 01:46:40.180047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.725 qpair failed and we were unable to recover it. 00:35:54.725 [2024-10-13 01:46:40.180162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.725 [2024-10-13 01:46:40.180188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.725 qpair failed and we were unable to recover it. 00:35:54.725 [2024-10-13 01:46:40.180278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.725 [2024-10-13 01:46:40.180317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.725 qpair failed and we were unable to recover it. 00:35:54.725 [2024-10-13 01:46:40.180412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.725 [2024-10-13 01:46:40.180438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.725 qpair failed and we were unable to recover it. 00:35:54.725 [2024-10-13 01:46:40.180541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.725 [2024-10-13 01:46:40.180567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.725 qpair failed and we were unable to recover it. 00:35:54.725 [2024-10-13 01:46:40.180661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.725 [2024-10-13 01:46:40.180686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.725 qpair failed and we were unable to recover it. 
00:35:54.725 [2024-10-13 01:46:40.180768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.725 [2024-10-13 01:46:40.180793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.725 qpair failed and we were unable to recover it. 00:35:54.725 [2024-10-13 01:46:40.180879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.725 [2024-10-13 01:46:40.180904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.725 qpair failed and we were unable to recover it. 00:35:54.725 [2024-10-13 01:46:40.180988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.725 [2024-10-13 01:46:40.181013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.725 qpair failed and we were unable to recover it. 00:35:54.725 [2024-10-13 01:46:40.181106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.725 [2024-10-13 01:46:40.181132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.725 qpair failed and we were unable to recover it. 00:35:54.725 [2024-10-13 01:46:40.181270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.725 [2024-10-13 01:46:40.181296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.726 qpair failed and we were unable to recover it. 00:35:54.726 [2024-10-13 01:46:40.181383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.726 [2024-10-13 01:46:40.181408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.726 qpair failed and we were unable to recover it. 00:35:54.726 [2024-10-13 01:46:40.181506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.726 [2024-10-13 01:46:40.181532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.726 qpair failed and we were unable to recover it. 00:35:54.726 [2024-10-13 01:46:40.181625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.726 [2024-10-13 01:46:40.181651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.726 qpair failed and we were unable to recover it. 00:35:54.726 [2024-10-13 01:46:40.181739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.726 [2024-10-13 01:46:40.181764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.726 qpair failed and we were unable to recover it. 00:35:54.726 [2024-10-13 01:46:40.181853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.726 [2024-10-13 01:46:40.181879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.726 qpair failed and we were unable to recover it. 
00:35:54.726 [2024-10-13 01:46:40.181977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.726 [2024-10-13 01:46:40.182002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.726 qpair failed and we were unable to recover it. 00:35:54.726 [2024-10-13 01:46:40.182089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.726 [2024-10-13 01:46:40.182114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.726 qpair failed and we were unable to recover it. 00:35:54.726 [2024-10-13 01:46:40.182226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.726 [2024-10-13 01:46:40.182251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.726 qpair failed and we were unable to recover it. 00:35:54.726 [2024-10-13 01:46:40.182333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.726 [2024-10-13 01:46:40.182358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.726 qpair failed and we were unable to recover it. 00:35:54.726 [2024-10-13 01:46:40.182467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.726 [2024-10-13 01:46:40.182499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.726 qpair failed and we were unable to recover it. 00:35:54.726 [2024-10-13 01:46:40.182588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.726 [2024-10-13 01:46:40.182613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.726 qpair failed and we were unable to recover it. 00:35:54.726 [2024-10-13 01:46:40.182708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.726 [2024-10-13 01:46:40.182736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.726 qpair failed and we were unable to recover it. 00:35:54.726 [2024-10-13 01:46:40.182849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.726 [2024-10-13 01:46:40.182875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.726 qpair failed and we were unable to recover it. 00:35:54.726 [2024-10-13 01:46:40.182992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.726 [2024-10-13 01:46:40.183018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.726 qpair failed and we were unable to recover it. 00:35:54.726 [2024-10-13 01:46:40.183104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.726 [2024-10-13 01:46:40.183128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.726 qpair failed and we were unable to recover it. 
00:35:54.726 [2024-10-13 01:46:40.183247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.726 [2024-10-13 01:46:40.183273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.726 qpair failed and we were unable to recover it. 00:35:54.726 [2024-10-13 01:46:40.183396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.726 [2024-10-13 01:46:40.183422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.726 qpair failed and we were unable to recover it. 00:35:54.726 [2024-10-13 01:46:40.183526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.726 [2024-10-13 01:46:40.183553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.726 qpair failed and we were unable to recover it. 00:35:54.726 [2024-10-13 01:46:40.183642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.726 [2024-10-13 01:46:40.183667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.726 qpair failed and we were unable to recover it. 00:35:54.726 [2024-10-13 01:46:40.183755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.726 [2024-10-13 01:46:40.183787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.726 qpair failed and we were unable to recover it. 00:35:54.726 [2024-10-13 01:46:40.183877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.726 [2024-10-13 01:46:40.183903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.726 qpair failed and we were unable to recover it. 00:35:54.726 [2024-10-13 01:46:40.183990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.726 [2024-10-13 01:46:40.184016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.726 qpair failed and we were unable to recover it. 00:35:54.726 [2024-10-13 01:46:40.184132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.726 [2024-10-13 01:46:40.184157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.726 qpair failed and we were unable to recover it. 00:35:54.726 [2024-10-13 01:46:40.184297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.726 [2024-10-13 01:46:40.184323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.726 qpair failed and we were unable to recover it. 00:35:54.726 [2024-10-13 01:46:40.184414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.726 [2024-10-13 01:46:40.184439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.726 qpair failed and we were unable to recover it. 
00:35:54.726 [2024-10-13 01:46:40.184544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.726 [2024-10-13 01:46:40.184570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.726 qpair failed and we were unable to recover it. 00:35:54.726 [2024-10-13 01:46:40.184652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.726 [2024-10-13 01:46:40.184677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.726 qpair failed and we were unable to recover it. 00:35:54.726 [2024-10-13 01:46:40.184790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.726 [2024-10-13 01:46:40.184816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.726 qpair failed and we were unable to recover it. 00:35:54.726 [2024-10-13 01:46:40.184907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.726 [2024-10-13 01:46:40.184933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.726 qpair failed and we were unable to recover it. 00:35:54.726 [2024-10-13 01:46:40.185024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.726 [2024-10-13 01:46:40.185049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.726 qpair failed and we were unable to recover it. 00:35:54.726 [2024-10-13 01:46:40.185156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.726 [2024-10-13 01:46:40.185181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.726 qpair failed and we were unable to recover it. 00:35:54.726 [2024-10-13 01:46:40.185282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.726 [2024-10-13 01:46:40.185308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.726 qpair failed and we were unable to recover it. 00:35:54.726 [2024-10-13 01:46:40.185400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.726 [2024-10-13 01:46:40.185425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.726 qpair failed and we were unable to recover it. 00:35:54.726 [2024-10-13 01:46:40.185532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.726 [2024-10-13 01:46:40.185572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.726 qpair failed and we were unable to recover it. 00:35:54.726 [2024-10-13 01:46:40.185665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.726 [2024-10-13 01:46:40.185693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.726 qpair failed and we were unable to recover it. 
00:35:54.726 [2024-10-13 01:46:40.185822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.726 [2024-10-13 01:46:40.185848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.726 qpair failed and we were unable to recover it. 00:35:54.726 [2024-10-13 01:46:40.185975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.726 [2024-10-13 01:46:40.186003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.726 qpair failed and we were unable to recover it. 00:35:54.726 [2024-10-13 01:46:40.186097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.726 [2024-10-13 01:46:40.186124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.726 qpair failed and we were unable to recover it. 00:35:54.726 [2024-10-13 01:46:40.186242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.726 [2024-10-13 01:46:40.186268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.726 qpair failed and we were unable to recover it. 00:35:54.726 [2024-10-13 01:46:40.186381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.726 [2024-10-13 01:46:40.186407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.726 qpair failed and we were unable to recover it. 00:35:54.726 [2024-10-13 01:46:40.186516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.727 [2024-10-13 01:46:40.186542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.727 qpair failed and we were unable to recover it. 00:35:54.727 [2024-10-13 01:46:40.186625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.727 [2024-10-13 01:46:40.186650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.727 qpair failed and we were unable to recover it. 00:35:54.727 [2024-10-13 01:46:40.186743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.727 [2024-10-13 01:46:40.186769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.727 qpair failed and we were unable to recover it. 00:35:54.727 [2024-10-13 01:46:40.186886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.727 [2024-10-13 01:46:40.186912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.727 qpair failed and we were unable to recover it. 00:35:54.727 [2024-10-13 01:46:40.187003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.727 [2024-10-13 01:46:40.187032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.727 qpair failed and we were unable to recover it. 
00:35:54.727 [2024-10-13 01:46:40.187112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.727 [2024-10-13 01:46:40.187138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.727 qpair failed and we were unable to recover it. 00:35:54.727 [2024-10-13 01:46:40.187222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.727 [2024-10-13 01:46:40.187248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.727 qpair failed and we were unable to recover it. 00:35:54.727 [2024-10-13 01:46:40.187339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.727 [2024-10-13 01:46:40.187365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.727 qpair failed and we were unable to recover it. 00:35:54.727 [2024-10-13 01:46:40.187480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.727 [2024-10-13 01:46:40.187506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.727 qpair failed and we were unable to recover it. 00:35:54.727 [2024-10-13 01:46:40.187590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.727 [2024-10-13 01:46:40.187616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.727 qpair failed and we were unable to recover it. 00:35:54.727 [2024-10-13 01:46:40.187704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.727 [2024-10-13 01:46:40.187730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.727 qpair failed and we were unable to recover it. 00:35:54.727 [2024-10-13 01:46:40.187728] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:35:54.727 [2024-10-13 01:46:40.187802] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:54.727 [2024-10-13 01:46:40.187883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.727 [2024-10-13 01:46:40.187907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.727 qpair failed and we were unable to recover it. 00:35:54.727 [2024-10-13 01:46:40.187997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.727 [2024-10-13 01:46:40.188021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.727 qpair failed and we were unable to recover it. 00:35:54.727 [2024-10-13 01:46:40.188095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.727 [2024-10-13 01:46:40.188119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.727 qpair failed and we were unable to recover it. 
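Interleaved with the connect retries, the record above also shows an SPDK nvmf application starting ("Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization") with DPDK EAL parameters including "-c 0xF0", "--file-prefix=spdk0" and "--proc-type=auto". The "-c" value is a hexadecimal core mask, so 0xF0 selects CPU cores 4 through 7 for this process. The short standalone C snippet below is illustrative only (it is not from SPDK or DPDK); it simply decodes such a mask into core numbers.

    /*
     * Illustrative helper (not part of SPDK/DPDK): decode the EAL core mask
     * shown above ("-c 0xF0") into the CPU core numbers it selects.
     */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        unsigned long mask = strtoul("0xF0", NULL, 16);   /* core mask from the EAL parameters */

        printf("core mask 0x%lx selects cores:", mask);
        for (int core = 0; mask != 0; core++, mask >>= 1) {
            if (mask & 1UL) {
                printf(" %d", core);
            }
        }
        printf("\n");                                     /* prints: core mask 0xf0 selects cores: 4 5 6 7 */
        return 0;
    }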
00:35:54.727 [2024-10-13 01:46:40.188213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.727 [2024-10-13 01:46:40.188241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.727 qpair failed and we were unable to recover it. 00:35:54.727 [2024-10-13 01:46:40.188331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.727 [2024-10-13 01:46:40.188356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.727 qpair failed and we were unable to recover it. 00:35:54.727 [2024-10-13 01:46:40.188435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.727 [2024-10-13 01:46:40.188464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.727 qpair failed and we were unable to recover it. 00:35:54.727 [2024-10-13 01:46:40.188568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.727 [2024-10-13 01:46:40.188592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.727 qpair failed and we were unable to recover it. 00:35:54.727 [2024-10-13 01:46:40.188699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.727 [2024-10-13 01:46:40.188724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.727 qpair failed and we were unable to recover it. 00:35:54.727 [2024-10-13 01:46:40.188813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.727 [2024-10-13 01:46:40.188838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.727 qpair failed and we were unable to recover it. 00:35:54.727 [2024-10-13 01:46:40.188974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.727 [2024-10-13 01:46:40.188999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.727 qpair failed and we were unable to recover it. 00:35:54.727 [2024-10-13 01:46:40.189079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.727 [2024-10-13 01:46:40.189104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.727 qpair failed and we were unable to recover it. 00:35:54.727 [2024-10-13 01:46:40.189223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.727 [2024-10-13 01:46:40.189249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.727 qpair failed and we were unable to recover it. 00:35:54.727 [2024-10-13 01:46:40.189335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.727 [2024-10-13 01:46:40.189361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.727 qpair failed and we were unable to recover it. 
00:35:54.727 [2024-10-13 01:46:40.189454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.727 [2024-10-13 01:46:40.189487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.727 qpair failed and we were unable to recover it. 00:35:54.727 [2024-10-13 01:46:40.189576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.727 [2024-10-13 01:46:40.189602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.727 qpair failed and we were unable to recover it. 00:35:54.727 [2024-10-13 01:46:40.189693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.727 [2024-10-13 01:46:40.189718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.727 qpair failed and we were unable to recover it. 00:35:54.727 [2024-10-13 01:46:40.189846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.727 [2024-10-13 01:46:40.189871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.727 qpair failed and we were unable to recover it. 00:35:54.727 [2024-10-13 01:46:40.189987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.727 [2024-10-13 01:46:40.190012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.727 qpair failed and we were unable to recover it. 00:35:54.727 [2024-10-13 01:46:40.190101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.727 [2024-10-13 01:46:40.190127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.727 qpair failed and we were unable to recover it. 00:35:54.727 [2024-10-13 01:46:40.190236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.727 [2024-10-13 01:46:40.190261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.727 qpair failed and we were unable to recover it. 00:35:54.727 [2024-10-13 01:46:40.190350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.727 [2024-10-13 01:46:40.190375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.727 qpair failed and we were unable to recover it. 00:35:54.727 [2024-10-13 01:46:40.190485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.727 [2024-10-13 01:46:40.190512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.727 qpair failed and we were unable to recover it. 00:35:54.727 [2024-10-13 01:46:40.190600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.727 [2024-10-13 01:46:40.190625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.727 qpair failed and we were unable to recover it. 
00:35:54.727 [2024-10-13 01:46:40.190709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.727 [2024-10-13 01:46:40.190734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.727 qpair failed and we were unable to recover it. 00:35:54.727 [2024-10-13 01:46:40.190859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.727 [2024-10-13 01:46:40.190884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.727 qpair failed and we were unable to recover it. 00:35:54.727 [2024-10-13 01:46:40.190997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.727 [2024-10-13 01:46:40.191023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.727 qpair failed and we were unable to recover it. 00:35:54.727 [2024-10-13 01:46:40.191128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.727 [2024-10-13 01:46:40.191153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.727 qpair failed and we were unable to recover it. 00:35:54.727 [2024-10-13 01:46:40.191292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.727 [2024-10-13 01:46:40.191331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.727 qpair failed and we were unable to recover it. 00:35:54.727 [2024-10-13 01:46:40.191479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.727 [2024-10-13 01:46:40.191521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.728 qpair failed and we were unable to recover it. 00:35:54.728 [2024-10-13 01:46:40.191625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.728 [2024-10-13 01:46:40.191652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.728 qpair failed and we were unable to recover it. 00:35:54.728 [2024-10-13 01:46:40.191746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.728 [2024-10-13 01:46:40.191778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.728 qpair failed and we were unable to recover it. 00:35:54.728 [2024-10-13 01:46:40.191903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.728 [2024-10-13 01:46:40.191932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.728 qpair failed and we were unable to recover it. 00:35:54.728 [2024-10-13 01:46:40.192060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.728 [2024-10-13 01:46:40.192086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.728 qpair failed and we were unable to recover it. 
00:35:54.728 [2024-10-13 01:46:40.192225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.728 [2024-10-13 01:46:40.192251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.728 qpair failed and we were unable to recover it. 00:35:54.728 [2024-10-13 01:46:40.192365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.728 [2024-10-13 01:46:40.192391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.728 qpair failed and we were unable to recover it. 00:35:54.728 [2024-10-13 01:46:40.192494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.728 [2024-10-13 01:46:40.192520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.728 qpair failed and we were unable to recover it. 00:35:54.728 [2024-10-13 01:46:40.192628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.728 [2024-10-13 01:46:40.192654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.728 qpair failed and we were unable to recover it. 00:35:54.728 [2024-10-13 01:46:40.192738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.728 [2024-10-13 01:46:40.192764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.728 qpair failed and we were unable to recover it. 00:35:54.728 [2024-10-13 01:46:40.192860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.728 [2024-10-13 01:46:40.192885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.728 qpair failed and we were unable to recover it. 00:35:54.728 [2024-10-13 01:46:40.193027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.728 [2024-10-13 01:46:40.193054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.728 qpair failed and we were unable to recover it. 00:35:54.728 [2024-10-13 01:46:40.193178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.728 [2024-10-13 01:46:40.193208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.728 qpair failed and we were unable to recover it. 00:35:54.728 [2024-10-13 01:46:40.193352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.728 [2024-10-13 01:46:40.193379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.728 qpair failed and we were unable to recover it. 00:35:54.728 [2024-10-13 01:46:40.193501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.728 [2024-10-13 01:46:40.193527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.728 qpair failed and we were unable to recover it. 
00:35:54.728 [2024-10-13 01:46:40.193613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.728 [2024-10-13 01:46:40.193639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.728 qpair failed and we were unable to recover it. 00:35:54.728 [2024-10-13 01:46:40.193755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.728 [2024-10-13 01:46:40.193803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.728 qpair failed and we were unable to recover it. 00:35:54.728 [2024-10-13 01:46:40.193899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.728 [2024-10-13 01:46:40.193925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.728 qpair failed and we were unable to recover it. 00:35:54.728 [2024-10-13 01:46:40.194049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.728 [2024-10-13 01:46:40.194075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.728 qpair failed and we were unable to recover it. 00:35:54.728 [2024-10-13 01:46:40.194166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.728 [2024-10-13 01:46:40.194192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.728 qpair failed and we were unable to recover it. 00:35:54.728 [2024-10-13 01:46:40.194280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.728 [2024-10-13 01:46:40.194306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.728 qpair failed and we were unable to recover it. 00:35:54.728 [2024-10-13 01:46:40.194408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.728 [2024-10-13 01:46:40.194447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.728 qpair failed and we were unable to recover it. 00:35:54.728 [2024-10-13 01:46:40.194545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.728 [2024-10-13 01:46:40.194572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.728 qpair failed and we were unable to recover it. 00:35:54.728 [2024-10-13 01:46:40.194658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.728 [2024-10-13 01:46:40.194684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.728 qpair failed and we were unable to recover it. 00:35:54.728 [2024-10-13 01:46:40.194768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.728 [2024-10-13 01:46:40.194795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.728 qpair failed and we were unable to recover it. 
00:35:54.728 [2024-10-13 01:46:40.194910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.728 [2024-10-13 01:46:40.194936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.728 qpair failed and we were unable to recover it. 00:35:54.728 [2024-10-13 01:46:40.195011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.728 [2024-10-13 01:46:40.195036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.728 qpair failed and we were unable to recover it. 00:35:54.728 [2024-10-13 01:46:40.195128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.728 [2024-10-13 01:46:40.195156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.728 qpair failed and we were unable to recover it. 00:35:54.728 [2024-10-13 01:46:40.195269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.728 [2024-10-13 01:46:40.195297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.728 qpair failed and we were unable to recover it. 00:35:54.728 [2024-10-13 01:46:40.195387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.728 [2024-10-13 01:46:40.195416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.728 qpair failed and we were unable to recover it. 00:35:54.728 [2024-10-13 01:46:40.195517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.728 [2024-10-13 01:46:40.195544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.728 qpair failed and we were unable to recover it. 00:35:54.728 [2024-10-13 01:46:40.195660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.728 [2024-10-13 01:46:40.195686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.728 qpair failed and we were unable to recover it. 00:35:54.728 [2024-10-13 01:46:40.195763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.728 [2024-10-13 01:46:40.195789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.728 qpair failed and we were unable to recover it. 00:35:54.728 [2024-10-13 01:46:40.195879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.728 [2024-10-13 01:46:40.195905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.728 qpair failed and we were unable to recover it. 00:35:54.728 [2024-10-13 01:46:40.195988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.728 [2024-10-13 01:46:40.196014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.728 qpair failed and we were unable to recover it. 
00:35:54.728 [2024-10-13 01:46:40.196108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.729 [2024-10-13 01:46:40.196134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.729 qpair failed and we were unable to recover it. 00:35:54.729 [2024-10-13 01:46:40.196251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.729 [2024-10-13 01:46:40.196279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.729 qpair failed and we were unable to recover it. 00:35:54.729 [2024-10-13 01:46:40.196377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.729 [2024-10-13 01:46:40.196403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.729 qpair failed and we were unable to recover it. 00:35:54.729 [2024-10-13 01:46:40.196504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.729 [2024-10-13 01:46:40.196530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.729 qpair failed and we were unable to recover it. 00:35:54.729 [2024-10-13 01:46:40.196648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.729 [2024-10-13 01:46:40.196674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.729 qpair failed and we were unable to recover it. 00:35:54.729 [2024-10-13 01:46:40.196763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.729 [2024-10-13 01:46:40.196789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.729 qpair failed and we were unable to recover it. 00:35:54.729 [2024-10-13 01:46:40.196911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.729 [2024-10-13 01:46:40.196936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.729 qpair failed and we were unable to recover it. 00:35:54.729 [2024-10-13 01:46:40.197020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.729 [2024-10-13 01:46:40.197051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.729 qpair failed and we were unable to recover it. 00:35:54.729 [2024-10-13 01:46:40.197139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.729 [2024-10-13 01:46:40.197166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.729 qpair failed and we were unable to recover it. 00:35:54.729 [2024-10-13 01:46:40.197260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.729 [2024-10-13 01:46:40.197299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.729 qpair failed and we were unable to recover it. 
00:35:54.729 [2024-10-13 01:46:40.197390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.729 [2024-10-13 01:46:40.197417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.729 qpair failed and we were unable to recover it. 00:35:54.729 [2024-10-13 01:46:40.197509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.729 [2024-10-13 01:46:40.197535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.729 qpair failed and we were unable to recover it. 00:35:54.729 [2024-10-13 01:46:40.197622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.729 [2024-10-13 01:46:40.197647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.729 qpair failed and we were unable to recover it. 00:35:54.729 [2024-10-13 01:46:40.197840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.729 [2024-10-13 01:46:40.197871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.729 qpair failed and we were unable to recover it. 00:35:54.729 [2024-10-13 01:46:40.198008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.729 [2024-10-13 01:46:40.198034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.729 qpair failed and we were unable to recover it. 00:35:54.729 [2024-10-13 01:46:40.198121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.729 [2024-10-13 01:46:40.198148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.729 qpair failed and we were unable to recover it. 00:35:54.729 [2024-10-13 01:46:40.198225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.729 [2024-10-13 01:46:40.198251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.729 qpair failed and we were unable to recover it. 00:35:54.729 [2024-10-13 01:46:40.198352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.729 [2024-10-13 01:46:40.198400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.729 qpair failed and we were unable to recover it. 00:35:54.729 [2024-10-13 01:46:40.198513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.729 [2024-10-13 01:46:40.198541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.729 qpair failed and we were unable to recover it. 00:35:54.729 [2024-10-13 01:46:40.198655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.729 [2024-10-13 01:46:40.198680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.729 qpair failed and we were unable to recover it. 
00:35:54.729 [2024-10-13 01:46:40.198768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.729 [2024-10-13 01:46:40.198793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.729 qpair failed and we were unable to recover it. 00:35:54.729 [2024-10-13 01:46:40.198928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.729 [2024-10-13 01:46:40.198954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.729 qpair failed and we were unable to recover it. 00:35:54.729 [2024-10-13 01:46:40.199037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.729 [2024-10-13 01:46:40.199063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.729 qpair failed and we were unable to recover it. 00:35:54.729 [2024-10-13 01:46:40.199154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.729 [2024-10-13 01:46:40.199179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.729 qpair failed and we were unable to recover it. 00:35:54.729 [2024-10-13 01:46:40.199261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.729 [2024-10-13 01:46:40.199286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.729 qpair failed and we were unable to recover it. 00:35:54.729 [2024-10-13 01:46:40.199371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.729 [2024-10-13 01:46:40.199396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.729 qpair failed and we were unable to recover it. 00:35:54.729 [2024-10-13 01:46:40.199510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.729 [2024-10-13 01:46:40.199536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.729 qpair failed and we were unable to recover it. 00:35:54.729 [2024-10-13 01:46:40.199730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.729 [2024-10-13 01:46:40.199755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.729 qpair failed and we were unable to recover it. 00:35:54.729 [2024-10-13 01:46:40.199867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.729 [2024-10-13 01:46:40.199892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.729 qpair failed and we were unable to recover it. 00:35:54.729 [2024-10-13 01:46:40.200953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.729 [2024-10-13 01:46:40.200994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.729 qpair failed and we were unable to recover it. 
00:35:54.729 [2024-10-13 01:46:40.201135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.729 [2024-10-13 01:46:40.201163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.729 qpair failed and we were unable to recover it. 00:35:54.729 [2024-10-13 01:46:40.201283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.729 [2024-10-13 01:46:40.201309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.729 qpair failed and we were unable to recover it. 00:35:54.729 [2024-10-13 01:46:40.201411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.729 [2024-10-13 01:46:40.201437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.729 qpair failed and we were unable to recover it. 00:35:54.729 [2024-10-13 01:46:40.201567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.729 [2024-10-13 01:46:40.201593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.729 qpair failed and we were unable to recover it. 00:35:54.729 [2024-10-13 01:46:40.201685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.729 [2024-10-13 01:46:40.201715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.729 qpair failed and we were unable to recover it. 00:35:54.729 [2024-10-13 01:46:40.201817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.729 [2024-10-13 01:46:40.201842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.729 qpair failed and we were unable to recover it. 00:35:54.729 [2024-10-13 01:46:40.201955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.729 [2024-10-13 01:46:40.201980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.729 qpair failed and we were unable to recover it. 00:35:54.729 [2024-10-13 01:46:40.202113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.729 [2024-10-13 01:46:40.202154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.729 qpair failed and we were unable to recover it. 00:35:54.729 [2024-10-13 01:46:40.202266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.729 [2024-10-13 01:46:40.202293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.729 qpair failed and we were unable to recover it. 00:35:54.729 [2024-10-13 01:46:40.202386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.729 [2024-10-13 01:46:40.202412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.729 qpair failed and we were unable to recover it. 
00:35:54.729 [2024-10-13 01:46:40.202497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.730 [2024-10-13 01:46:40.202523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.730 qpair failed and we were unable to recover it. 00:35:54.730 [2024-10-13 01:46:40.202618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.730 [2024-10-13 01:46:40.202645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.730 qpair failed and we were unable to recover it. 00:35:54.730 [2024-10-13 01:46:40.202740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.730 [2024-10-13 01:46:40.202766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.730 qpair failed and we were unable to recover it. 00:35:54.730 [2024-10-13 01:46:40.202880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.730 [2024-10-13 01:46:40.202906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.730 qpair failed and we were unable to recover it. 00:35:54.730 [2024-10-13 01:46:40.202999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.730 [2024-10-13 01:46:40.203025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.730 qpair failed and we were unable to recover it. 00:35:54.730 [2024-10-13 01:46:40.203156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.730 [2024-10-13 01:46:40.203195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.730 qpair failed and we were unable to recover it. 00:35:54.730 [2024-10-13 01:46:40.203324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.730 [2024-10-13 01:46:40.203352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.730 qpair failed and we were unable to recover it. 00:35:54.730 [2024-10-13 01:46:40.203444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.730 [2024-10-13 01:46:40.203480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.730 qpair failed and we were unable to recover it. 00:35:54.730 [2024-10-13 01:46:40.203572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.730 [2024-10-13 01:46:40.203599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.730 qpair failed and we were unable to recover it. 00:35:54.730 [2024-10-13 01:46:40.203685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.730 [2024-10-13 01:46:40.203712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.730 qpair failed and we were unable to recover it. 
00:35:54.730 [2024-10-13 01:46:40.203806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.730 [2024-10-13 01:46:40.203831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.730 qpair failed and we were unable to recover it. 00:35:54.730 [2024-10-13 01:46:40.203924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.730 [2024-10-13 01:46:40.203951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.730 qpair failed and we were unable to recover it. 00:35:54.730 [2024-10-13 01:46:40.204066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.730 [2024-10-13 01:46:40.204091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.730 qpair failed and we were unable to recover it. 00:35:54.730 [2024-10-13 01:46:40.204188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.730 [2024-10-13 01:46:40.204213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.730 qpair failed and we were unable to recover it. 00:35:54.730 [2024-10-13 01:46:40.204293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.730 [2024-10-13 01:46:40.204319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.730 qpair failed and we were unable to recover it. 00:35:54.730 [2024-10-13 01:46:40.204413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.730 [2024-10-13 01:46:40.204452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.730 qpair failed and we were unable to recover it. 00:35:54.730 [2024-10-13 01:46:40.204556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.730 [2024-10-13 01:46:40.204584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.730 qpair failed and we were unable to recover it. 00:35:54.730 [2024-10-13 01:46:40.204672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.730 [2024-10-13 01:46:40.204697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.730 qpair failed and we were unable to recover it. 00:35:54.730 [2024-10-13 01:46:40.204786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.730 [2024-10-13 01:46:40.204811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.730 qpair failed and we were unable to recover it. 00:35:54.730 [2024-10-13 01:46:40.204898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.730 [2024-10-13 01:46:40.204924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.730 qpair failed and we were unable to recover it. 
00:35:54.730 [2024-10-13 01:46:40.205015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.730 [2024-10-13 01:46:40.205041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.730 qpair failed and we were unable to recover it. 00:35:54.730 [2024-10-13 01:46:40.205159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.730 [2024-10-13 01:46:40.205186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.730 qpair failed and we were unable to recover it. 00:35:54.730 [2024-10-13 01:46:40.205311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.730 [2024-10-13 01:46:40.205350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.730 qpair failed and we were unable to recover it. 00:35:54.730 [2024-10-13 01:46:40.205448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.730 [2024-10-13 01:46:40.205482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.730 qpair failed and we were unable to recover it. 00:35:54.730 [2024-10-13 01:46:40.205602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.730 [2024-10-13 01:46:40.205628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.730 qpair failed and we were unable to recover it. 00:35:54.730 [2024-10-13 01:46:40.205723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.730 [2024-10-13 01:46:40.205749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.730 qpair failed and we were unable to recover it. 00:35:54.730 [2024-10-13 01:46:40.205846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.730 [2024-10-13 01:46:40.205873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.730 qpair failed and we were unable to recover it. 00:35:54.730 [2024-10-13 01:46:40.205955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.730 [2024-10-13 01:46:40.205981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.730 qpair failed and we were unable to recover it. 00:35:54.730 [2024-10-13 01:46:40.206099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.730 [2024-10-13 01:46:40.206126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.730 qpair failed and we were unable to recover it. 00:35:54.730 [2024-10-13 01:46:40.206256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.730 [2024-10-13 01:46:40.206286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.730 qpair failed and we were unable to recover it. 
00:35:54.730 [2024-10-13 01:46:40.206404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.730 [2024-10-13 01:46:40.206431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.730 qpair failed and we were unable to recover it. 00:35:54.730 [2024-10-13 01:46:40.206562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.730 [2024-10-13 01:46:40.206589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.730 qpair failed and we were unable to recover it. 00:35:54.730 [2024-10-13 01:46:40.206682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.730 [2024-10-13 01:46:40.206708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.730 qpair failed and we were unable to recover it. 00:35:54.730 [2024-10-13 01:46:40.206802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.730 [2024-10-13 01:46:40.206828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.730 qpair failed and we were unable to recover it. 00:35:54.730 [2024-10-13 01:46:40.206922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.730 [2024-10-13 01:46:40.206955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.730 qpair failed and we were unable to recover it. 00:35:54.730 [2024-10-13 01:46:40.207151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.730 [2024-10-13 01:46:40.207178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.730 qpair failed and we were unable to recover it. 00:35:54.730 [2024-10-13 01:46:40.207268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.730 [2024-10-13 01:46:40.207294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.730 qpair failed and we were unable to recover it. 00:35:54.730 [2024-10-13 01:46:40.207394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.730 [2024-10-13 01:46:40.207420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.730 qpair failed and we were unable to recover it. 00:35:54.730 [2024-10-13 01:46:40.207509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.730 [2024-10-13 01:46:40.207536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.730 qpair failed and we were unable to recover it. 00:35:54.730 [2024-10-13 01:46:40.207630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.730 [2024-10-13 01:46:40.207655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.730 qpair failed and we were unable to recover it. 
00:35:54.730 [2024-10-13 01:46:40.207747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.731 [2024-10-13 01:46:40.207772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.731 qpair failed and we were unable to recover it. 00:35:54.731 [2024-10-13 01:46:40.207869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.731 [2024-10-13 01:46:40.207895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.731 qpair failed and we were unable to recover it. 00:35:54.731 [2024-10-13 01:46:40.208012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.731 [2024-10-13 01:46:40.208037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.731 qpair failed and we were unable to recover it. 00:35:54.731 [2024-10-13 01:46:40.208143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.731 [2024-10-13 01:46:40.208170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.731 qpair failed and we were unable to recover it. 00:35:54.731 [2024-10-13 01:46:40.208360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.731 [2024-10-13 01:46:40.208385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.731 qpair failed and we were unable to recover it. 00:35:54.731 [2024-10-13 01:46:40.208476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.731 [2024-10-13 01:46:40.208502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.731 qpair failed and we were unable to recover it. 00:35:54.731 [2024-10-13 01:46:40.208616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.731 [2024-10-13 01:46:40.208642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.731 qpair failed and we were unable to recover it. 00:35:54.731 [2024-10-13 01:46:40.208730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.731 [2024-10-13 01:46:40.208756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.731 qpair failed and we were unable to recover it. 00:35:54.731 [2024-10-13 01:46:40.208892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.731 [2024-10-13 01:46:40.208930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.731 qpair failed and we were unable to recover it. 00:35:54.731 [2024-10-13 01:46:40.209027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.731 [2024-10-13 01:46:40.209059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.731 qpair failed and we were unable to recover it. 
00:35:54.731 [2024-10-13 01:46:40.209160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.731 [2024-10-13 01:46:40.209200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.731 qpair failed and we were unable to recover it. 00:35:54.731 [2024-10-13 01:46:40.209300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.731 [2024-10-13 01:46:40.209329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.731 qpair failed and we were unable to recover it. 00:35:54.731 [2024-10-13 01:46:40.209416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.731 [2024-10-13 01:46:40.209443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.731 qpair failed and we were unable to recover it. 00:35:54.731 [2024-10-13 01:46:40.209582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.731 [2024-10-13 01:46:40.209610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.731 qpair failed and we were unable to recover it. 00:35:54.731 [2024-10-13 01:46:40.209695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.731 [2024-10-13 01:46:40.209721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.731 qpair failed and we were unable to recover it. 00:35:54.731 [2024-10-13 01:46:40.209844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.731 [2024-10-13 01:46:40.209878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.731 qpair failed and we were unable to recover it. 00:35:54.731 [2024-10-13 01:46:40.209979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.731 [2024-10-13 01:46:40.210006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.731 qpair failed and we were unable to recover it. 00:35:54.731 [2024-10-13 01:46:40.210125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.731 [2024-10-13 01:46:40.210152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.731 qpair failed and we were unable to recover it. 00:35:54.731 [2024-10-13 01:46:40.210359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.731 [2024-10-13 01:46:40.210386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.731 qpair failed and we were unable to recover it. 00:35:54.731 [2024-10-13 01:46:40.210485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.731 [2024-10-13 01:46:40.210510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.731 qpair failed and we were unable to recover it. 
00:35:54.731 [2024-10-13 01:46:40.210601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.731 [2024-10-13 01:46:40.210627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.731 qpair failed and we were unable to recover it. 00:35:54.731 [2024-10-13 01:46:40.210736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.731 [2024-10-13 01:46:40.210775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.731 qpair failed and we were unable to recover it. 00:35:54.731 [2024-10-13 01:46:40.210903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.731 [2024-10-13 01:46:40.210932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.731 qpair failed and we were unable to recover it. 00:35:54.731 [2024-10-13 01:46:40.211013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.731 [2024-10-13 01:46:40.211040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.731 qpair failed and we were unable to recover it. 00:35:54.731 [2024-10-13 01:46:40.211133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.731 [2024-10-13 01:46:40.211159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.731 qpair failed and we were unable to recover it. 00:35:54.731 [2024-10-13 01:46:40.211244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.731 [2024-10-13 01:46:40.211280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.731 qpair failed and we were unable to recover it. 00:35:54.731 [2024-10-13 01:46:40.211374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.731 [2024-10-13 01:46:40.211402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.731 qpair failed and we were unable to recover it. 00:35:54.731 [2024-10-13 01:46:40.211514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.731 [2024-10-13 01:46:40.211542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.731 qpair failed and we were unable to recover it. 00:35:54.731 [2024-10-13 01:46:40.211640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.731 [2024-10-13 01:46:40.211666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.731 qpair failed and we were unable to recover it. 00:35:54.731 [2024-10-13 01:46:40.211761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.731 [2024-10-13 01:46:40.211786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.731 qpair failed and we were unable to recover it. 
00:35:54.731 [2024-10-13 01:46:40.211893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.731 [2024-10-13 01:46:40.211918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.731 qpair failed and we were unable to recover it. 00:35:54.731 [2024-10-13 01:46:40.212009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.731 [2024-10-13 01:46:40.212035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.731 qpair failed and we were unable to recover it. 00:35:54.731 [2024-10-13 01:46:40.212123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.731 [2024-10-13 01:46:40.212150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.731 qpair failed and we were unable to recover it. 00:35:54.731 [2024-10-13 01:46:40.212238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.731 [2024-10-13 01:46:40.212266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.731 qpair failed and we were unable to recover it. 00:35:54.731 [2024-10-13 01:46:40.212367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.731 [2024-10-13 01:46:40.212398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.731 qpair failed and we were unable to recover it. 00:35:54.731 [2024-10-13 01:46:40.212499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.731 [2024-10-13 01:46:40.212527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.731 qpair failed and we were unable to recover it. 00:35:54.731 [2024-10-13 01:46:40.212608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.731 [2024-10-13 01:46:40.212635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.731 qpair failed and we were unable to recover it. 00:35:54.731 [2024-10-13 01:46:40.212719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.731 [2024-10-13 01:46:40.212749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.731 qpair failed and we were unable to recover it. 00:35:54.731 [2024-10-13 01:46:40.212833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.731 [2024-10-13 01:46:40.212859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.731 qpair failed and we were unable to recover it. 00:35:54.731 [2024-10-13 01:46:40.212952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.731 [2024-10-13 01:46:40.212980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.731 qpair failed and we were unable to recover it. 
00:35:54.731 [2024-10-13 01:46:40.213073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.731 [2024-10-13 01:46:40.213100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.731 qpair failed and we were unable to recover it. 00:35:54.732 [2024-10-13 01:46:40.213221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.732 [2024-10-13 01:46:40.213249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.732 qpair failed and we were unable to recover it. 00:35:54.732 [2024-10-13 01:46:40.213330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.732 [2024-10-13 01:46:40.213355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.732 qpair failed and we were unable to recover it. 00:35:54.732 [2024-10-13 01:46:40.213455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.732 [2024-10-13 01:46:40.213485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.732 qpair failed and we were unable to recover it. 00:35:54.732 [2024-10-13 01:46:40.213600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.732 [2024-10-13 01:46:40.213626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.732 qpair failed and we were unable to recover it. 00:35:54.732 [2024-10-13 01:46:40.213718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.732 [2024-10-13 01:46:40.213744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.732 qpair failed and we were unable to recover it. 00:35:54.732 [2024-10-13 01:46:40.213858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.732 [2024-10-13 01:46:40.213883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.732 qpair failed and we were unable to recover it. 00:35:54.732 [2024-10-13 01:46:40.213965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.732 [2024-10-13 01:46:40.213994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.732 qpair failed and we were unable to recover it. 00:35:54.732 [2024-10-13 01:46:40.214089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.732 [2024-10-13 01:46:40.214116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.732 qpair failed and we were unable to recover it. 00:35:54.732 [2024-10-13 01:46:40.214200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.732 [2024-10-13 01:46:40.214226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.732 qpair failed and we were unable to recover it. 
00:35:54.732 [2024-10-13 01:46:40.214313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.732 [2024-10-13 01:46:40.214339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.732 qpair failed and we were unable to recover it. 00:35:54.732 [2024-10-13 01:46:40.214426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.732 [2024-10-13 01:46:40.214451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.732 qpair failed and we were unable to recover it. 00:35:54.732 [2024-10-13 01:46:40.214552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.732 [2024-10-13 01:46:40.214580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.732 qpair failed and we were unable to recover it. 00:35:54.732 [2024-10-13 01:46:40.214672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.732 [2024-10-13 01:46:40.214698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.732 qpair failed and we were unable to recover it. 00:35:54.732 [2024-10-13 01:46:40.214810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.732 [2024-10-13 01:46:40.214836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.732 qpair failed and we were unable to recover it. 00:35:54.732 [2024-10-13 01:46:40.215030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.732 [2024-10-13 01:46:40.215057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.732 qpair failed and we were unable to recover it. 00:35:54.732 [2024-10-13 01:46:40.215143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.732 [2024-10-13 01:46:40.215170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.732 qpair failed and we were unable to recover it. 00:35:54.732 [2024-10-13 01:46:40.215286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.732 [2024-10-13 01:46:40.215313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.732 qpair failed and we were unable to recover it. 00:35:54.732 [2024-10-13 01:46:40.215406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.732 [2024-10-13 01:46:40.215433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.732 qpair failed and we were unable to recover it. 00:35:54.732 [2024-10-13 01:46:40.215536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.732 [2024-10-13 01:46:40.215563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.732 qpair failed and we were unable to recover it. 
00:35:54.732 [2024-10-13 01:46:40.215643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.732 [2024-10-13 01:46:40.215668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.732 qpair failed and we were unable to recover it. 00:35:54.732 [2024-10-13 01:46:40.215792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.732 [2024-10-13 01:46:40.215821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.732 qpair failed and we were unable to recover it. 00:35:54.732 [2024-10-13 01:46:40.215929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.732 [2024-10-13 01:46:40.215955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.732 qpair failed and we were unable to recover it. 00:35:54.732 [2024-10-13 01:46:40.216059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.732 [2024-10-13 01:46:40.216083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.732 qpair failed and we were unable to recover it. 00:35:54.732 [2024-10-13 01:46:40.216162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.732 [2024-10-13 01:46:40.216188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.732 qpair failed and we were unable to recover it. 00:35:54.732 [2024-10-13 01:46:40.216264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.732 [2024-10-13 01:46:40.216290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.732 qpair failed and we were unable to recover it. 00:35:54.732 [2024-10-13 01:46:40.216406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.732 [2024-10-13 01:46:40.216435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.732 qpair failed and we were unable to recover it. 00:35:54.732 [2024-10-13 01:46:40.216528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.732 [2024-10-13 01:46:40.216556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.732 qpair failed and we were unable to recover it. 00:35:54.732 [2024-10-13 01:46:40.216647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.732 [2024-10-13 01:46:40.216674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.732 qpair failed and we were unable to recover it. 00:35:54.732 [2024-10-13 01:46:40.216802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.732 [2024-10-13 01:46:40.216828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.732 qpair failed and we were unable to recover it. 
00:35:54.732 [2024-10-13 01:46:40.216921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.732 [2024-10-13 01:46:40.216948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.732 qpair failed and we were unable to recover it. 00:35:54.732 [2024-10-13 01:46:40.217057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.732 [2024-10-13 01:46:40.217084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.732 qpair failed and we were unable to recover it. 00:35:54.732 [2024-10-13 01:46:40.217168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.732 [2024-10-13 01:46:40.217194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.732 qpair failed and we were unable to recover it. 00:35:54.732 [2024-10-13 01:46:40.217281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.732 [2024-10-13 01:46:40.217309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.732 qpair failed and we were unable to recover it. 00:35:54.732 [2024-10-13 01:46:40.217396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.732 [2024-10-13 01:46:40.217421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.732 qpair failed and we were unable to recover it. 00:35:54.732 [2024-10-13 01:46:40.217520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.732 [2024-10-13 01:46:40.217549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.732 qpair failed and we were unable to recover it. 00:35:54.732 [2024-10-13 01:46:40.217631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.732 [2024-10-13 01:46:40.217656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.732 qpair failed and we were unable to recover it. 00:35:54.733 [2024-10-13 01:46:40.217738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.733 [2024-10-13 01:46:40.217763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.733 qpair failed and we were unable to recover it. 00:35:54.733 [2024-10-13 01:46:40.217852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.733 [2024-10-13 01:46:40.217877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.733 qpair failed and we were unable to recover it. 00:35:54.733 [2024-10-13 01:46:40.217959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.733 [2024-10-13 01:46:40.217987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.733 qpair failed and we were unable to recover it. 
00:35:54.733 [2024-10-13 01:46:40.218067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.733 [2024-10-13 01:46:40.218093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.733 qpair failed and we were unable to recover it. 00:35:54.733 [2024-10-13 01:46:40.218206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.733 [2024-10-13 01:46:40.218233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.733 qpair failed and we were unable to recover it. 00:35:54.733 [2024-10-13 01:46:40.218341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.733 [2024-10-13 01:46:40.218367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.733 qpair failed and we were unable to recover it. 00:35:54.733 [2024-10-13 01:46:40.218447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.733 [2024-10-13 01:46:40.218481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.733 qpair failed and we were unable to recover it. 00:35:54.733 [2024-10-13 01:46:40.218612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.733 [2024-10-13 01:46:40.218637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.733 qpair failed and we were unable to recover it. 00:35:54.733 [2024-10-13 01:46:40.218728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.733 [2024-10-13 01:46:40.218753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.733 qpair failed and we were unable to recover it. 00:35:54.733 [2024-10-13 01:46:40.218862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.733 [2024-10-13 01:46:40.218887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.733 qpair failed and we were unable to recover it. 00:35:54.733 [2024-10-13 01:46:40.219021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.733 [2024-10-13 01:46:40.219046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.733 qpair failed and we were unable to recover it. 00:35:54.733 [2024-10-13 01:46:40.219132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.733 [2024-10-13 01:46:40.219159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.733 qpair failed and we were unable to recover it. 00:35:54.733 [2024-10-13 01:46:40.219236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.733 [2024-10-13 01:46:40.219261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.733 qpair failed and we were unable to recover it. 
00:35:54.733 [2024-10-13 01:46:40.219348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.733 [2024-10-13 01:46:40.219373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.733 qpair failed and we were unable to recover it. 00:35:54.733 [2024-10-13 01:46:40.219508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.733 [2024-10-13 01:46:40.219533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.733 qpair failed and we were unable to recover it. 00:35:54.733 [2024-10-13 01:46:40.219625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.733 [2024-10-13 01:46:40.219650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.733 qpair failed and we were unable to recover it. 00:35:54.733 [2024-10-13 01:46:40.219730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.733 [2024-10-13 01:46:40.219755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.733 qpair failed and we were unable to recover it. 00:35:54.733 [2024-10-13 01:46:40.219840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.733 [2024-10-13 01:46:40.219864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.733 qpair failed and we were unable to recover it. 00:35:54.733 [2024-10-13 01:46:40.219945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.733 [2024-10-13 01:46:40.219970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.733 qpair failed and we were unable to recover it. 00:35:54.733 [2024-10-13 01:46:40.220090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.733 [2024-10-13 01:46:40.220119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.733 qpair failed and we were unable to recover it. 00:35:54.733 [2024-10-13 01:46:40.220199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.733 [2024-10-13 01:46:40.220225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.733 qpair failed and we were unable to recover it. 00:35:54.733 [2024-10-13 01:46:40.220306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.733 [2024-10-13 01:46:40.220334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.733 qpair failed and we were unable to recover it. 00:35:54.733 [2024-10-13 01:46:40.220419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.733 [2024-10-13 01:46:40.220446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.733 qpair failed and we were unable to recover it. 
00:35:54.733 [2024-10-13 01:46:40.220573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.733 [2024-10-13 01:46:40.220600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.733 qpair failed and we were unable to recover it. 00:35:54.733 [2024-10-13 01:46:40.220686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.733 [2024-10-13 01:46:40.220712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.733 qpair failed and we were unable to recover it. 00:35:54.733 [2024-10-13 01:46:40.220809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.733 [2024-10-13 01:46:40.220835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.733 qpair failed and we were unable to recover it. 00:35:54.733 [2024-10-13 01:46:40.220926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.733 [2024-10-13 01:46:40.220951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.733 qpair failed and we were unable to recover it. 00:35:54.733 [2024-10-13 01:46:40.221036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.733 [2024-10-13 01:46:40.221063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.733 qpair failed and we were unable to recover it. 00:35:54.733 [2024-10-13 01:46:40.221178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.733 [2024-10-13 01:46:40.221204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.733 qpair failed and we were unable to recover it. 00:35:54.733 [2024-10-13 01:46:40.221310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.733 [2024-10-13 01:46:40.221334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.733 qpair failed and we were unable to recover it. 00:35:54.733 [2024-10-13 01:46:40.221455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.733 [2024-10-13 01:46:40.221488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.733 qpair failed and we were unable to recover it. 00:35:54.733 [2024-10-13 01:46:40.221582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.733 [2024-10-13 01:46:40.221608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.733 qpair failed and we were unable to recover it. 00:35:54.733 [2024-10-13 01:46:40.221722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.733 [2024-10-13 01:46:40.221748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.733 qpair failed and we were unable to recover it. 
00:35:54.733 [2024-10-13 01:46:40.221838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.733 [2024-10-13 01:46:40.221863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.733 qpair failed and we were unable to recover it. 00:35:54.733 [2024-10-13 01:46:40.221977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.733 [2024-10-13 01:46:40.222003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.733 qpair failed and we were unable to recover it. 00:35:54.733 [2024-10-13 01:46:40.222135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.733 [2024-10-13 01:46:40.222174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.733 qpair failed and we were unable to recover it. 00:35:54.733 [2024-10-13 01:46:40.222295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.733 [2024-10-13 01:46:40.222322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.733 qpair failed and we were unable to recover it. 00:35:54.733 [2024-10-13 01:46:40.222413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.733 [2024-10-13 01:46:40.222441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.733 qpair failed and we were unable to recover it. 00:35:54.733 [2024-10-13 01:46:40.222542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.733 [2024-10-13 01:46:40.222569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.733 qpair failed and we were unable to recover it. 00:35:54.733 [2024-10-13 01:46:40.222663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.733 [2024-10-13 01:46:40.222690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.733 qpair failed and we were unable to recover it. 00:35:54.733 [2024-10-13 01:46:40.222793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.734 [2024-10-13 01:46:40.222818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.734 qpair failed and we were unable to recover it. 00:35:54.734 [2024-10-13 01:46:40.222911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.734 [2024-10-13 01:46:40.222936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.734 qpair failed and we were unable to recover it. 00:35:54.734 [2024-10-13 01:46:40.223038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.734 [2024-10-13 01:46:40.223064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.734 qpair failed and we were unable to recover it. 
00:35:54.734 [2024-10-13 01:46:40.223157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.734 [2024-10-13 01:46:40.223182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.734 qpair failed and we were unable to recover it. 00:35:54.734 [2024-10-13 01:46:40.223270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.734 [2024-10-13 01:46:40.223296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.734 qpair failed and we were unable to recover it. 00:35:54.734 [2024-10-13 01:46:40.223375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.734 [2024-10-13 01:46:40.223401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.734 qpair failed and we were unable to recover it. 00:35:54.734 [2024-10-13 01:46:40.223506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.734 [2024-10-13 01:46:40.223532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.734 qpair failed and we were unable to recover it. 00:35:54.734 [2024-10-13 01:46:40.223627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.734 [2024-10-13 01:46:40.223653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.734 qpair failed and we were unable to recover it. 00:35:54.734 [2024-10-13 01:46:40.223734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.734 [2024-10-13 01:46:40.223759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.734 qpair failed and we were unable to recover it. 00:35:54.734 [2024-10-13 01:46:40.223846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.734 [2024-10-13 01:46:40.223872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.734 qpair failed and we were unable to recover it. 00:35:54.734 [2024-10-13 01:46:40.223959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.734 [2024-10-13 01:46:40.223985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.734 qpair failed and we were unable to recover it. 00:35:54.734 [2024-10-13 01:46:40.224096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.734 [2024-10-13 01:46:40.224131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.734 qpair failed and we were unable to recover it. 00:35:54.734 [2024-10-13 01:46:40.224227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.734 [2024-10-13 01:46:40.224257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.734 qpair failed and we were unable to recover it. 
00:35:54.734 [2024-10-13 01:46:40.224372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.734 [2024-10-13 01:46:40.224398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.734 qpair failed and we were unable to recover it. 00:35:54.734 [2024-10-13 01:46:40.224515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.734 [2024-10-13 01:46:40.224541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.734 qpair failed and we were unable to recover it. 00:35:54.734 [2024-10-13 01:46:40.224625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.734 [2024-10-13 01:46:40.224652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.734 qpair failed and we were unable to recover it. 00:35:54.734 [2024-10-13 01:46:40.224745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.734 [2024-10-13 01:46:40.224793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.734 qpair failed and we were unable to recover it. 00:35:54.734 [2024-10-13 01:46:40.224890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.734 [2024-10-13 01:46:40.224918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.734 qpair failed and we were unable to recover it. 00:35:54.734 [2024-10-13 01:46:40.225027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.734 [2024-10-13 01:46:40.225053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.734 qpair failed and we were unable to recover it. 00:35:54.734 [2024-10-13 01:46:40.225136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.734 [2024-10-13 01:46:40.225162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.734 qpair failed and we were unable to recover it. 00:35:54.734 [2024-10-13 01:46:40.225276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.734 [2024-10-13 01:46:40.225302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.734 qpair failed and we were unable to recover it. 00:35:54.734 [2024-10-13 01:46:40.225438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.734 [2024-10-13 01:46:40.225483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.734 qpair failed and we were unable to recover it. 00:35:54.734 [2024-10-13 01:46:40.225573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.734 [2024-10-13 01:46:40.225600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.734 qpair failed and we were unable to recover it. 
00:35:54.734 [2024-10-13 01:46:40.225686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.734 [2024-10-13 01:46:40.225712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.734 qpair failed and we were unable to recover it. 00:35:54.734 [2024-10-13 01:46:40.225833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.734 [2024-10-13 01:46:40.225859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:54.734 qpair failed and we were unable to recover it. 00:35:54.734 [2024-10-13 01:46:40.225988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.734 [2024-10-13 01:46:40.226017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:54.734 qpair failed and we were unable to recover it. 00:35:54.734 [2024-10-13 01:46:40.226136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.734 [2024-10-13 01:46:40.226167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:54.734 qpair failed and we were unable to recover it. 00:35:54.734 [2024-10-13 01:46:40.226258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.734 [2024-10-13 01:46:40.226285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.734 qpair failed and we were unable to recover it. 00:35:54.734 [2024-10-13 01:46:40.226392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.734 [2024-10-13 01:46:40.226418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.734 qpair failed and we were unable to recover it. 00:35:54.734 [2024-10-13 01:46:40.226546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.734 [2024-10-13 01:46:40.226573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.734 qpair failed and we were unable to recover it. 00:35:54.734 [2024-10-13 01:46:40.226666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.734 [2024-10-13 01:46:40.226692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.734 qpair failed and we were unable to recover it. 00:35:54.734 [2024-10-13 01:46:40.226784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.734 [2024-10-13 01:46:40.226810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:54.734 qpair failed and we were unable to recover it. 00:35:54.734 [2024-10-13 01:46:40.226939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.734 [2024-10-13 01:46:40.226964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.017 qpair failed and we were unable to recover it. 
00:35:55.017 [2024-10-13 01:46:40.227053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.017 [2024-10-13 01:46:40.227081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.017 qpair failed and we were unable to recover it. 00:35:55.017 [2024-10-13 01:46:40.227170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.017 [2024-10-13 01:46:40.227197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.017 qpair failed and we were unable to recover it. 00:35:55.017 [2024-10-13 01:46:40.227335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.017 [2024-10-13 01:46:40.227361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.017 qpair failed and we were unable to recover it. 00:35:55.017 [2024-10-13 01:46:40.227443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-10-13 01:46:40.227479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-10-13 01:46:40.227567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-10-13 01:46:40.227594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-10-13 01:46:40.227681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-10-13 01:46:40.227710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-10-13 01:46:40.227818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-10-13 01:46:40.227854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-10-13 01:46:40.227937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-10-13 01:46:40.227963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-10-13 01:46:40.228056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-10-13 01:46:40.228082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-10-13 01:46:40.228168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-10-13 01:46:40.228196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 
00:35:55.018 [2024-10-13 01:46:40.228313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-10-13 01:46:40.228339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-10-13 01:46:40.228451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-10-13 01:46:40.228492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-10-13 01:46:40.228581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-10-13 01:46:40.228607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-10-13 01:46:40.228688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-10-13 01:46:40.228716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-10-13 01:46:40.228841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-10-13 01:46:40.228867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-10-13 01:46:40.228985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-10-13 01:46:40.229013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-10-13 01:46:40.229108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-10-13 01:46:40.229135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-10-13 01:46:40.229250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-10-13 01:46:40.229277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-10-13 01:46:40.229364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-10-13 01:46:40.229396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-10-13 01:46:40.229511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-10-13 01:46:40.229540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 
00:35:55.018 [2024-10-13 01:46:40.229618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-10-13 01:46:40.229644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-10-13 01:46:40.229730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-10-13 01:46:40.229757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-10-13 01:46:40.229853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-10-13 01:46:40.229878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-10-13 01:46:40.229957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-10-13 01:46:40.229983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-10-13 01:46:40.230068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-10-13 01:46:40.230093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-10-13 01:46:40.230183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-10-13 01:46:40.230211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-10-13 01:46:40.230325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-10-13 01:46:40.230352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-10-13 01:46:40.230437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-10-13 01:46:40.230463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-10-13 01:46:40.230558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-10-13 01:46:40.230585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-10-13 01:46:40.230678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-10-13 01:46:40.230704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 
00:35:55.018 [2024-10-13 01:46:40.230824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-10-13 01:46:40.230850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-10-13 01:46:40.230969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-10-13 01:46:40.230995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-10-13 01:46:40.231085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-10-13 01:46:40.231112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-10-13 01:46:40.231190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-10-13 01:46:40.231216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-10-13 01:46:40.231348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-10-13 01:46:40.231374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-10-13 01:46:40.231455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-10-13 01:46:40.231501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-10-13 01:46:40.231591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-10-13 01:46:40.231616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-10-13 01:46:40.231696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-10-13 01:46:40.231722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-10-13 01:46:40.231813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-10-13 01:46:40.231841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-10-13 01:46:40.231924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-10-13 01:46:40.231958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 
00:35:55.018 [2024-10-13 01:46:40.232056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-10-13 01:46:40.232081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-10-13 01:46:40.232189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-10-13 01:46:40.232214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-10-13 01:46:40.232325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-10-13 01:46:40.232350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.019 [2024-10-13 01:46:40.232446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-10-13 01:46:40.232480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-10-13 01:46:40.232566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-10-13 01:46:40.232593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-10-13 01:46:40.232689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-10-13 01:46:40.232729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-10-13 01:46:40.232829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-10-13 01:46:40.232858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-10-13 01:46:40.232948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-10-13 01:46:40.232975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-10-13 01:46:40.233089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-10-13 01:46:40.233116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-10-13 01:46:40.233243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-10-13 01:46:40.233270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 
00:35:55.019 [2024-10-13 01:46:40.233377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-10-13 01:46:40.233403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-10-13 01:46:40.233514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-10-13 01:46:40.233542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-10-13 01:46:40.233631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-10-13 01:46:40.233658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-10-13 01:46:40.233753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-10-13 01:46:40.233779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-10-13 01:46:40.233870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-10-13 01:46:40.233896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-10-13 01:46:40.233977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-10-13 01:46:40.234005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-10-13 01:46:40.234092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-10-13 01:46:40.234117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-10-13 01:46:40.234203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-10-13 01:46:40.234228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-10-13 01:46:40.234387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-10-13 01:46:40.234418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-10-13 01:46:40.234558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-10-13 01:46:40.234584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 
00:35:55.019 [2024-10-13 01:46:40.234669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-10-13 01:46:40.234693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-10-13 01:46:40.234786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-10-13 01:46:40.234813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-10-13 01:46:40.234897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-10-13 01:46:40.234924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-10-13 01:46:40.235010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-10-13 01:46:40.235038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-10-13 01:46:40.235151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-10-13 01:46:40.235178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-10-13 01:46:40.235291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-10-13 01:46:40.235318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-10-13 01:46:40.235457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-10-13 01:46:40.235489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-10-13 01:46:40.235578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-10-13 01:46:40.235605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-10-13 01:46:40.235690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-10-13 01:46:40.235718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-10-13 01:46:40.235807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-10-13 01:46:40.235834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 
00:35:55.019 [2024-10-13 01:46:40.235930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-10-13 01:46:40.235956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-10-13 01:46:40.236041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-10-13 01:46:40.236068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-10-13 01:46:40.236186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-10-13 01:46:40.236212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-10-13 01:46:40.236336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-10-13 01:46:40.236365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-10-13 01:46:40.236461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-10-13 01:46:40.236492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-10-13 01:46:40.236583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-10-13 01:46:40.236609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-10-13 01:46:40.236696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-10-13 01:46:40.236721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-10-13 01:46:40.236854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-10-13 01:46:40.236880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-10-13 01:46:40.236994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-10-13 01:46:40.237020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-10-13 01:46:40.237139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-10-13 01:46:40.237165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 
00:35:55.019 [2024-10-13 01:46:40.237263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-10-13 01:46:40.237303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-10-13 01:46:40.237396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-10-13 01:46:40.237426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.020 [2024-10-13 01:46:40.237543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-10-13 01:46:40.237572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-10-13 01:46:40.237655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-10-13 01:46:40.237682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-10-13 01:46:40.237820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-10-13 01:46:40.237846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-10-13 01:46:40.237936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-10-13 01:46:40.237965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-10-13 01:46:40.238064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-10-13 01:46:40.238092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-10-13 01:46:40.238182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-10-13 01:46:40.238208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-10-13 01:46:40.238324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-10-13 01:46:40.238349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-10-13 01:46:40.238428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-10-13 01:46:40.238452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 
00:35:55.020 [2024-10-13 01:46:40.238555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-10-13 01:46:40.238581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-10-13 01:46:40.238671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-10-13 01:46:40.238696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-10-13 01:46:40.238814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-10-13 01:46:40.238839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-10-13 01:46:40.238924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-10-13 01:46:40.238949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-10-13 01:46:40.239063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-10-13 01:46:40.239088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-10-13 01:46:40.239202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-10-13 01:46:40.239230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-10-13 01:46:40.239336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-10-13 01:46:40.239375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-10-13 01:46:40.239499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-10-13 01:46:40.239528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-10-13 01:46:40.239619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-10-13 01:46:40.239647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-10-13 01:46:40.239767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-10-13 01:46:40.239794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 
00:35:55.020 [2024-10-13 01:46:40.239875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-10-13 01:46:40.239901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-10-13 01:46:40.240021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-10-13 01:46:40.240048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-10-13 01:46:40.240127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-10-13 01:46:40.240154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-10-13 01:46:40.240271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-10-13 01:46:40.240298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-10-13 01:46:40.240405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-10-13 01:46:40.240432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-10-13 01:46:40.240538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-10-13 01:46:40.240565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-10-13 01:46:40.240655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-10-13 01:46:40.240682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-10-13 01:46:40.240762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-10-13 01:46:40.240789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-10-13 01:46:40.240914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-10-13 01:46:40.240940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-10-13 01:46:40.241053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-10-13 01:46:40.241080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 
00:35:55.020 [2024-10-13 01:46:40.241215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-10-13 01:46:40.241243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-10-13 01:46:40.241362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-10-13 01:46:40.241390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-10-13 01:46:40.241498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-10-13 01:46:40.241526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-10-13 01:46:40.241620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-10-13 01:46:40.241648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-10-13 01:46:40.241754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-10-13 01:46:40.241780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-10-13 01:46:40.241908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-10-13 01:46:40.241935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-10-13 01:46:40.242031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-10-13 01:46:40.242058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-10-13 01:46:40.242175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-10-13 01:46:40.242203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-10-13 01:46:40.242321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-10-13 01:46:40.242348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-10-13 01:46:40.242462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-10-13 01:46:40.242494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 
00:35:55.020 [2024-10-13 01:46:40.242589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-10-13 01:46:40.242615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-10-13 01:46:40.242696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-10-13 01:46:40.242722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.021 [2024-10-13 01:46:40.242879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.021 [2024-10-13 01:46:40.242906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.021 qpair failed and we were unable to recover it. 00:35:55.021 [2024-10-13 01:46:40.243021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.021 [2024-10-13 01:46:40.243048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.021 qpair failed and we were unable to recover it. 00:35:55.021 [2024-10-13 01:46:40.243164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.021 [2024-10-13 01:46:40.243192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.021 qpair failed and we were unable to recover it. 00:35:55.021 [2024-10-13 01:46:40.243331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.021 [2024-10-13 01:46:40.243364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.021 qpair failed and we were unable to recover it. 00:35:55.021 [2024-10-13 01:46:40.243489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.021 [2024-10-13 01:46:40.243518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.021 qpair failed and we were unable to recover it. 00:35:55.021 [2024-10-13 01:46:40.243636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.021 [2024-10-13 01:46:40.243662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.021 qpair failed and we were unable to recover it. 00:35:55.021 [2024-10-13 01:46:40.243779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.021 [2024-10-13 01:46:40.243805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.021 qpair failed and we were unable to recover it. 00:35:55.021 [2024-10-13 01:46:40.243919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.021 [2024-10-13 01:46:40.243945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.021 qpair failed and we were unable to recover it. 
00:35:55.021 [2024-10-13 01:46:40.244040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.021 [2024-10-13 01:46:40.244065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.021 qpair failed and we were unable to recover it. 00:35:55.021 [2024-10-13 01:46:40.244157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.021 [2024-10-13 01:46:40.244184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.021 qpair failed and we were unable to recover it. 00:35:55.021 [2024-10-13 01:46:40.244302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.021 [2024-10-13 01:46:40.244329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.021 qpair failed and we were unable to recover it. 00:35:55.021 [2024-10-13 01:46:40.244439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.021 [2024-10-13 01:46:40.244481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.021 qpair failed and we were unable to recover it. 00:35:55.021 [2024-10-13 01:46:40.244570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.021 [2024-10-13 01:46:40.244597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.021 qpair failed and we were unable to recover it. 00:35:55.021 [2024-10-13 01:46:40.244704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.021 [2024-10-13 01:46:40.244744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.021 qpair failed and we were unable to recover it. 00:35:55.021 [2024-10-13 01:46:40.244854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.021 [2024-10-13 01:46:40.244883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.021 qpair failed and we were unable to recover it. 00:35:55.021 [2024-10-13 01:46:40.245014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.021 [2024-10-13 01:46:40.245041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.021 qpair failed and we were unable to recover it. 00:35:55.021 [2024-10-13 01:46:40.245121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.021 [2024-10-13 01:46:40.245147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.021 qpair failed and we were unable to recover it. 00:35:55.021 [2024-10-13 01:46:40.245238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.021 [2024-10-13 01:46:40.245265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.021 qpair failed and we were unable to recover it. 
00:35:55.021 [2024-10-13 01:46:40.245380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.021 [2024-10-13 01:46:40.245408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.021 qpair failed and we were unable to recover it. 00:35:55.021 [2024-10-13 01:46:40.245543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.021 [2024-10-13 01:46:40.245570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.021 qpair failed and we were unable to recover it. 00:35:55.021 [2024-10-13 01:46:40.245654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.021 [2024-10-13 01:46:40.245681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.021 qpair failed and we were unable to recover it. 00:35:55.021 [2024-10-13 01:46:40.245769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.021 [2024-10-13 01:46:40.245797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.021 qpair failed and we were unable to recover it. 00:35:55.021 [2024-10-13 01:46:40.245917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.021 [2024-10-13 01:46:40.245943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.021 qpair failed and we were unable to recover it. 00:35:55.021 [2024-10-13 01:46:40.246030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.021 [2024-10-13 01:46:40.246057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.021 qpair failed and we were unable to recover it. 00:35:55.021 [2024-10-13 01:46:40.246163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.021 [2024-10-13 01:46:40.246189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.021 qpair failed and we were unable to recover it. 00:35:55.021 [2024-10-13 01:46:40.246307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.021 [2024-10-13 01:46:40.246333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.021 qpair failed and we were unable to recover it. 00:35:55.021 [2024-10-13 01:46:40.246430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.021 [2024-10-13 01:46:40.246487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.021 qpair failed and we were unable to recover it. 00:35:55.021 [2024-10-13 01:46:40.246639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.021 [2024-10-13 01:46:40.246668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.021 qpair failed and we were unable to recover it. 
00:35:55.021 [2024-10-13 01:46:40.246785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.021 [2024-10-13 01:46:40.246812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.021 qpair failed and we were unable to recover it. 00:35:55.021 [2024-10-13 01:46:40.246931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.021 [2024-10-13 01:46:40.246958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.021 qpair failed and we were unable to recover it. 00:35:55.021 [2024-10-13 01:46:40.247073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.021 [2024-10-13 01:46:40.247101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.021 qpair failed and we were unable to recover it. 00:35:55.021 [2024-10-13 01:46:40.247222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.021 [2024-10-13 01:46:40.247250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.021 qpair failed and we were unable to recover it. 00:35:55.021 [2024-10-13 01:46:40.247368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.021 [2024-10-13 01:46:40.247395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.021 qpair failed and we were unable to recover it. 00:35:55.021 [2024-10-13 01:46:40.247523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.021 [2024-10-13 01:46:40.247551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.021 qpair failed and we were unable to recover it. 00:35:55.021 [2024-10-13 01:46:40.247671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.021 [2024-10-13 01:46:40.247698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.021 qpair failed and we were unable to recover it. 00:35:55.021 [2024-10-13 01:46:40.247835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.021 [2024-10-13 01:46:40.247861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.021 qpair failed and we were unable to recover it. 00:35:55.021 [2024-10-13 01:46:40.247949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.021 [2024-10-13 01:46:40.247976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.021 qpair failed and we were unable to recover it. 00:35:55.021 [2024-10-13 01:46:40.248062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.021 [2024-10-13 01:46:40.248088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.021 qpair failed and we were unable to recover it. 
00:35:55.021 [2024-10-13 01:46:40.248233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.021 [2024-10-13 01:46:40.248261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.021 qpair failed and we were unable to recover it. 00:35:55.021 [2024-10-13 01:46:40.248376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.021 [2024-10-13 01:46:40.248402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.021 qpair failed and we were unable to recover it. 00:35:55.021 [2024-10-13 01:46:40.248501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.022 [2024-10-13 01:46:40.248528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.022 qpair failed and we were unable to recover it. 00:35:55.022 [2024-10-13 01:46:40.248642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.022 [2024-10-13 01:46:40.248667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.022 qpair failed and we were unable to recover it. 00:35:55.022 [2024-10-13 01:46:40.248749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.022 [2024-10-13 01:46:40.248780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.022 qpair failed and we were unable to recover it. 00:35:55.022 [2024-10-13 01:46:40.248923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.022 [2024-10-13 01:46:40.248954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.022 qpair failed and we were unable to recover it. 00:35:55.022 [2024-10-13 01:46:40.249034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.022 [2024-10-13 01:46:40.249060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.022 qpair failed and we were unable to recover it. 00:35:55.022 [2024-10-13 01:46:40.249199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.022 [2024-10-13 01:46:40.249224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.022 qpair failed and we were unable to recover it. 00:35:55.022 [2024-10-13 01:46:40.249341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.022 [2024-10-13 01:46:40.249367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.022 qpair failed and we were unable to recover it. 00:35:55.022 [2024-10-13 01:46:40.249487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.022 [2024-10-13 01:46:40.249515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.022 qpair failed and we were unable to recover it. 
00:35:55.022 [2024-10-13 01:46:40.249644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.022 [2024-10-13 01:46:40.249683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.022 qpair failed and we were unable to recover it. 00:35:55.022 [2024-10-13 01:46:40.249807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.022 [2024-10-13 01:46:40.249835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.022 qpair failed and we were unable to recover it. 00:35:55.022 [2024-10-13 01:46:40.249957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.022 [2024-10-13 01:46:40.249986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.022 qpair failed and we were unable to recover it. 00:35:55.022 [2024-10-13 01:46:40.250099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.022 [2024-10-13 01:46:40.250126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.022 qpair failed and we were unable to recover it. 00:35:55.022 [2024-10-13 01:46:40.250238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.022 [2024-10-13 01:46:40.250265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.022 qpair failed and we were unable to recover it. 00:35:55.022 [2024-10-13 01:46:40.250377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.022 [2024-10-13 01:46:40.250404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.022 qpair failed and we were unable to recover it. 00:35:55.022 [2024-10-13 01:46:40.250532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.022 [2024-10-13 01:46:40.250560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.022 qpair failed and we were unable to recover it. 00:35:55.022 [2024-10-13 01:46:40.250704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.022 [2024-10-13 01:46:40.250730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.022 qpair failed and we were unable to recover it. 00:35:55.022 [2024-10-13 01:46:40.250825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.022 [2024-10-13 01:46:40.250852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.022 qpair failed and we were unable to recover it. 00:35:55.022 [2024-10-13 01:46:40.250944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.022 [2024-10-13 01:46:40.250970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.022 qpair failed and we were unable to recover it. 
00:35:55.022 [2024-10-13 01:46:40.251078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.022 [2024-10-13 01:46:40.251117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.022 qpair failed and we were unable to recover it. 00:35:55.022 [2024-10-13 01:46:40.251245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.022 [2024-10-13 01:46:40.251286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.022 qpair failed and we were unable to recover it. 00:35:55.022 [2024-10-13 01:46:40.251375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.022 [2024-10-13 01:46:40.251403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.022 qpair failed and we were unable to recover it. 00:35:55.022 [2024-10-13 01:46:40.251537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.022 [2024-10-13 01:46:40.251565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.022 qpair failed and we were unable to recover it. 00:35:55.022 [2024-10-13 01:46:40.251682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.022 [2024-10-13 01:46:40.251709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.022 qpair failed and we were unable to recover it. 00:35:55.022 [2024-10-13 01:46:40.251796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.022 [2024-10-13 01:46:40.251824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.022 qpair failed and we were unable to recover it. 00:35:55.022 [2024-10-13 01:46:40.251972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.022 [2024-10-13 01:46:40.251999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.022 qpair failed and we were unable to recover it. 00:35:55.022 [2024-10-13 01:46:40.252114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.022 [2024-10-13 01:46:40.252140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.022 qpair failed and we were unable to recover it. 00:35:55.022 [2024-10-13 01:46:40.252268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.022 [2024-10-13 01:46:40.252298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.022 qpair failed and we were unable to recover it. 00:35:55.022 [2024-10-13 01:46:40.252419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.022 [2024-10-13 01:46:40.252448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.022 qpair failed and we were unable to recover it. 
00:35:55.022 [2024-10-13 01:46:40.252562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.022 [2024-10-13 01:46:40.252589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.022 qpair failed and we were unable to recover it. 00:35:55.022 [2024-10-13 01:46:40.252708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.022 [2024-10-13 01:46:40.252734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.022 qpair failed and we were unable to recover it. 00:35:55.022 [2024-10-13 01:46:40.252883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.022 [2024-10-13 01:46:40.252911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.022 qpair failed and we were unable to recover it. 00:35:55.022 [2024-10-13 01:46:40.253025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.022 [2024-10-13 01:46:40.253051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.022 qpair failed and we were unable to recover it. 00:35:55.022 [2024-10-13 01:46:40.253174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.022 [2024-10-13 01:46:40.253200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.022 qpair failed and we were unable to recover it. 00:35:55.022 [2024-10-13 01:46:40.253288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.022 [2024-10-13 01:46:40.253315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.023 qpair failed and we were unable to recover it. 00:35:55.023 [2024-10-13 01:46:40.253404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.023 [2024-10-13 01:46:40.253431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.023 qpair failed and we were unable to recover it. 00:35:55.023 [2024-10-13 01:46:40.253538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.023 [2024-10-13 01:46:40.253567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.023 qpair failed and we were unable to recover it. 00:35:55.023 [2024-10-13 01:46:40.253681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.023 [2024-10-13 01:46:40.253707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.023 qpair failed and we were unable to recover it. 00:35:55.023 [2024-10-13 01:46:40.253806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.023 [2024-10-13 01:46:40.253832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.023 qpair failed and we were unable to recover it. 
00:35:55.023 [2024-10-13 01:46:40.253948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.023 [2024-10-13 01:46:40.253975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.023 qpair failed and we were unable to recover it. 00:35:55.023 [2024-10-13 01:46:40.254092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.023 [2024-10-13 01:46:40.254121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.023 qpair failed and we were unable to recover it. 00:35:55.023 [2024-10-13 01:46:40.254207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.023 [2024-10-13 01:46:40.254234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.023 qpair failed and we were unable to recover it. 00:35:55.023 [2024-10-13 01:46:40.254325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.023 [2024-10-13 01:46:40.254352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.023 qpair failed and we were unable to recover it. 00:35:55.023 [2024-10-13 01:46:40.254437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.023 [2024-10-13 01:46:40.254479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.023 qpair failed and we were unable to recover it. 00:35:55.023 [2024-10-13 01:46:40.254592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.023 [2024-10-13 01:46:40.254623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.023 qpair failed and we were unable to recover it. 00:35:55.023 [2024-10-13 01:46:40.254745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.023 [2024-10-13 01:46:40.254784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.023 qpair failed and we were unable to recover it. 00:35:55.023 [2024-10-13 01:46:40.254915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.023 [2024-10-13 01:46:40.254943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.023 qpair failed and we were unable to recover it. 00:35:55.023 [2024-10-13 01:46:40.255035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.023 [2024-10-13 01:46:40.255062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.023 qpair failed and we were unable to recover it. 00:35:55.023 [2024-10-13 01:46:40.255179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.023 [2024-10-13 01:46:40.255206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.023 qpair failed and we were unable to recover it. 
00:35:55.023 [2024-10-13 01:46:40.255295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.023 [2024-10-13 01:46:40.255321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.023 qpair failed and we were unable to recover it. 00:35:55.023 [2024-10-13 01:46:40.255401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.023 [2024-10-13 01:46:40.255429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.023 qpair failed and we were unable to recover it. 00:35:55.023 [2024-10-13 01:46:40.255582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.023 [2024-10-13 01:46:40.255611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.023 qpair failed and we were unable to recover it. 00:35:55.023 [2024-10-13 01:46:40.255703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.023 [2024-10-13 01:46:40.255730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.023 qpair failed and we were unable to recover it. 00:35:55.023 [2024-10-13 01:46:40.255819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.023 [2024-10-13 01:46:40.255846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.023 qpair failed and we were unable to recover it. 00:35:55.023 [2024-10-13 01:46:40.255956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.023 [2024-10-13 01:46:40.255983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.023 qpair failed and we were unable to recover it. 00:35:55.023 [2024-10-13 01:46:40.256127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.023 [2024-10-13 01:46:40.256155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.023 qpair failed and we were unable to recover it. 00:35:55.023 [2024-10-13 01:46:40.256277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.023 [2024-10-13 01:46:40.256316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.023 qpair failed and we were unable to recover it. 00:35:55.023 [2024-10-13 01:46:40.256420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.023 [2024-10-13 01:46:40.256448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.023 qpair failed and we were unable to recover it. 00:35:55.023 [2024-10-13 01:46:40.256563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.023 [2024-10-13 01:46:40.256590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.023 qpair failed and we were unable to recover it. 
00:35:55.023 [2024-10-13 01:46:40.256675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.023 [2024-10-13 01:46:40.256702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.023 qpair failed and we were unable to recover it. 00:35:55.023 [2024-10-13 01:46:40.256794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.023 [2024-10-13 01:46:40.256821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.023 qpair failed and we were unable to recover it. 00:35:55.023 [2024-10-13 01:46:40.256961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.023 [2024-10-13 01:46:40.256987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.023 qpair failed and we were unable to recover it. 00:35:55.023 [2024-10-13 01:46:40.257079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.023 [2024-10-13 01:46:40.257108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.023 qpair failed and we were unable to recover it. 00:35:55.023 [2024-10-13 01:46:40.257230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.023 [2024-10-13 01:46:40.257257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.023 qpair failed and we were unable to recover it. 00:35:55.023 [2024-10-13 01:46:40.257373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.023 [2024-10-13 01:46:40.257401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.023 qpair failed and we were unable to recover it. 00:35:55.023 [2024-10-13 01:46:40.257510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.023 [2024-10-13 01:46:40.257536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.023 qpair failed and we were unable to recover it. 00:35:55.023 [2024-10-13 01:46:40.257616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.023 [2024-10-13 01:46:40.257642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.023 qpair failed and we were unable to recover it. 00:35:55.023 [2024-10-13 01:46:40.257734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.023 [2024-10-13 01:46:40.257772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.023 qpair failed and we were unable to recover it. 00:35:55.023 [2024-10-13 01:46:40.257854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.023 [2024-10-13 01:46:40.257880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.023 qpair failed and we were unable to recover it. 
00:35:55.023 [2024-10-13 01:46:40.257987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.023 [2024-10-13 01:46:40.258027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.023 qpair failed and we were unable to recover it. 00:35:55.023 [2024-10-13 01:46:40.258111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.023 [2024-10-13 01:46:40.258139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.023 qpair failed and we were unable to recover it. 00:35:55.023 [2024-10-13 01:46:40.258253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.023 [2024-10-13 01:46:40.258281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.023 qpair failed and we were unable to recover it. 00:35:55.023 [2024-10-13 01:46:40.258365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.023 [2024-10-13 01:46:40.258391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.023 qpair failed and we were unable to recover it. 00:35:55.023 [2024-10-13 01:46:40.258507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.023 [2024-10-13 01:46:40.258534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.023 qpair failed and we were unable to recover it. 00:35:55.023 [2024-10-13 01:46:40.258630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.023 [2024-10-13 01:46:40.258657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.023 qpair failed and we were unable to recover it. 00:35:55.024 [2024-10-13 01:46:40.258745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.024 [2024-10-13 01:46:40.258775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.024 qpair failed and we were unable to recover it. 00:35:55.024 [2024-10-13 01:46:40.258859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.024 [2024-10-13 01:46:40.258885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.024 qpair failed and we were unable to recover it. 00:35:55.024 [2024-10-13 01:46:40.258975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.024 [2024-10-13 01:46:40.259004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.024 qpair failed and we were unable to recover it. 00:35:55.024 [2024-10-13 01:46:40.259121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.024 [2024-10-13 01:46:40.259148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.024 qpair failed and we were unable to recover it. 
00:35:55.024 [2024-10-13 01:46:40.259233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.024 [2024-10-13 01:46:40.259261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.024 [2024-10-13 01:46:40.259260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:55.024 qpair failed and we were unable to recover it. 00:35:55.024 [2024-10-13 01:46:40.259379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.024 [2024-10-13 01:46:40.259407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.024 qpair failed and we were unable to recover it. 00:35:55.024 [2024-10-13 01:46:40.259541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.024 [2024-10-13 01:46:40.259571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.024 qpair failed and we were unable to recover it. 00:35:55.024 [2024-10-13 01:46:40.259696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.024 [2024-10-13 01:46:40.259726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.024 qpair failed and we were unable to recover it. 00:35:55.024 [2024-10-13 01:46:40.259843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.024 [2024-10-13 01:46:40.259870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.024 qpair failed and we were unable to recover it. 00:35:55.024 [2024-10-13 01:46:40.259961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.024 [2024-10-13 01:46:40.259990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.024 qpair failed and we were unable to recover it. 00:35:55.024 [2024-10-13 01:46:40.260119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.024 [2024-10-13 01:46:40.260146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.024 qpair failed and we were unable to recover it. 00:35:55.024 [2024-10-13 01:46:40.260293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.024 [2024-10-13 01:46:40.260319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.024 qpair failed and we were unable to recover it. 00:35:55.024 [2024-10-13 01:46:40.260437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.024 [2024-10-13 01:46:40.260479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.024 qpair failed and we were unable to recover it. 
00:35:55.024 [2024-10-13 01:46:40.260576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.024 [2024-10-13 01:46:40.260603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.024 qpair failed and we were unable to recover it. 00:35:55.024 [2024-10-13 01:46:40.260744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.024 [2024-10-13 01:46:40.260776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.024 qpair failed and we were unable to recover it. 00:35:55.024 [2024-10-13 01:46:40.260865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.024 [2024-10-13 01:46:40.260892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.024 qpair failed and we were unable to recover it. 00:35:55.024 [2024-10-13 01:46:40.260985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.024 [2024-10-13 01:46:40.261013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.024 qpair failed and we were unable to recover it. 00:35:55.024 [2024-10-13 01:46:40.261101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.024 [2024-10-13 01:46:40.261129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.024 qpair failed and we were unable to recover it. 00:35:55.024 [2024-10-13 01:46:40.261240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.024 [2024-10-13 01:46:40.261279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.024 qpair failed and we were unable to recover it. 00:35:55.024 [2024-10-13 01:46:40.261378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.024 [2024-10-13 01:46:40.261408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.024 qpair failed and we were unable to recover it. 00:35:55.024 [2024-10-13 01:46:40.261515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.024 [2024-10-13 01:46:40.261555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.024 qpair failed and we were unable to recover it. 00:35:55.024 [2024-10-13 01:46:40.261664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.024 [2024-10-13 01:46:40.261691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.024 qpair failed and we were unable to recover it. 00:35:55.024 [2024-10-13 01:46:40.261779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.024 [2024-10-13 01:46:40.261811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.024 qpair failed and we were unable to recover it. 
00:35:55.024 [2024-10-13 01:46:40.261933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.024 [2024-10-13 01:46:40.261959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.024 qpair failed and we were unable to recover it. 00:35:55.024 [2024-10-13 01:46:40.262051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.024 [2024-10-13 01:46:40.262079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.024 qpair failed and we were unable to recover it. 00:35:55.024 [2024-10-13 01:46:40.262175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.024 [2024-10-13 01:46:40.262204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.024 qpair failed and we were unable to recover it. 00:35:55.024 [2024-10-13 01:46:40.262296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.024 [2024-10-13 01:46:40.262323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.024 qpair failed and we were unable to recover it. 00:35:55.024 [2024-10-13 01:46:40.262445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.024 [2024-10-13 01:46:40.262477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.024 qpair failed and we were unable to recover it. 00:35:55.024 [2024-10-13 01:46:40.262569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.024 [2024-10-13 01:46:40.262596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.024 qpair failed and we were unable to recover it. 00:35:55.024 [2024-10-13 01:46:40.262698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.024 [2024-10-13 01:46:40.262738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.024 qpair failed and we were unable to recover it. 00:35:55.024 [2024-10-13 01:46:40.262864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.024 [2024-10-13 01:46:40.262893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.024 qpair failed and we were unable to recover it. 00:35:55.024 [2024-10-13 01:46:40.263013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.024 [2024-10-13 01:46:40.263042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.024 qpair failed and we were unable to recover it. 00:35:55.024 [2024-10-13 01:46:40.263128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.024 [2024-10-13 01:46:40.263155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.024 qpair failed and we were unable to recover it. 
00:35:55.024 [2024-10-13 01:46:40.263280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.024 [2024-10-13 01:46:40.263306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.024 qpair failed and we were unable to recover it. 00:35:55.024 [2024-10-13 01:46:40.263452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.024 [2024-10-13 01:46:40.263485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.024 qpair failed and we were unable to recover it. 00:35:55.024 [2024-10-13 01:46:40.263574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.024 [2024-10-13 01:46:40.263603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.024 qpair failed and we were unable to recover it. 00:35:55.024 [2024-10-13 01:46:40.263711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.024 [2024-10-13 01:46:40.263740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.024 qpair failed and we were unable to recover it. 00:35:55.024 [2024-10-13 01:46:40.263839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.024 [2024-10-13 01:46:40.263865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.024 qpair failed and we were unable to recover it. 00:35:55.024 [2024-10-13 01:46:40.264003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.024 [2024-10-13 01:46:40.264030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.024 qpair failed and we were unable to recover it. 00:35:55.024 [2024-10-13 01:46:40.264138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.024 [2024-10-13 01:46:40.264164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.024 qpair failed and we were unable to recover it. 00:35:55.024 [2024-10-13 01:46:40.264259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.025 [2024-10-13 01:46:40.264285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.025 qpair failed and we were unable to recover it. 00:35:55.025 [2024-10-13 01:46:40.264403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.025 [2024-10-13 01:46:40.264430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.025 qpair failed and we were unable to recover it. 00:35:55.025 [2024-10-13 01:46:40.264529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.025 [2024-10-13 01:46:40.264558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.025 qpair failed and we were unable to recover it. 
00:35:55.025 [2024-10-13 01:46:40.264666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.025 [2024-10-13 01:46:40.264706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.025 qpair failed and we were unable to recover it. 00:35:55.025 [2024-10-13 01:46:40.264812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.025 [2024-10-13 01:46:40.264840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.025 qpair failed and we were unable to recover it. 00:35:55.025 [2024-10-13 01:46:40.264939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.025 [2024-10-13 01:46:40.264967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.025 qpair failed and we were unable to recover it. 00:35:55.025 [2024-10-13 01:46:40.265078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.025 [2024-10-13 01:46:40.265104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.025 qpair failed and we were unable to recover it. 00:35:55.025 [2024-10-13 01:46:40.265266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.025 [2024-10-13 01:46:40.265307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.025 qpair failed and we were unable to recover it. 00:35:55.025 [2024-10-13 01:46:40.265397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.025 [2024-10-13 01:46:40.265424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.025 qpair failed and we were unable to recover it. 00:35:55.025 [2024-10-13 01:46:40.265517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.025 [2024-10-13 01:46:40.265550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.025 qpair failed and we were unable to recover it. 00:35:55.025 [2024-10-13 01:46:40.265636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.025 [2024-10-13 01:46:40.265663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.025 qpair failed and we were unable to recover it. 00:35:55.025 [2024-10-13 01:46:40.265804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.025 [2024-10-13 01:46:40.265831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.025 qpair failed and we were unable to recover it. 00:35:55.025 [2024-10-13 01:46:40.265907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.025 [2024-10-13 01:46:40.265934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.025 qpair failed and we were unable to recover it. 
00:35:55.025 [2024-10-13 01:46:40.266078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.025 [2024-10-13 01:46:40.266106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.025 qpair failed and we were unable to recover it. 00:35:55.025 [2024-10-13 01:46:40.266224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.025 [2024-10-13 01:46:40.266251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.025 qpair failed and we were unable to recover it. 00:35:55.025 [2024-10-13 01:46:40.266362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.025 [2024-10-13 01:46:40.266390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.025 qpair failed and we were unable to recover it. 00:35:55.025 [2024-10-13 01:46:40.266489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.025 [2024-10-13 01:46:40.266517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.025 qpair failed and we were unable to recover it. 00:35:55.025 [2024-10-13 01:46:40.266606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.025 [2024-10-13 01:46:40.266633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.025 qpair failed and we were unable to recover it. 00:35:55.025 [2024-10-13 01:46:40.266716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.025 [2024-10-13 01:46:40.266743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.025 qpair failed and we were unable to recover it. 00:35:55.025 [2024-10-13 01:46:40.266883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.025 [2024-10-13 01:46:40.266910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.025 qpair failed and we were unable to recover it. 00:35:55.025 [2024-10-13 01:46:40.267105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.025 [2024-10-13 01:46:40.267131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.025 qpair failed and we were unable to recover it. 00:35:55.025 [2024-10-13 01:46:40.267218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.025 [2024-10-13 01:46:40.267245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.025 qpair failed and we were unable to recover it. 00:35:55.025 [2024-10-13 01:46:40.267372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.025 [2024-10-13 01:46:40.267398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.025 qpair failed and we were unable to recover it. 
00:35:55.025 [2024-10-13 01:46:40.267530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.025 [2024-10-13 01:46:40.267571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.025 qpair failed and we were unable to recover it. 00:35:55.025 [2024-10-13 01:46:40.267664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.025 [2024-10-13 01:46:40.267692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.025 qpair failed and we were unable to recover it. 00:35:55.025 [2024-10-13 01:46:40.267811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.025 [2024-10-13 01:46:40.267837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.025 qpair failed and we were unable to recover it. 00:35:55.025 [2024-10-13 01:46:40.267918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.025 [2024-10-13 01:46:40.267946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.025 qpair failed and we were unable to recover it. 00:35:55.025 [2024-10-13 01:46:40.268032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.025 [2024-10-13 01:46:40.268058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.025 qpair failed and we were unable to recover it. 00:35:55.025 [2024-10-13 01:46:40.268171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.025 [2024-10-13 01:46:40.268197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.025 qpair failed and we were unable to recover it. 00:35:55.025 [2024-10-13 01:46:40.268282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.025 [2024-10-13 01:46:40.268308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.025 qpair failed and we were unable to recover it. 00:35:55.025 [2024-10-13 01:46:40.268408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.025 [2024-10-13 01:46:40.268438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.025 qpair failed and we were unable to recover it. 00:35:55.025 [2024-10-13 01:46:40.268543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.025 [2024-10-13 01:46:40.268571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.025 qpair failed and we were unable to recover it. 00:35:55.025 [2024-10-13 01:46:40.268689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.025 [2024-10-13 01:46:40.268716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.025 qpair failed and we were unable to recover it. 
00:35:55.025 [2024-10-13 01:46:40.268846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.025 [2024-10-13 01:46:40.268873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.025 qpair failed and we were unable to recover it. 00:35:55.025 [2024-10-13 01:46:40.268998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.025 [2024-10-13 01:46:40.269026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.025 qpair failed and we were unable to recover it. 00:35:55.025 [2024-10-13 01:46:40.269122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.025 [2024-10-13 01:46:40.269152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.025 qpair failed and we were unable to recover it. 00:35:55.025 [2024-10-13 01:46:40.269282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.025 [2024-10-13 01:46:40.269309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.025 qpair failed and we were unable to recover it. 00:35:55.025 [2024-10-13 01:46:40.269393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.025 [2024-10-13 01:46:40.269419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.025 qpair failed and we were unable to recover it. 00:35:55.025 [2024-10-13 01:46:40.269521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.025 [2024-10-13 01:46:40.269548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.025 qpair failed and we were unable to recover it. 00:35:55.025 [2024-10-13 01:46:40.269630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.025 [2024-10-13 01:46:40.269656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.025 qpair failed and we were unable to recover it. 00:35:55.025 [2024-10-13 01:46:40.269743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.026 [2024-10-13 01:46:40.269769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.026 qpair failed and we were unable to recover it. 00:35:55.026 [2024-10-13 01:46:40.269903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.026 [2024-10-13 01:46:40.269930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.026 qpair failed and we were unable to recover it. 00:35:55.026 [2024-10-13 01:46:40.270014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.026 [2024-10-13 01:46:40.270040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.026 qpair failed and we were unable to recover it. 
00:35:55.026 [2024-10-13 01:46:40.270159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.026 [2024-10-13 01:46:40.270185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.026 qpair failed and we were unable to recover it. 00:35:55.026 [2024-10-13 01:46:40.270296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.026 [2024-10-13 01:46:40.270322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.026 qpair failed and we were unable to recover it. 00:35:55.026 [2024-10-13 01:46:40.270409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.026 [2024-10-13 01:46:40.270438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.026 qpair failed and we were unable to recover it. 00:35:55.026 [2024-10-13 01:46:40.270550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.026 [2024-10-13 01:46:40.270577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.026 qpair failed and we were unable to recover it. 00:35:55.026 [2024-10-13 01:46:40.270668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.026 [2024-10-13 01:46:40.270695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.026 qpair failed and we were unable to recover it. 00:35:55.026 [2024-10-13 01:46:40.270799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.026 [2024-10-13 01:46:40.270826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.026 qpair failed and we were unable to recover it. 00:35:55.026 [2024-10-13 01:46:40.270937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.026 [2024-10-13 01:46:40.270964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.026 qpair failed and we were unable to recover it. 00:35:55.026 [2024-10-13 01:46:40.271084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.026 [2024-10-13 01:46:40.271110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.026 qpair failed and we were unable to recover it. 00:35:55.026 [2024-10-13 01:46:40.271229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.026 [2024-10-13 01:46:40.271256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.026 qpair failed and we were unable to recover it. 00:35:55.026 [2024-10-13 01:46:40.271341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.026 [2024-10-13 01:46:40.271366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.026 qpair failed and we were unable to recover it. 
00:35:55.026 [2024-10-13 01:46:40.271484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.026 [2024-10-13 01:46:40.271511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.026 qpair failed and we were unable to recover it. 00:35:55.026 [2024-10-13 01:46:40.271599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.026 [2024-10-13 01:46:40.271626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.026 qpair failed and we were unable to recover it. 00:35:55.026 [2024-10-13 01:46:40.271735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.026 [2024-10-13 01:46:40.271775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.026 qpair failed and we were unable to recover it. 00:35:55.026 [2024-10-13 01:46:40.271935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.026 [2024-10-13 01:46:40.271975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.026 qpair failed and we were unable to recover it. 00:35:55.026 [2024-10-13 01:46:40.272097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.026 [2024-10-13 01:46:40.272124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.026 qpair failed and we were unable to recover it. 00:35:55.026 [2024-10-13 01:46:40.272239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.026 [2024-10-13 01:46:40.272264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.026 qpair failed and we were unable to recover it. 00:35:55.026 [2024-10-13 01:46:40.272374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.026 [2024-10-13 01:46:40.272400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.026 qpair failed and we were unable to recover it. 00:35:55.026 [2024-10-13 01:46:40.272531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.026 [2024-10-13 01:46:40.272557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.026 qpair failed and we were unable to recover it. 00:35:55.026 [2024-10-13 01:46:40.272671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.026 [2024-10-13 01:46:40.272697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.026 qpair failed and we were unable to recover it. 00:35:55.026 [2024-10-13 01:46:40.272817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.026 [2024-10-13 01:46:40.272843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.026 qpair failed and we were unable to recover it. 
00:35:55.026 [2024-10-13 01:46:40.272927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.026 [2024-10-13 01:46:40.272953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.026 qpair failed and we were unable to recover it. 00:35:55.026 [2024-10-13 01:46:40.273047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.026 [2024-10-13 01:46:40.273072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.026 qpair failed and we were unable to recover it. 00:35:55.026 [2024-10-13 01:46:40.273153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.026 [2024-10-13 01:46:40.273179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.026 qpair failed and we were unable to recover it. 00:35:55.026 [2024-10-13 01:46:40.273298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.026 [2024-10-13 01:46:40.273324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.026 qpair failed and we were unable to recover it. 00:35:55.026 [2024-10-13 01:46:40.273443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.026 [2024-10-13 01:46:40.273485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.026 qpair failed and we were unable to recover it. 00:35:55.026 [2024-10-13 01:46:40.273570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.026 [2024-10-13 01:46:40.273597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.026 qpair failed and we were unable to recover it. 00:35:55.026 [2024-10-13 01:46:40.273717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.026 [2024-10-13 01:46:40.273745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.026 qpair failed and we were unable to recover it. 00:35:55.026 [2024-10-13 01:46:40.273834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.026 [2024-10-13 01:46:40.273861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.026 qpair failed and we were unable to recover it. 00:35:55.026 [2024-10-13 01:46:40.273986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.026 [2024-10-13 01:46:40.274025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.026 qpair failed and we were unable to recover it. 00:35:55.026 [2024-10-13 01:46:40.274154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.026 [2024-10-13 01:46:40.274194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.026 qpair failed and we were unable to recover it. 
00:35:55.026 [2024-10-13 01:46:40.274287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.026 [2024-10-13 01:46:40.274314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.026 qpair failed and we were unable to recover it. 00:35:55.026 [2024-10-13 01:46:40.274401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.026 [2024-10-13 01:46:40.274428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.026 qpair failed and we were unable to recover it. 00:35:55.026 [2024-10-13 01:46:40.274518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.026 [2024-10-13 01:46:40.274545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.027 qpair failed and we were unable to recover it. 00:35:55.027 [2024-10-13 01:46:40.274629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.027 [2024-10-13 01:46:40.274655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.027 qpair failed and we were unable to recover it. 00:35:55.027 [2024-10-13 01:46:40.274739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.027 [2024-10-13 01:46:40.274765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.027 qpair failed and we were unable to recover it. 00:35:55.027 [2024-10-13 01:46:40.274919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.027 [2024-10-13 01:46:40.274945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.027 qpair failed and we were unable to recover it. 00:35:55.027 [2024-10-13 01:46:40.275034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.027 [2024-10-13 01:46:40.275060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.027 qpair failed and we were unable to recover it. 00:35:55.027 [2024-10-13 01:46:40.275143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.027 [2024-10-13 01:46:40.275169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.027 qpair failed and we were unable to recover it. 00:35:55.027 [2024-10-13 01:46:40.275285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.027 [2024-10-13 01:46:40.275315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.027 qpair failed and we were unable to recover it. 00:35:55.027 [2024-10-13 01:46:40.275432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.027 [2024-10-13 01:46:40.275459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.027 qpair failed and we were unable to recover it. 
00:35:55.027 [2024-10-13 01:46:40.275581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.027 [2024-10-13 01:46:40.275610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.027 qpair failed and we were unable to recover it. 00:35:55.027 [2024-10-13 01:46:40.275701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.027 [2024-10-13 01:46:40.275730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.027 qpair failed and we were unable to recover it. 00:35:55.027 [2024-10-13 01:46:40.275815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.027 [2024-10-13 01:46:40.275843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.027 qpair failed and we were unable to recover it. 00:35:55.027 [2024-10-13 01:46:40.275956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.027 [2024-10-13 01:46:40.275984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.027 qpair failed and we were unable to recover it. 00:35:55.027 [2024-10-13 01:46:40.276073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.027 [2024-10-13 01:46:40.276101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.027 qpair failed and we were unable to recover it. 00:35:55.027 [2024-10-13 01:46:40.276223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.027 [2024-10-13 01:46:40.276251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.027 qpair failed and we were unable to recover it. 00:35:55.027 [2024-10-13 01:46:40.276396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.027 [2024-10-13 01:46:40.276422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.027 qpair failed and we were unable to recover it. 00:35:55.027 [2024-10-13 01:46:40.276568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.027 [2024-10-13 01:46:40.276595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.027 qpair failed and we were unable to recover it. 00:35:55.027 [2024-10-13 01:46:40.276712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.027 [2024-10-13 01:46:40.276739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.027 qpair failed and we were unable to recover it. 00:35:55.027 [2024-10-13 01:46:40.276816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.027 [2024-10-13 01:46:40.276843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.027 qpair failed and we were unable to recover it. 
00:35:55.027 [2024-10-13 01:46:40.276960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.027 [2024-10-13 01:46:40.276989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420
00:35:55.027 qpair failed and we were unable to recover it.
[... the same three-line pattern (posix.c:1055:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats continuously through 2024-10-13 01:46:40.308675, cycling over tqpair=0x7f4834000b90, 0x7f4830000b90, 0x7f483c000b90, and 0x1d44b60, all attempting addr=10.0.0.2, port=4420 ...]
00:35:55.032 [2024-10-13 01:46:40.308647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.032 [2024-10-13 01:46:40.308675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420
00:35:55.032 qpair failed and we were unable to recover it.
00:35:55.032 [2024-10-13 01:46:40.308761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.032 [2024-10-13 01:46:40.308793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420
00:35:55.032 qpair failed and we were unable to recover it.
00:35:55.032 [2024-10-13 01:46:40.308872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.032 [2024-10-13 01:46:40.308905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420
[2024-10-13 01:46:40.308905] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:35:55.032 qpair failed and we were unable to recover it.
00:35:55.032 [2024-10-13 01:46:40.308936] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
[2024-10-13 01:46:40.308951] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
[2024-10-13 01:46:40.308963] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
[2024-10-13 01:46:40.308974] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
[2024-10-13 01:46:40.308984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.032 [2024-10-13 01:46:40.309011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420
00:35:55.032 qpair failed and we were unable to recover it.
00:35:55.032 [2024-10-13 01:46:40.309123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.032 [2024-10-13 01:46:40.309149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420
00:35:55.032 qpair failed and we were unable to recover it.
00:35:55.032 [2024-10-13 01:46:40.309257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.032 [2024-10-13 01:46:40.309296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420
00:35:55.032 qpair failed and we were unable to recover it.
00:35:55.032 [2024-10-13 01:46:40.309394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.032 [2024-10-13 01:46:40.309422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420
00:35:55.032 qpair failed and we were unable to recover it.
00:35:55.032 [2024-10-13 01:46:40.309521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.032 [2024-10-13 01:46:40.309549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420
00:35:55.032 qpair failed and we were unable to recover it.
00:35:55.032 [2024-10-13 01:46:40.309635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.032 [2024-10-13 01:46:40.309661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420
00:35:55.032 qpair failed and we were unable to recover it.
00:35:55.032 [2024-10-13 01:46:40.309755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.032 [2024-10-13 01:46:40.309783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420
00:35:55.033 qpair failed and we were unable to recover it.
00:35:55.033 [2024-10-13 01:46:40.309908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.033 [2024-10-13 01:46:40.309936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420
00:35:55.033 qpair failed and we were unable to recover it.
00:35:55.033 [2024-10-13 01:46:40.310049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.033 [2024-10-13 01:46:40.310076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420
00:35:55.033 qpair failed and we were unable to recover it.
00:35:55.033 [2024-10-13 01:46:40.310271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.033 [2024-10-13 01:46:40.310298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420
00:35:55.033 qpair failed and we were unable to recover it.
00:35:55.033 [2024-10-13 01:46:40.310414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.033 [2024-10-13 01:46:40.310443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420
00:35:55.033 qpair failed and we were unable to recover it.
00:35:55.033 [2024-10-13 01:46:40.310571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.033 [2024-10-13 01:46:40.310524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:35:55.033 [2024-10-13 01:46:40.310599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420
00:35:55.033 qpair failed and we were unable to recover it.
00:35:55.033 [2024-10-13 01:46:40.310582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:35:55.033 [2024-10-13 01:46:40.310696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.033 [2024-10-13 01:46:40.310606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:35:55.033 [2024-10-13 01:46:40.310609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:35:55.033 [2024-10-13 01:46:40.310724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420
00:35:55.033 qpair failed and we were unable to recover it.
00:35:55.033 [2024-10-13 01:46:40.310808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.033 [2024-10-13 01:46:40.310833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420
00:35:55.033 qpair failed and we were unable to recover it.
00:35:55.033 [2024-10-13 01:46:40.310926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.033 [2024-10-13 01:46:40.310953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420
00:35:55.033 qpair failed and we were unable to recover it.
00:35:55.033 [2024-10-13 01:46:40.311049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.033 [2024-10-13 01:46:40.311078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.033 qpair failed and we were unable to recover it. 00:35:55.033 [2024-10-13 01:46:40.311166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.033 [2024-10-13 01:46:40.311192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.033 qpair failed and we were unable to recover it. 00:35:55.033 [2024-10-13 01:46:40.311284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.033 [2024-10-13 01:46:40.311310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.033 qpair failed and we were unable to recover it. 00:35:55.033 [2024-10-13 01:46:40.311405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.033 [2024-10-13 01:46:40.311433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.033 qpair failed and we were unable to recover it. 00:35:55.033 [2024-10-13 01:46:40.311549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.033 [2024-10-13 01:46:40.311577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.033 qpair failed and we were unable to recover it. 00:35:55.033 [2024-10-13 01:46:40.311673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.033 [2024-10-13 01:46:40.311714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.033 qpair failed and we were unable to recover it. 00:35:55.033 [2024-10-13 01:46:40.311812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.033 [2024-10-13 01:46:40.311842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.033 qpair failed and we were unable to recover it. 00:35:55.033 [2024-10-13 01:46:40.311942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.033 [2024-10-13 01:46:40.311972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.033 qpair failed and we were unable to recover it. 00:35:55.033 [2024-10-13 01:46:40.312068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.033 [2024-10-13 01:46:40.312095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.033 qpair failed and we were unable to recover it. 00:35:55.033 [2024-10-13 01:46:40.312179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.033 [2024-10-13 01:46:40.312208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.033 qpair failed and we were unable to recover it. 
00:35:55.033 [2024-10-13 01:46:40.312313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.033 [2024-10-13 01:46:40.312354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.033 qpair failed and we were unable to recover it. 00:35:55.033 [2024-10-13 01:46:40.312443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.033 [2024-10-13 01:46:40.312481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.033 qpair failed and we were unable to recover it. 00:35:55.033 [2024-10-13 01:46:40.312576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.033 [2024-10-13 01:46:40.312604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.033 qpair failed and we were unable to recover it. 00:35:55.033 [2024-10-13 01:46:40.312691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.033 [2024-10-13 01:46:40.312718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.033 qpair failed and we were unable to recover it. 00:35:55.033 [2024-10-13 01:46:40.312813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.033 [2024-10-13 01:46:40.312841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.033 qpair failed and we were unable to recover it. 00:35:55.033 [2024-10-13 01:46:40.312940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.033 [2024-10-13 01:46:40.312968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.033 qpair failed and we were unable to recover it. 00:35:55.033 [2024-10-13 01:46:40.313086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.033 [2024-10-13 01:46:40.313114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.033 qpair failed and we were unable to recover it. 00:35:55.033 [2024-10-13 01:46:40.313200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.033 [2024-10-13 01:46:40.313227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.033 qpair failed and we were unable to recover it. 00:35:55.033 [2024-10-13 01:46:40.313312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.033 [2024-10-13 01:46:40.313338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.033 qpair failed and we were unable to recover it. 00:35:55.033 [2024-10-13 01:46:40.313447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.033 [2024-10-13 01:46:40.313483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.033 qpair failed and we were unable to recover it. 
00:35:55.033 [2024-10-13 01:46:40.313568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.033 [2024-10-13 01:46:40.313595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.033 qpair failed and we were unable to recover it. 00:35:55.033 [2024-10-13 01:46:40.313685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.033 [2024-10-13 01:46:40.313716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.033 qpair failed and we were unable to recover it. 00:35:55.033 [2024-10-13 01:46:40.313800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.033 [2024-10-13 01:46:40.313826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.033 qpair failed and we were unable to recover it. 00:35:55.033 [2024-10-13 01:46:40.313919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.033 [2024-10-13 01:46:40.313945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.033 qpair failed and we were unable to recover it. 00:35:55.033 [2024-10-13 01:46:40.314035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.033 [2024-10-13 01:46:40.314062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.033 qpair failed and we were unable to recover it. 00:35:55.033 [2024-10-13 01:46:40.314141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.033 [2024-10-13 01:46:40.314169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.033 qpair failed and we were unable to recover it. 00:35:55.033 [2024-10-13 01:46:40.314270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.033 [2024-10-13 01:46:40.314310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.033 qpair failed and we were unable to recover it. 00:35:55.033 [2024-10-13 01:46:40.314405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.033 [2024-10-13 01:46:40.314434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.033 qpair failed and we were unable to recover it. 00:35:55.033 [2024-10-13 01:46:40.314525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.033 [2024-10-13 01:46:40.314554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.033 qpair failed and we were unable to recover it. 00:35:55.033 [2024-10-13 01:46:40.314641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.033 [2024-10-13 01:46:40.314669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.033 qpair failed and we were unable to recover it. 
00:35:55.034 [2024-10-13 01:46:40.314758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.034 [2024-10-13 01:46:40.314787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.034 qpair failed and we were unable to recover it. 00:35:55.034 [2024-10-13 01:46:40.314870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.034 [2024-10-13 01:46:40.314898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.034 qpair failed and we were unable to recover it. 00:35:55.034 [2024-10-13 01:46:40.314994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.034 [2024-10-13 01:46:40.315021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.034 qpair failed and we were unable to recover it. 00:35:55.034 [2024-10-13 01:46:40.315104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.034 [2024-10-13 01:46:40.315131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.034 qpair failed and we were unable to recover it. 00:35:55.034 [2024-10-13 01:46:40.315225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.034 [2024-10-13 01:46:40.315253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.034 qpair failed and we were unable to recover it. 00:35:55.034 [2024-10-13 01:46:40.315379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.034 [2024-10-13 01:46:40.315406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.034 qpair failed and we were unable to recover it. 00:35:55.034 [2024-10-13 01:46:40.315489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.034 [2024-10-13 01:46:40.315518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.034 qpair failed and we were unable to recover it. 00:35:55.034 [2024-10-13 01:46:40.315609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.034 [2024-10-13 01:46:40.315637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.034 qpair failed and we were unable to recover it. 00:35:55.034 [2024-10-13 01:46:40.315732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.034 [2024-10-13 01:46:40.315759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.034 qpair failed and we were unable to recover it. 00:35:55.034 [2024-10-13 01:46:40.315847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.034 [2024-10-13 01:46:40.315877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.034 qpair failed and we were unable to recover it. 
00:35:55.034 [2024-10-13 01:46:40.315975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.034 [2024-10-13 01:46:40.316002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.034 qpair failed and we were unable to recover it. 00:35:55.034 [2024-10-13 01:46:40.316093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.034 [2024-10-13 01:46:40.316121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.034 qpair failed and we were unable to recover it. 00:35:55.034 [2024-10-13 01:46:40.316215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.034 [2024-10-13 01:46:40.316242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.034 qpair failed and we were unable to recover it. 00:35:55.034 [2024-10-13 01:46:40.316327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.034 [2024-10-13 01:46:40.316354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.034 qpair failed and we were unable to recover it. 00:35:55.034 [2024-10-13 01:46:40.316450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.034 [2024-10-13 01:46:40.316501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.034 qpair failed and we were unable to recover it. 00:35:55.034 [2024-10-13 01:46:40.316596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.034 [2024-10-13 01:46:40.316626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.034 qpair failed and we were unable to recover it. 00:35:55.034 [2024-10-13 01:46:40.316721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.034 [2024-10-13 01:46:40.316749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.034 qpair failed and we were unable to recover it. 00:35:55.034 [2024-10-13 01:46:40.316849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.034 [2024-10-13 01:46:40.316875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.034 qpair failed and we were unable to recover it. 00:35:55.034 [2024-10-13 01:46:40.316971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.034 [2024-10-13 01:46:40.316999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.034 qpair failed and we were unable to recover it. 00:35:55.034 [2024-10-13 01:46:40.317085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.034 [2024-10-13 01:46:40.317114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.034 qpair failed and we were unable to recover it. 
00:35:55.034 [2024-10-13 01:46:40.317203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.034 [2024-10-13 01:46:40.317231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.034 qpair failed and we were unable to recover it. 00:35:55.034 [2024-10-13 01:46:40.317318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.034 [2024-10-13 01:46:40.317345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.034 qpair failed and we were unable to recover it. 00:35:55.034 [2024-10-13 01:46:40.317427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.034 [2024-10-13 01:46:40.317466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.034 qpair failed and we were unable to recover it. 00:35:55.034 [2024-10-13 01:46:40.317572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.034 [2024-10-13 01:46:40.317600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.034 qpair failed and we were unable to recover it. 00:35:55.034 [2024-10-13 01:46:40.317686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.034 [2024-10-13 01:46:40.317714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.034 qpair failed and we were unable to recover it. 00:35:55.034 [2024-10-13 01:46:40.317806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.034 [2024-10-13 01:46:40.317833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.034 qpair failed and we were unable to recover it. 00:35:55.034 [2024-10-13 01:46:40.317920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.034 [2024-10-13 01:46:40.317948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.034 qpair failed and we were unable to recover it. 00:35:55.034 [2024-10-13 01:46:40.318036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.034 [2024-10-13 01:46:40.318063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.034 qpair failed and we were unable to recover it. 00:35:55.034 [2024-10-13 01:46:40.318156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.034 [2024-10-13 01:46:40.318183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.034 qpair failed and we were unable to recover it. 00:35:55.034 [2024-10-13 01:46:40.318296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.034 [2024-10-13 01:46:40.318323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.034 qpair failed and we were unable to recover it. 
00:35:55.034 [2024-10-13 01:46:40.318419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.034 [2024-10-13 01:46:40.318446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.034 qpair failed and we were unable to recover it. 00:35:55.034 [2024-10-13 01:46:40.318559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.034 [2024-10-13 01:46:40.318604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.034 qpair failed and we were unable to recover it. 00:35:55.034 [2024-10-13 01:46:40.318729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.034 [2024-10-13 01:46:40.318770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.034 qpair failed and we were unable to recover it. 00:35:55.034 [2024-10-13 01:46:40.318855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.034 [2024-10-13 01:46:40.318882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.034 qpair failed and we were unable to recover it. 00:35:55.034 [2024-10-13 01:46:40.319002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.034 [2024-10-13 01:46:40.319029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.034 qpair failed and we were unable to recover it. 00:35:55.034 [2024-10-13 01:46:40.319125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.034 [2024-10-13 01:46:40.319153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.034 qpair failed and we were unable to recover it. 00:35:55.034 [2024-10-13 01:46:40.319250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.034 [2024-10-13 01:46:40.319291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.034 qpair failed and we were unable to recover it. 00:35:55.034 [2024-10-13 01:46:40.319383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.034 [2024-10-13 01:46:40.319412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.034 qpair failed and we were unable to recover it. 00:35:55.034 [2024-10-13 01:46:40.319547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.034 [2024-10-13 01:46:40.319575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.034 qpair failed and we were unable to recover it. 00:35:55.034 [2024-10-13 01:46:40.319667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.034 [2024-10-13 01:46:40.319694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.034 qpair failed and we were unable to recover it. 
00:35:55.034 [2024-10-13 01:46:40.319817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.035 [2024-10-13 01:46:40.319845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.035 qpair failed and we were unable to recover it. 00:35:55.035 [2024-10-13 01:46:40.319926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.035 [2024-10-13 01:46:40.319953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.035 qpair failed and we were unable to recover it. 00:35:55.035 [2024-10-13 01:46:40.320036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.035 [2024-10-13 01:46:40.320064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.035 qpair failed and we were unable to recover it. 00:35:55.035 [2024-10-13 01:46:40.320187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.035 [2024-10-13 01:46:40.320216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.035 qpair failed and we were unable to recover it. 00:35:55.035 [2024-10-13 01:46:40.320317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.035 [2024-10-13 01:46:40.320348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.035 qpair failed and we were unable to recover it. 00:35:55.035 [2024-10-13 01:46:40.320442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.035 [2024-10-13 01:46:40.320479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.035 qpair failed and we were unable to recover it. 00:35:55.035 [2024-10-13 01:46:40.320574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.035 [2024-10-13 01:46:40.320602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.035 qpair failed and we were unable to recover it. 00:35:55.035 [2024-10-13 01:46:40.320688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.035 [2024-10-13 01:46:40.320714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.035 qpair failed and we were unable to recover it. 00:35:55.035 [2024-10-13 01:46:40.320855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.035 [2024-10-13 01:46:40.320885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.035 qpair failed and we were unable to recover it. 00:35:55.035 [2024-10-13 01:46:40.320970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.035 [2024-10-13 01:46:40.320997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.035 qpair failed and we were unable to recover it. 
00:35:55.035 [2024-10-13 01:46:40.321077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.035 [2024-10-13 01:46:40.321104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.035 qpair failed and we were unable to recover it. 00:35:55.035 [2024-10-13 01:46:40.321185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.035 [2024-10-13 01:46:40.321212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.035 qpair failed and we were unable to recover it. 00:35:55.035 [2024-10-13 01:46:40.321380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.035 [2024-10-13 01:46:40.321408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.035 qpair failed and we were unable to recover it. 00:35:55.035 [2024-10-13 01:46:40.321525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.035 [2024-10-13 01:46:40.321553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.035 qpair failed and we were unable to recover it. 00:35:55.035 [2024-10-13 01:46:40.321629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.035 [2024-10-13 01:46:40.321656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.035 qpair failed and we were unable to recover it. 00:35:55.035 [2024-10-13 01:46:40.321781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.035 [2024-10-13 01:46:40.321809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.035 qpair failed and we were unable to recover it. 00:35:55.035 [2024-10-13 01:46:40.321923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.035 [2024-10-13 01:46:40.321950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.035 qpair failed and we were unable to recover it. 00:35:55.035 [2024-10-13 01:46:40.322082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.035 [2024-10-13 01:46:40.322110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.035 qpair failed and we were unable to recover it. 00:35:55.035 [2024-10-13 01:46:40.322227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.035 [2024-10-13 01:46:40.322255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.035 qpair failed and we were unable to recover it. 00:35:55.035 [2024-10-13 01:46:40.322351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.035 [2024-10-13 01:46:40.322379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.035 qpair failed and we were unable to recover it. 
00:35:55.035 [2024-10-13 01:46:40.322459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.035 [2024-10-13 01:46:40.322493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.035 qpair failed and we were unable to recover it. 00:35:55.035 [2024-10-13 01:46:40.322581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.035 [2024-10-13 01:46:40.322607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.035 qpair failed and we were unable to recover it. 00:35:55.035 [2024-10-13 01:46:40.322705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.035 [2024-10-13 01:46:40.322732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.035 qpair failed and we were unable to recover it. 00:35:55.035 [2024-10-13 01:46:40.322822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.035 [2024-10-13 01:46:40.322849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.035 qpair failed and we were unable to recover it. 00:35:55.035 [2024-10-13 01:46:40.322932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.035 [2024-10-13 01:46:40.322960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.035 qpair failed and we were unable to recover it. 00:35:55.035 [2024-10-13 01:46:40.323084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.035 [2024-10-13 01:46:40.323111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.035 qpair failed and we were unable to recover it. 00:35:55.035 [2024-10-13 01:46:40.323228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.035 [2024-10-13 01:46:40.323254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.035 qpair failed and we were unable to recover it. 00:35:55.035 [2024-10-13 01:46:40.323343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.035 [2024-10-13 01:46:40.323370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.035 qpair failed and we were unable to recover it. 00:35:55.035 [2024-10-13 01:46:40.323481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.035 [2024-10-13 01:46:40.323522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.035 qpair failed and we were unable to recover it. 00:35:55.035 [2024-10-13 01:46:40.323614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.035 [2024-10-13 01:46:40.323642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.035 qpair failed and we were unable to recover it. 
00:35:55.035 [2024-10-13 01:46:40.323724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.035 [2024-10-13 01:46:40.323762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.035 qpair failed and we were unable to recover it. 00:35:55.035 [2024-10-13 01:46:40.323843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.035 [2024-10-13 01:46:40.323875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.035 qpair failed and we were unable to recover it. 00:35:55.035 [2024-10-13 01:46:40.323988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.035 [2024-10-13 01:46:40.324017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.035 qpair failed and we were unable to recover it. 00:35:55.035 [2024-10-13 01:46:40.324108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.035 [2024-10-13 01:46:40.324135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.035 qpair failed and we were unable to recover it. 00:35:55.035 [2024-10-13 01:46:40.324219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.035 [2024-10-13 01:46:40.324245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.035 qpair failed and we were unable to recover it. 00:35:55.035 [2024-10-13 01:46:40.324325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.035 [2024-10-13 01:46:40.324352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.036 qpair failed and we were unable to recover it. 00:35:55.036 [2024-10-13 01:46:40.324436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.036 [2024-10-13 01:46:40.324479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.036 qpair failed and we were unable to recover it. 00:35:55.036 [2024-10-13 01:46:40.324579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.036 [2024-10-13 01:46:40.324607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.036 qpair failed and we were unable to recover it. 00:35:55.036 [2024-10-13 01:46:40.324688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.036 [2024-10-13 01:46:40.324715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.036 qpair failed and we were unable to recover it. 00:35:55.036 [2024-10-13 01:46:40.324808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.036 [2024-10-13 01:46:40.324845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.036 qpair failed and we were unable to recover it. 
00:35:55.036 [2024-10-13 01:46:40.325038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.036 [2024-10-13 01:46:40.325066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.036 qpair failed and we were unable to recover it. 00:35:55.036 [2024-10-13 01:46:40.325184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.036 [2024-10-13 01:46:40.325211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.036 qpair failed and we were unable to recover it. 00:35:55.036 [2024-10-13 01:46:40.325308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.036 [2024-10-13 01:46:40.325336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.036 qpair failed and we were unable to recover it. 00:35:55.036 [2024-10-13 01:46:40.325448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.036 [2024-10-13 01:46:40.325499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.036 qpair failed and we were unable to recover it. 00:35:55.036 [2024-10-13 01:46:40.325589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.036 [2024-10-13 01:46:40.325617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.036 qpair failed and we were unable to recover it. 00:35:55.036 [2024-10-13 01:46:40.325708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.036 [2024-10-13 01:46:40.325735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.036 qpair failed and we were unable to recover it. 00:35:55.036 [2024-10-13 01:46:40.325818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.036 [2024-10-13 01:46:40.325845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.036 qpair failed and we were unable to recover it. 00:35:55.036 [2024-10-13 01:46:40.325923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.036 [2024-10-13 01:46:40.325950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.036 qpair failed and we were unable to recover it. 00:35:55.036 [2024-10-13 01:46:40.326030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.036 [2024-10-13 01:46:40.326057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.036 qpair failed and we were unable to recover it. 00:35:55.036 [2024-10-13 01:46:40.326190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.036 [2024-10-13 01:46:40.326230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.036 qpair failed and we were unable to recover it. 
00:35:55.036 [2024-10-13 01:46:40.326368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.036 [2024-10-13 01:46:40.326408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.036 qpair failed and we were unable to recover it. 00:35:55.036 [2024-10-13 01:46:40.326529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.036 [2024-10-13 01:46:40.326558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.036 qpair failed and we were unable to recover it. 00:35:55.036 [2024-10-13 01:46:40.326640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.036 [2024-10-13 01:46:40.326667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.036 qpair failed and we were unable to recover it. 00:35:55.036 [2024-10-13 01:46:40.326749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.036 [2024-10-13 01:46:40.326776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.036 qpair failed and we were unable to recover it. 00:35:55.036 [2024-10-13 01:46:40.326858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.036 [2024-10-13 01:46:40.326884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.036 qpair failed and we were unable to recover it. 00:35:55.036 [2024-10-13 01:46:40.326998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.036 [2024-10-13 01:46:40.327024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.036 qpair failed and we were unable to recover it. 00:35:55.036 [2024-10-13 01:46:40.327100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.036 [2024-10-13 01:46:40.327127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.036 qpair failed and we were unable to recover it. 00:35:55.036 [2024-10-13 01:46:40.327230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.036 [2024-10-13 01:46:40.327269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.036 qpair failed and we were unable to recover it. 00:35:55.036 [2024-10-13 01:46:40.327373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.036 [2024-10-13 01:46:40.327407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.036 qpair failed and we were unable to recover it. 00:35:55.036 [2024-10-13 01:46:40.327536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.036 [2024-10-13 01:46:40.327567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.036 qpair failed and we were unable to recover it. 
00:35:55.036 [2024-10-13 01:46:40.327653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.036 [2024-10-13 01:46:40.327680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.036 qpair failed and we were unable to recover it. 00:35:55.036 [2024-10-13 01:46:40.327760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.036 [2024-10-13 01:46:40.327788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.036 qpair failed and we were unable to recover it. 00:35:55.036 [2024-10-13 01:46:40.327873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.036 [2024-10-13 01:46:40.327899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.036 qpair failed and we were unable to recover it. 00:35:55.036 [2024-10-13 01:46:40.327984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.036 [2024-10-13 01:46:40.328012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.036 qpair failed and we were unable to recover it. 00:35:55.036 [2024-10-13 01:46:40.328094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.036 [2024-10-13 01:46:40.328121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.036 qpair failed and we were unable to recover it. 00:35:55.036 [2024-10-13 01:46:40.328214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.036 [2024-10-13 01:46:40.328241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.036 qpair failed and we were unable to recover it. 00:35:55.036 [2024-10-13 01:46:40.328349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.036 [2024-10-13 01:46:40.328375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.036 qpair failed and we were unable to recover it. 00:35:55.036 [2024-10-13 01:46:40.328486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.036 [2024-10-13 01:46:40.328517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.036 qpair failed and we were unable to recover it. 00:35:55.036 [2024-10-13 01:46:40.328614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.036 [2024-10-13 01:46:40.328643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.036 qpair failed and we were unable to recover it. 00:35:55.036 [2024-10-13 01:46:40.328728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.036 [2024-10-13 01:46:40.328756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.036 qpair failed and we were unable to recover it. 
00:35:55.036 [2024-10-13 01:46:40.328841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.036 [2024-10-13 01:46:40.328871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.036 qpair failed and we were unable to recover it. 00:35:55.036 [2024-10-13 01:46:40.328972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.036 [2024-10-13 01:46:40.328999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.036 qpair failed and we were unable to recover it. 00:35:55.036 [2024-10-13 01:46:40.329096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.036 [2024-10-13 01:46:40.329125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.036 qpair failed and we were unable to recover it. 00:35:55.036 [2024-10-13 01:46:40.329220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.036 [2024-10-13 01:46:40.329248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.036 qpair failed and we were unable to recover it. 00:35:55.036 [2024-10-13 01:46:40.329336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.036 [2024-10-13 01:46:40.329363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.036 qpair failed and we were unable to recover it. 00:35:55.036 [2024-10-13 01:46:40.329476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.036 [2024-10-13 01:46:40.329504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.037 qpair failed and we were unable to recover it. 00:35:55.037 [2024-10-13 01:46:40.329591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.037 [2024-10-13 01:46:40.329618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.037 qpair failed and we were unable to recover it. 00:35:55.037 [2024-10-13 01:46:40.329701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.037 [2024-10-13 01:46:40.329728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.037 qpair failed and we were unable to recover it. 00:35:55.037 [2024-10-13 01:46:40.329814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.037 [2024-10-13 01:46:40.329840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.037 qpair failed and we were unable to recover it. 00:35:55.037 [2024-10-13 01:46:40.329922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.037 [2024-10-13 01:46:40.329951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.037 qpair failed and we were unable to recover it. 
00:35:55.037 [2024-10-13 01:46:40.330039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.037 [2024-10-13 01:46:40.330066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.037 qpair failed and we were unable to recover it. 00:35:55.037 [2024-10-13 01:46:40.330185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.037 [2024-10-13 01:46:40.330212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.037 qpair failed and we were unable to recover it. 00:35:55.037 [2024-10-13 01:46:40.330336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.037 [2024-10-13 01:46:40.330363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.037 qpair failed and we were unable to recover it. 00:35:55.037 [2024-10-13 01:46:40.330453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.037 [2024-10-13 01:46:40.330502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.037 qpair failed and we were unable to recover it. 00:35:55.037 [2024-10-13 01:46:40.330624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.037 [2024-10-13 01:46:40.330653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.037 qpair failed and we were unable to recover it. 00:35:55.037 [2024-10-13 01:46:40.330743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.037 [2024-10-13 01:46:40.330770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.037 qpair failed and we were unable to recover it. 00:35:55.037 [2024-10-13 01:46:40.330856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.037 [2024-10-13 01:46:40.330882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.037 qpair failed and we were unable to recover it. 00:35:55.037 [2024-10-13 01:46:40.330963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.037 [2024-10-13 01:46:40.331000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.037 qpair failed and we were unable to recover it. 00:35:55.037 [2024-10-13 01:46:40.331099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.037 [2024-10-13 01:46:40.331127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.037 qpair failed and we were unable to recover it. 00:35:55.037 [2024-10-13 01:46:40.331208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.037 [2024-10-13 01:46:40.331235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.037 qpair failed and we were unable to recover it. 
00:35:55.037 [2024-10-13 01:46:40.331322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.037 [2024-10-13 01:46:40.331349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.037 qpair failed and we were unable to recover it. 00:35:55.037 [2024-10-13 01:46:40.331467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.037 [2024-10-13 01:46:40.331514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.037 qpair failed and we were unable to recover it. 00:35:55.037 [2024-10-13 01:46:40.331598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.037 [2024-10-13 01:46:40.331624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.037 qpair failed and we were unable to recover it. 00:35:55.037 [2024-10-13 01:46:40.331709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.037 [2024-10-13 01:46:40.331736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.037 qpair failed and we were unable to recover it. 00:35:55.037 [2024-10-13 01:46:40.331842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.037 [2024-10-13 01:46:40.331868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.037 qpair failed and we were unable to recover it. 00:35:55.037 [2024-10-13 01:46:40.331982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.037 [2024-10-13 01:46:40.332010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.037 qpair failed and we were unable to recover it. 00:35:55.037 [2024-10-13 01:46:40.332097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.037 [2024-10-13 01:46:40.332125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.037 qpair failed and we were unable to recover it. 00:35:55.037 [2024-10-13 01:46:40.332210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.037 [2024-10-13 01:46:40.332237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.037 qpair failed and we were unable to recover it. 00:35:55.037 [2024-10-13 01:46:40.332320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.037 [2024-10-13 01:46:40.332353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.037 qpair failed and we were unable to recover it. 00:35:55.037 [2024-10-13 01:46:40.332439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.037 [2024-10-13 01:46:40.332481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.037 qpair failed and we were unable to recover it. 
00:35:55.037 [2024-10-13 01:46:40.332567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.037 [2024-10-13 01:46:40.332593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.037 qpair failed and we were unable to recover it. 00:35:55.037 [2024-10-13 01:46:40.332687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.037 [2024-10-13 01:46:40.332713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.037 qpair failed and we were unable to recover it. 00:35:55.037 [2024-10-13 01:46:40.332831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.037 [2024-10-13 01:46:40.332858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.037 qpair failed and we were unable to recover it. 00:35:55.037 [2024-10-13 01:46:40.332959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.037 [2024-10-13 01:46:40.332995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.037 qpair failed and we were unable to recover it. 00:35:55.037 [2024-10-13 01:46:40.333079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.037 [2024-10-13 01:46:40.333106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.037 qpair failed and we were unable to recover it. 00:35:55.037 [2024-10-13 01:46:40.333229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.037 [2024-10-13 01:46:40.333256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.037 qpair failed and we were unable to recover it. 00:35:55.037 [2024-10-13 01:46:40.333344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.037 [2024-10-13 01:46:40.333372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.037 qpair failed and we were unable to recover it. 00:35:55.037 [2024-10-13 01:46:40.333455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.037 [2024-10-13 01:46:40.333498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.037 qpair failed and we were unable to recover it. 00:35:55.037 [2024-10-13 01:46:40.333587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.037 [2024-10-13 01:46:40.333616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.037 qpair failed and we were unable to recover it. 00:35:55.037 [2024-10-13 01:46:40.333702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.037 [2024-10-13 01:46:40.333729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.037 qpair failed and we were unable to recover it. 
00:35:55.037 [2024-10-13 01:46:40.333849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.037 [2024-10-13 01:46:40.333876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.037 qpair failed and we were unable to recover it. 00:35:55.037 [2024-10-13 01:46:40.333962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.037 [2024-10-13 01:46:40.333990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.037 qpair failed and we were unable to recover it. 00:35:55.037 [2024-10-13 01:46:40.334074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.037 [2024-10-13 01:46:40.334101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.037 qpair failed and we were unable to recover it. 00:35:55.037 [2024-10-13 01:46:40.334228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.037 [2024-10-13 01:46:40.334268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.037 qpair failed and we were unable to recover it. 00:35:55.037 [2024-10-13 01:46:40.334364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.037 [2024-10-13 01:46:40.334392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.037 qpair failed and we were unable to recover it. 00:35:55.037 [2024-10-13 01:46:40.334489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.037 [2024-10-13 01:46:40.334519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.037 qpair failed and we were unable to recover it. 00:35:55.037 [2024-10-13 01:46:40.334634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.038 [2024-10-13 01:46:40.334661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.038 qpair failed and we were unable to recover it. 00:35:55.038 [2024-10-13 01:46:40.334749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.038 [2024-10-13 01:46:40.334781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.038 qpair failed and we were unable to recover it. 00:35:55.038 [2024-10-13 01:46:40.334891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.038 [2024-10-13 01:46:40.334918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.038 qpair failed and we were unable to recover it. 00:35:55.038 [2024-10-13 01:46:40.335049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.038 [2024-10-13 01:46:40.335076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.038 qpair failed and we were unable to recover it. 
00:35:55.038 [2024-10-13 01:46:40.335188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.038 [2024-10-13 01:46:40.335215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.038 qpair failed and we were unable to recover it. 00:35:55.038 [2024-10-13 01:46:40.335320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.038 [2024-10-13 01:46:40.335360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.038 qpair failed and we were unable to recover it. 00:35:55.038 [2024-10-13 01:46:40.335458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.038 [2024-10-13 01:46:40.335493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.038 qpair failed and we were unable to recover it. 00:35:55.038 [2024-10-13 01:46:40.335587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.038 [2024-10-13 01:46:40.335615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.038 qpair failed and we were unable to recover it. 00:35:55.038 [2024-10-13 01:46:40.335694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.038 [2024-10-13 01:46:40.335721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.038 qpair failed and we were unable to recover it. 00:35:55.038 [2024-10-13 01:46:40.335817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.038 [2024-10-13 01:46:40.335844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.038 qpair failed and we were unable to recover it. 00:35:55.038 [2024-10-13 01:46:40.335984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.038 [2024-10-13 01:46:40.336011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.038 qpair failed and we were unable to recover it. 00:35:55.038 [2024-10-13 01:46:40.336107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.038 [2024-10-13 01:46:40.336135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.038 qpair failed and we were unable to recover it. 00:35:55.038 [2024-10-13 01:46:40.336241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.038 [2024-10-13 01:46:40.336281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.038 qpair failed and we were unable to recover it. 00:35:55.038 [2024-10-13 01:46:40.336381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.038 [2024-10-13 01:46:40.336410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.038 qpair failed and we were unable to recover it. 
00:35:55.038 [2024-10-13 01:46:40.336550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.038 [2024-10-13 01:46:40.336578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.038 qpair failed and we were unable to recover it. 00:35:55.038 [2024-10-13 01:46:40.336662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.038 [2024-10-13 01:46:40.336689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.038 qpair failed and we were unable to recover it. 00:35:55.038 [2024-10-13 01:46:40.336791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.038 [2024-10-13 01:46:40.336817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.038 qpair failed and we were unable to recover it. 00:35:55.038 [2024-10-13 01:46:40.336899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.038 [2024-10-13 01:46:40.336925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.038 qpair failed and we were unable to recover it. 00:35:55.038 [2024-10-13 01:46:40.337006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.038 [2024-10-13 01:46:40.337033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.038 qpair failed and we were unable to recover it. 00:35:55.038 [2024-10-13 01:46:40.337113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.038 [2024-10-13 01:46:40.337140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.038 qpair failed and we were unable to recover it. 00:35:55.038 [2024-10-13 01:46:40.337239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.038 [2024-10-13 01:46:40.337281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.038 qpair failed and we were unable to recover it. 00:35:55.038 [2024-10-13 01:46:40.337374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.038 [2024-10-13 01:46:40.337402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.038 qpair failed and we were unable to recover it. 00:35:55.038 [2024-10-13 01:46:40.337492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.038 [2024-10-13 01:46:40.337521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.038 qpair failed and we were unable to recover it. 00:35:55.038 [2024-10-13 01:46:40.337618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.038 [2024-10-13 01:46:40.337648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.038 qpair failed and we were unable to recover it. 
00:35:55.038 [2024-10-13 01:46:40.337736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.038 [2024-10-13 01:46:40.337774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.038 qpair failed and we were unable to recover it. 00:35:55.038 [2024-10-13 01:46:40.337888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.038 [2024-10-13 01:46:40.337915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.038 qpair failed and we were unable to recover it. 00:35:55.038 [2024-10-13 01:46:40.338045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.038 [2024-10-13 01:46:40.338072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.038 qpair failed and we were unable to recover it. 00:35:55.038 [2024-10-13 01:46:40.338160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.038 [2024-10-13 01:46:40.338188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.038 qpair failed and we were unable to recover it. 00:35:55.038 [2024-10-13 01:46:40.338290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.038 [2024-10-13 01:46:40.338330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.038 qpair failed and we were unable to recover it. 00:35:55.038 [2024-10-13 01:46:40.338483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.038 [2024-10-13 01:46:40.338512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.038 qpair failed and we were unable to recover it. 00:35:55.038 [2024-10-13 01:46:40.338598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.038 [2024-10-13 01:46:40.338626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.038 qpair failed and we were unable to recover it. 00:35:55.038 [2024-10-13 01:46:40.338750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.038 [2024-10-13 01:46:40.338780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.038 qpair failed and we were unable to recover it. 00:35:55.038 [2024-10-13 01:46:40.338869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.038 [2024-10-13 01:46:40.338896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.038 qpair failed and we were unable to recover it. 00:35:55.038 [2024-10-13 01:46:40.338985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.038 [2024-10-13 01:46:40.339011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.038 qpair failed and we were unable to recover it. 
00:35:55.038 [2024-10-13 01:46:40.339135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.038 [2024-10-13 01:46:40.339163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.038 qpair failed and we were unable to recover it. 00:35:55.038 [2024-10-13 01:46:40.339250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.038 [2024-10-13 01:46:40.339276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.038 qpair failed and we were unable to recover it. 00:35:55.038 [2024-10-13 01:46:40.339364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.038 [2024-10-13 01:46:40.339397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.038 qpair failed and we were unable to recover it. 00:35:55.038 [2024-10-13 01:46:40.339536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.038 [2024-10-13 01:46:40.339563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.038 qpair failed and we were unable to recover it. 00:35:55.038 [2024-10-13 01:46:40.339639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.038 [2024-10-13 01:46:40.339665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.038 qpair failed and we were unable to recover it. 00:35:55.038 [2024-10-13 01:46:40.339743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.038 [2024-10-13 01:46:40.339774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.038 qpair failed and we were unable to recover it. 00:35:55.038 [2024-10-13 01:46:40.339857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.038 [2024-10-13 01:46:40.339883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.039 qpair failed and we were unable to recover it. 00:35:55.039 [2024-10-13 01:46:40.339962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.039 [2024-10-13 01:46:40.339999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.039 qpair failed and we were unable to recover it. 00:35:55.039 [2024-10-13 01:46:40.340078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.039 [2024-10-13 01:46:40.340107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.039 qpair failed and we were unable to recover it. 00:35:55.039 [2024-10-13 01:46:40.340218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.039 [2024-10-13 01:46:40.340255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.039 qpair failed and we were unable to recover it. 
00:35:55.039 [2024-10-13 01:46:40.340358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.039 [2024-10-13 01:46:40.340388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.039 qpair failed and we were unable to recover it. 00:35:55.039 [2024-10-13 01:46:40.340483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.039 [2024-10-13 01:46:40.340512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.039 qpair failed and we were unable to recover it. 00:35:55.039 [2024-10-13 01:46:40.340604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.039 [2024-10-13 01:46:40.340630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.039 qpair failed and we were unable to recover it. 00:35:55.039 [2024-10-13 01:46:40.340749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.039 [2024-10-13 01:46:40.340778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.039 qpair failed and we were unable to recover it. 00:35:55.039 [2024-10-13 01:46:40.340873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.039 [2024-10-13 01:46:40.340901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.039 qpair failed and we were unable to recover it. 00:35:55.039 [2024-10-13 01:46:40.341037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.039 [2024-10-13 01:46:40.341071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.039 qpair failed and we were unable to recover it. 00:35:55.039 [2024-10-13 01:46:40.341168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.039 [2024-10-13 01:46:40.341196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.039 qpair failed and we were unable to recover it. 00:35:55.039 [2024-10-13 01:46:40.341285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.039 [2024-10-13 01:46:40.341314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.039 qpair failed and we were unable to recover it. 00:35:55.039 [2024-10-13 01:46:40.341431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.039 [2024-10-13 01:46:40.341490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.039 qpair failed and we were unable to recover it. 00:35:55.039 [2024-10-13 01:46:40.341583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.039 [2024-10-13 01:46:40.341610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.039 qpair failed and we were unable to recover it. 
00:35:55.039 [2024-10-13 01:46:40.341722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.039 [2024-10-13 01:46:40.341749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.039 qpair failed and we were unable to recover it. 00:35:55.039 [2024-10-13 01:46:40.341846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.039 [2024-10-13 01:46:40.341874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.039 qpair failed and we were unable to recover it. 00:35:55.039 [2024-10-13 01:46:40.341997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.039 [2024-10-13 01:46:40.342026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.039 qpair failed and we were unable to recover it. 00:35:55.039 [2024-10-13 01:46:40.342122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.039 [2024-10-13 01:46:40.342151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.039 qpair failed and we were unable to recover it. 00:35:55.039 [2024-10-13 01:46:40.342236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.039 [2024-10-13 01:46:40.342263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.039 qpair failed and we were unable to recover it. 00:35:55.039 [2024-10-13 01:46:40.342371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.039 [2024-10-13 01:46:40.342398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.039 qpair failed and we were unable to recover it. 00:35:55.039 [2024-10-13 01:46:40.342491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.039 [2024-10-13 01:46:40.342518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.039 qpair failed and we were unable to recover it. 00:35:55.039 [2024-10-13 01:46:40.342599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.039 [2024-10-13 01:46:40.342625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.039 qpair failed and we were unable to recover it. 00:35:55.039 [2024-10-13 01:46:40.342719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.039 [2024-10-13 01:46:40.342746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.039 qpair failed and we were unable to recover it. 00:35:55.039 [2024-10-13 01:46:40.342854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.039 [2024-10-13 01:46:40.342881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.039 qpair failed and we were unable to recover it. 
00:35:55.039 [2024-10-13 01:46:40.343011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.039 [2024-10-13 01:46:40.343051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.039 qpair failed and we were unable to recover it. 00:35:55.039 [2024-10-13 01:46:40.343145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.039 [2024-10-13 01:46:40.343174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.039 qpair failed and we were unable to recover it. 00:35:55.039 [2024-10-13 01:46:40.343264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.039 [2024-10-13 01:46:40.343291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.039 qpair failed and we were unable to recover it. 00:35:55.039 [2024-10-13 01:46:40.343372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.039 [2024-10-13 01:46:40.343400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.039 qpair failed and we were unable to recover it. 00:35:55.039 [2024-10-13 01:46:40.343507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.039 [2024-10-13 01:46:40.343535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.039 qpair failed and we were unable to recover it. 00:35:55.039 [2024-10-13 01:46:40.343618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.039 [2024-10-13 01:46:40.343645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.039 qpair failed and we were unable to recover it. 00:35:55.039 [2024-10-13 01:46:40.343755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.039 [2024-10-13 01:46:40.343785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.039 qpair failed and we were unable to recover it. 00:35:55.039 [2024-10-13 01:46:40.343881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.039 [2024-10-13 01:46:40.343908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.039 qpair failed and we were unable to recover it. 00:35:55.039 [2024-10-13 01:46:40.344048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.039 [2024-10-13 01:46:40.344089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.039 qpair failed and we were unable to recover it. 00:35:55.039 [2024-10-13 01:46:40.344189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.039 [2024-10-13 01:46:40.344218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.039 qpair failed and we were unable to recover it. 
00:35:55.039 [2024-10-13 01:46:40.344306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.039 [2024-10-13 01:46:40.344333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.039 qpair failed and we were unable to recover it. 00:35:55.039 [2024-10-13 01:46:40.344444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.040 [2024-10-13 01:46:40.344482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.040 qpair failed and we were unable to recover it. 00:35:55.040 [2024-10-13 01:46:40.344604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.040 [2024-10-13 01:46:40.344634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.040 qpair failed and we were unable to recover it. 00:35:55.040 [2024-10-13 01:46:40.344747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.040 [2024-10-13 01:46:40.344797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.040 qpair failed and we were unable to recover it. 00:35:55.040 [2024-10-13 01:46:40.344892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.040 [2024-10-13 01:46:40.344921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.040 qpair failed and we were unable to recover it. 00:35:55.040 [2024-10-13 01:46:40.345041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.040 [2024-10-13 01:46:40.345068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.040 qpair failed and we were unable to recover it. 00:35:55.040 [2024-10-13 01:46:40.345161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.040 [2024-10-13 01:46:40.345188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.040 qpair failed and we were unable to recover it. 00:35:55.040 [2024-10-13 01:46:40.345270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.040 [2024-10-13 01:46:40.345297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.040 qpair failed and we were unable to recover it. 00:35:55.040 [2024-10-13 01:46:40.345377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.040 [2024-10-13 01:46:40.345405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.040 qpair failed and we were unable to recover it. 00:35:55.040 [2024-10-13 01:46:40.345495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.040 [2024-10-13 01:46:40.345523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.040 qpair failed and we were unable to recover it. 
00:35:55.040 [2024-10-13 01:46:40.345615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.040 [2024-10-13 01:46:40.345643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.040 qpair failed and we were unable to recover it. 00:35:55.040 [2024-10-13 01:46:40.345729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.040 [2024-10-13 01:46:40.345765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.040 qpair failed and we were unable to recover it. 00:35:55.040 [2024-10-13 01:46:40.345878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.040 [2024-10-13 01:46:40.345906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.040 qpair failed and we were unable to recover it. 00:35:55.040 [2024-10-13 01:46:40.345989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.040 [2024-10-13 01:46:40.346017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.040 qpair failed and we were unable to recover it. 00:35:55.040 [2024-10-13 01:46:40.346103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.040 [2024-10-13 01:46:40.346130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.040 qpair failed and we were unable to recover it. 00:35:55.040 [2024-10-13 01:46:40.346228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.040 [2024-10-13 01:46:40.346263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.040 qpair failed and we were unable to recover it. 00:35:55.040 [2024-10-13 01:46:40.346358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.040 [2024-10-13 01:46:40.346386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.040 qpair failed and we were unable to recover it. 00:35:55.040 [2024-10-13 01:46:40.346482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.040 [2024-10-13 01:46:40.346511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.040 qpair failed and we were unable to recover it. 00:35:55.040 [2024-10-13 01:46:40.346619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.040 [2024-10-13 01:46:40.346646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.040 qpair failed and we were unable to recover it. 00:35:55.040 [2024-10-13 01:46:40.346765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.040 [2024-10-13 01:46:40.346791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.040 qpair failed and we were unable to recover it. 
00:35:55.040 [2024-10-13 01:46:40.346873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.040 [2024-10-13 01:46:40.346900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.040 qpair failed and we were unable to recover it. 00:35:55.040 [2024-10-13 01:46:40.346992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.040 [2024-10-13 01:46:40.347020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.040 qpair failed and we were unable to recover it. 00:35:55.040 [2024-10-13 01:46:40.347130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.040 [2024-10-13 01:46:40.347169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.040 qpair failed and we were unable to recover it. 00:35:55.040 [2024-10-13 01:46:40.347265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.040 [2024-10-13 01:46:40.347292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.040 qpair failed and we were unable to recover it. 00:35:55.040 [2024-10-13 01:46:40.347391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.040 [2024-10-13 01:46:40.347418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.040 qpair failed and we were unable to recover it. 00:35:55.040 [2024-10-13 01:46:40.347499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.040 [2024-10-13 01:46:40.347527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.040 qpair failed and we were unable to recover it. 00:35:55.040 [2024-10-13 01:46:40.347616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.040 [2024-10-13 01:46:40.347643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.040 qpair failed and we were unable to recover it. 00:35:55.040 [2024-10-13 01:46:40.347730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.040 [2024-10-13 01:46:40.347766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.040 qpair failed and we were unable to recover it. 00:35:55.040 [2024-10-13 01:46:40.347851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.040 [2024-10-13 01:46:40.347878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.040 qpair failed and we were unable to recover it. 00:35:55.040 [2024-10-13 01:46:40.347970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.040 [2024-10-13 01:46:40.347997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.040 qpair failed and we were unable to recover it. 
00:35:55.040 [2024-10-13 01:46:40.348111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.040 [2024-10-13 01:46:40.348138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.040 qpair failed and we were unable to recover it. 00:35:55.040 [2024-10-13 01:46:40.348217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.040 [2024-10-13 01:46:40.348242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.040 qpair failed and we were unable to recover it. 00:35:55.040 [2024-10-13 01:46:40.348334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.040 [2024-10-13 01:46:40.348375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.040 qpair failed and we were unable to recover it. 00:35:55.040 [2024-10-13 01:46:40.348464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.040 [2024-10-13 01:46:40.348500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.040 qpair failed and we were unable to recover it. 00:35:55.040 [2024-10-13 01:46:40.348600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.040 [2024-10-13 01:46:40.348627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.040 qpair failed and we were unable to recover it. 00:35:55.040 [2024-10-13 01:46:40.348710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.040 [2024-10-13 01:46:40.348736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.040 qpair failed and we were unable to recover it. 00:35:55.040 [2024-10-13 01:46:40.348841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.040 [2024-10-13 01:46:40.348867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.040 qpair failed and we were unable to recover it. 00:35:55.040 [2024-10-13 01:46:40.348946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.040 [2024-10-13 01:46:40.348972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.040 qpair failed and we were unable to recover it. 00:35:55.040 [2024-10-13 01:46:40.349053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.040 [2024-10-13 01:46:40.349080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.040 qpair failed and we were unable to recover it. 00:35:55.040 [2024-10-13 01:46:40.349157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.040 [2024-10-13 01:46:40.349183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.040 qpair failed and we were unable to recover it. 
00:35:55.040 [2024-10-13 01:46:40.349296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.040 [2024-10-13 01:46:40.349323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.040 qpair failed and we were unable to recover it. 00:35:55.040 [2024-10-13 01:46:40.349444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.041 [2024-10-13 01:46:40.349488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.041 qpair failed and we were unable to recover it. 00:35:55.041 [2024-10-13 01:46:40.349628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.041 [2024-10-13 01:46:40.349675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.041 qpair failed and we were unable to recover it. 00:35:55.041 [2024-10-13 01:46:40.349767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.041 [2024-10-13 01:46:40.349796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.041 qpair failed and we were unable to recover it. 00:35:55.041 [2024-10-13 01:46:40.349885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.041 [2024-10-13 01:46:40.349912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.041 qpair failed and we were unable to recover it. 00:35:55.041 [2024-10-13 01:46:40.349994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.041 [2024-10-13 01:46:40.350021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.041 qpair failed and we were unable to recover it. 00:35:55.041 [2024-10-13 01:46:40.350140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.041 [2024-10-13 01:46:40.350169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.041 qpair failed and we were unable to recover it. 00:35:55.041 [2024-10-13 01:46:40.350255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.041 [2024-10-13 01:46:40.350283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.041 qpair failed and we were unable to recover it. 00:35:55.041 [2024-10-13 01:46:40.350382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.041 [2024-10-13 01:46:40.350411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.041 qpair failed and we were unable to recover it. 00:35:55.041 [2024-10-13 01:46:40.350528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.041 [2024-10-13 01:46:40.350555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.041 qpair failed and we were unable to recover it. 
00:35:55.041 [2024-10-13 01:46:40.350640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.041 [2024-10-13 01:46:40.350667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.041 qpair failed and we were unable to recover it. 00:35:55.041 [2024-10-13 01:46:40.350745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.041 [2024-10-13 01:46:40.350772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.041 qpair failed and we were unable to recover it. 00:35:55.041 [2024-10-13 01:46:40.350881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.041 [2024-10-13 01:46:40.350907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.041 qpair failed and we were unable to recover it. 00:35:55.041 [2024-10-13 01:46:40.351011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.041 [2024-10-13 01:46:40.351037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.041 qpair failed and we were unable to recover it. 00:35:55.041 [2024-10-13 01:46:40.351118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.041 [2024-10-13 01:46:40.351144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.041 qpair failed and we were unable to recover it. 00:35:55.041 [2024-10-13 01:46:40.351260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.041 [2024-10-13 01:46:40.351287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.041 qpair failed and we were unable to recover it. 00:35:55.041 [2024-10-13 01:46:40.351374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.041 [2024-10-13 01:46:40.351400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.041 qpair failed and we were unable to recover it. 00:35:55.041 [2024-10-13 01:46:40.351494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.041 [2024-10-13 01:46:40.351522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.041 qpair failed and we were unable to recover it. 00:35:55.041 [2024-10-13 01:46:40.351605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.041 [2024-10-13 01:46:40.351631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.041 qpair failed and we were unable to recover it. 00:35:55.041 [2024-10-13 01:46:40.351716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.041 [2024-10-13 01:46:40.351742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.041 qpair failed and we were unable to recover it. 
00:35:55.041 [2024-10-13 01:46:40.351831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.041 [2024-10-13 01:46:40.351857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.041 qpair failed and we were unable to recover it. 00:35:55.041 [2024-10-13 01:46:40.351938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.041 [2024-10-13 01:46:40.351965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.041 qpair failed and we were unable to recover it. 00:35:55.041 [2024-10-13 01:46:40.352041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.041 [2024-10-13 01:46:40.352067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.041 qpair failed and we were unable to recover it. 00:35:55.041 [2024-10-13 01:46:40.352155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.041 [2024-10-13 01:46:40.352183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.041 qpair failed and we were unable to recover it. 00:35:55.041 [2024-10-13 01:46:40.352277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.041 [2024-10-13 01:46:40.352306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.041 qpair failed and we were unable to recover it. 00:35:55.041 [2024-10-13 01:46:40.352437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.041 [2024-10-13 01:46:40.352481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.041 qpair failed and we were unable to recover it. 00:35:55.041 [2024-10-13 01:46:40.352571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.041 [2024-10-13 01:46:40.352598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.041 qpair failed and we were unable to recover it. 00:35:55.041 [2024-10-13 01:46:40.352678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.041 [2024-10-13 01:46:40.352705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.041 qpair failed and we were unable to recover it. 00:35:55.041 [2024-10-13 01:46:40.352805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.041 [2024-10-13 01:46:40.352833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.041 qpair failed and we were unable to recover it. 00:35:55.041 [2024-10-13 01:46:40.352933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.041 [2024-10-13 01:46:40.352960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.041 qpair failed and we were unable to recover it. 
00:35:55.041 [2024-10-13 01:46:40.353096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.041 [2024-10-13 01:46:40.353124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.041 qpair failed and we were unable to recover it. 00:35:55.041 [2024-10-13 01:46:40.353212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.041 [2024-10-13 01:46:40.353239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.041 qpair failed and we were unable to recover it. 00:35:55.041 [2024-10-13 01:46:40.353352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.041 [2024-10-13 01:46:40.353379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.041 qpair failed and we were unable to recover it. 00:35:55.041 [2024-10-13 01:46:40.353455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.041 [2024-10-13 01:46:40.353497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.041 qpair failed and we were unable to recover it. 00:35:55.041 [2024-10-13 01:46:40.353589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.041 [2024-10-13 01:46:40.353616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.041 qpair failed and we were unable to recover it. 00:35:55.041 [2024-10-13 01:46:40.353705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.041 [2024-10-13 01:46:40.353731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.041 qpair failed and we were unable to recover it. 00:35:55.041 [2024-10-13 01:46:40.353820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.041 [2024-10-13 01:46:40.353848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.041 qpair failed and we were unable to recover it. 00:35:55.041 [2024-10-13 01:46:40.353948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.041 [2024-10-13 01:46:40.353988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.041 qpair failed and we were unable to recover it. 00:35:55.041 [2024-10-13 01:46:40.354084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.041 [2024-10-13 01:46:40.354114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.041 qpair failed and we were unable to recover it. 00:35:55.041 [2024-10-13 01:46:40.354203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.041 [2024-10-13 01:46:40.354231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.041 qpair failed and we were unable to recover it. 
00:35:55.041 [2024-10-13 01:46:40.354348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.042 [2024-10-13 01:46:40.354376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.042 qpair failed and we were unable to recover it. 00:35:55.042 [2024-10-13 01:46:40.354462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.042 [2024-10-13 01:46:40.354501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.042 qpair failed and we were unable to recover it. 00:35:55.042 [2024-10-13 01:46:40.354637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.042 [2024-10-13 01:46:40.354665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.042 qpair failed and we were unable to recover it. 00:35:55.042 [2024-10-13 01:46:40.354787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.042 [2024-10-13 01:46:40.354814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.042 qpair failed and we were unable to recover it. 00:35:55.042 [2024-10-13 01:46:40.354905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.042 [2024-10-13 01:46:40.354931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.042 qpair failed and we were unable to recover it. 00:35:55.042 [2024-10-13 01:46:40.355043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.042 [2024-10-13 01:46:40.355077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.042 qpair failed and we were unable to recover it. 00:35:55.042 [2024-10-13 01:46:40.355162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.042 [2024-10-13 01:46:40.355190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.042 qpair failed and we were unable to recover it. 00:35:55.042 [2024-10-13 01:46:40.355283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.042 [2024-10-13 01:46:40.355312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.042 qpair failed and we were unable to recover it. 00:35:55.042 [2024-10-13 01:46:40.355412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.042 [2024-10-13 01:46:40.355440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.042 qpair failed and we were unable to recover it. 00:35:55.042 [2024-10-13 01:46:40.355549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.042 [2024-10-13 01:46:40.355577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.042 qpair failed and we were unable to recover it. 
00:35:55.042 [2024-10-13 01:46:40.355661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.042 [2024-10-13 01:46:40.355688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.042 qpair failed and we were unable to recover it. 00:35:55.042 [2024-10-13 01:46:40.355780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.042 [2024-10-13 01:46:40.355806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.042 qpair failed and we were unable to recover it. 00:35:55.042 [2024-10-13 01:46:40.355886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.042 [2024-10-13 01:46:40.355912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.042 qpair failed and we were unable to recover it. 00:35:55.042 [2024-10-13 01:46:40.355993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.042 [2024-10-13 01:46:40.356021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.042 qpair failed and we were unable to recover it. 00:35:55.042 [2024-10-13 01:46:40.356101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.042 [2024-10-13 01:46:40.356130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.042 qpair failed and we were unable to recover it. 00:35:55.042 [2024-10-13 01:46:40.356221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.042 [2024-10-13 01:46:40.356248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.042 qpair failed and we were unable to recover it. 00:35:55.042 [2024-10-13 01:46:40.356363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.042 [2024-10-13 01:46:40.356391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.042 qpair failed and we were unable to recover it. 00:35:55.042 [2024-10-13 01:46:40.356489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.042 [2024-10-13 01:46:40.356516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.042 qpair failed and we were unable to recover it. 00:35:55.042 [2024-10-13 01:46:40.356593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.042 [2024-10-13 01:46:40.356619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.042 qpair failed and we were unable to recover it. 00:35:55.042 [2024-10-13 01:46:40.356706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.042 [2024-10-13 01:46:40.356733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.042 qpair failed and we were unable to recover it. 
00:35:55.042 [2024-10-13 01:46:40.356825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.042 [2024-10-13 01:46:40.356859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.042 qpair failed and we were unable to recover it. 00:35:55.042 [2024-10-13 01:46:40.356940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.042 [2024-10-13 01:46:40.356969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.042 qpair failed and we were unable to recover it. 00:35:55.042 [2024-10-13 01:46:40.357055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.042 [2024-10-13 01:46:40.357083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.042 qpair failed and we were unable to recover it. 00:35:55.042 [2024-10-13 01:46:40.357181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.042 [2024-10-13 01:46:40.357209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.042 qpair failed and we were unable to recover it. 00:35:55.042 [2024-10-13 01:46:40.357406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.042 [2024-10-13 01:46:40.357432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.042 qpair failed and we were unable to recover it. 00:35:55.042 [2024-10-13 01:46:40.357576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.042 [2024-10-13 01:46:40.357605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.042 qpair failed and we were unable to recover it. 00:35:55.042 [2024-10-13 01:46:40.357716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.042 [2024-10-13 01:46:40.357743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.042 qpair failed and we were unable to recover it. 00:35:55.042 [2024-10-13 01:46:40.357840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.042 [2024-10-13 01:46:40.357868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.042 qpair failed and we were unable to recover it. 00:35:55.042 [2024-10-13 01:46:40.357958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.042 [2024-10-13 01:46:40.357986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.042 qpair failed and we were unable to recover it. 00:35:55.042 [2024-10-13 01:46:40.358093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.042 [2024-10-13 01:46:40.358125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.042 qpair failed and we were unable to recover it. 
00:35:55.042 [2024-10-13 01:46:40.358223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.042 [2024-10-13 01:46:40.358252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.042 qpair failed and we were unable to recover it. 00:35:55.042 [2024-10-13 01:46:40.358339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.042 [2024-10-13 01:46:40.358368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.042 qpair failed and we were unable to recover it. 00:35:55.042 [2024-10-13 01:46:40.358454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.042 [2024-10-13 01:46:40.358501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.042 qpair failed and we were unable to recover it. 00:35:55.042 [2024-10-13 01:46:40.358596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.042 [2024-10-13 01:46:40.358623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.042 qpair failed and we were unable to recover it. 00:35:55.042 [2024-10-13 01:46:40.358713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.042 [2024-10-13 01:46:40.358741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.042 qpair failed and we were unable to recover it. 00:35:55.042 [2024-10-13 01:46:40.358849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.042 [2024-10-13 01:46:40.358877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.042 qpair failed and we were unable to recover it. 00:35:55.042 [2024-10-13 01:46:40.358969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.042 [2024-10-13 01:46:40.358996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.042 qpair failed and we were unable to recover it. 00:35:55.042 [2024-10-13 01:46:40.359121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.042 [2024-10-13 01:46:40.359149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.042 qpair failed and we were unable to recover it. 00:35:55.042 [2024-10-13 01:46:40.359227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.042 [2024-10-13 01:46:40.359254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.042 qpair failed and we were unable to recover it. 00:35:55.042 [2024-10-13 01:46:40.359337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.042 [2024-10-13 01:46:40.359365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.042 qpair failed and we were unable to recover it. 
00:35:55.042 [2024-10-13 01:46:40.359485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.043 [2024-10-13 01:46:40.359513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.043 qpair failed and we were unable to recover it. 00:35:55.043 [2024-10-13 01:46:40.359601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.043 [2024-10-13 01:46:40.359629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.043 qpair failed and we were unable to recover it. 00:35:55.043 [2024-10-13 01:46:40.359710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.043 [2024-10-13 01:46:40.359737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.043 qpair failed and we were unable to recover it. 00:35:55.043 [2024-10-13 01:46:40.359898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.043 [2024-10-13 01:46:40.359926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.043 qpair failed and we were unable to recover it. 00:35:55.043 [2024-10-13 01:46:40.360011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.043 [2024-10-13 01:46:40.360039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.043 qpair failed and we were unable to recover it. 00:35:55.043 [2024-10-13 01:46:40.360141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.043 [2024-10-13 01:46:40.360170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.043 qpair failed and we were unable to recover it. 00:35:55.043 [2024-10-13 01:46:40.360256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.043 [2024-10-13 01:46:40.360284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.043 qpair failed and we were unable to recover it. 00:35:55.043 [2024-10-13 01:46:40.360365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.043 [2024-10-13 01:46:40.360391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.043 qpair failed and we were unable to recover it. 00:35:55.043 [2024-10-13 01:46:40.360490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.043 [2024-10-13 01:46:40.360525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.043 qpair failed and we were unable to recover it. 00:35:55.043 [2024-10-13 01:46:40.360628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.043 [2024-10-13 01:46:40.360668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.043 qpair failed and we were unable to recover it. 
00:35:55.043 [2024-10-13 01:46:40.360773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.043 [2024-10-13 01:46:40.360802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.043 qpair failed and we were unable to recover it. 00:35:55.043 [2024-10-13 01:46:40.360896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.043 [2024-10-13 01:46:40.360923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.043 qpair failed and we were unable to recover it. 00:35:55.043 [2024-10-13 01:46:40.361038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.043 [2024-10-13 01:46:40.361066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.043 qpair failed and we were unable to recover it. 00:35:55.043 [2024-10-13 01:46:40.361160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.043 [2024-10-13 01:46:40.361200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.043 qpair failed and we were unable to recover it. 00:35:55.043 [2024-10-13 01:46:40.361295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.043 [2024-10-13 01:46:40.361323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.043 qpair failed and we were unable to recover it. 00:35:55.043 [2024-10-13 01:46:40.361421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.043 [2024-10-13 01:46:40.361449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.043 qpair failed and we were unable to recover it. 00:35:55.043 [2024-10-13 01:46:40.361554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.043 [2024-10-13 01:46:40.361582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.043 qpair failed and we were unable to recover it. 00:35:55.043 [2024-10-13 01:46:40.361672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.043 [2024-10-13 01:46:40.361701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.043 qpair failed and we were unable to recover it. 00:35:55.043 [2024-10-13 01:46:40.361838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.043 [2024-10-13 01:46:40.361864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.043 qpair failed and we were unable to recover it. 00:35:55.043 [2024-10-13 01:46:40.361951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.043 [2024-10-13 01:46:40.361978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.043 qpair failed and we were unable to recover it. 
00:35:55.043 [2024-10-13 01:46:40.362085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.043 [2024-10-13 01:46:40.362111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.043 qpair failed and we were unable to recover it. 00:35:55.043 [2024-10-13 01:46:40.362201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.043 [2024-10-13 01:46:40.362228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.043 qpair failed and we were unable to recover it. 00:35:55.043 [2024-10-13 01:46:40.362315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.043 [2024-10-13 01:46:40.362344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.043 qpair failed and we were unable to recover it. 00:35:55.043 [2024-10-13 01:46:40.362435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.043 [2024-10-13 01:46:40.362485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.043 qpair failed and we were unable to recover it. 00:35:55.043 [2024-10-13 01:46:40.362572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.043 [2024-10-13 01:46:40.362609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.043 qpair failed and we were unable to recover it. 00:35:55.043 [2024-10-13 01:46:40.362721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.043 [2024-10-13 01:46:40.362758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.043 qpair failed and we were unable to recover it. 00:35:55.043 [2024-10-13 01:46:40.362953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.043 [2024-10-13 01:46:40.362990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.043 qpair failed and we were unable to recover it. 00:35:55.043 [2024-10-13 01:46:40.363099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.043 [2024-10-13 01:46:40.363132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.043 qpair failed and we were unable to recover it. 00:35:55.043 [2024-10-13 01:46:40.363217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.043 [2024-10-13 01:46:40.363245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.043 qpair failed and we were unable to recover it. 00:35:55.043 [2024-10-13 01:46:40.363347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.043 [2024-10-13 01:46:40.363386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.043 qpair failed and we were unable to recover it. 
00:35:55.043 [2024-10-13 01:46:40.363491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.043 [2024-10-13 01:46:40.363521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.043 qpair failed and we were unable to recover it. 00:35:55.043 [2024-10-13 01:46:40.363630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.043 [2024-10-13 01:46:40.363657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.043 qpair failed and we were unable to recover it. 00:35:55.043 [2024-10-13 01:46:40.363754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.043 [2024-10-13 01:46:40.363786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.043 qpair failed and we were unable to recover it. 00:35:55.043 [2024-10-13 01:46:40.363871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.043 [2024-10-13 01:46:40.363898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.043 qpair failed and we were unable to recover it. 00:35:55.043 [2024-10-13 01:46:40.364017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.043 [2024-10-13 01:46:40.364044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.043 qpair failed and we were unable to recover it. 00:35:55.043 [2024-10-13 01:46:40.364127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.043 [2024-10-13 01:46:40.364156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.043 qpair failed and we were unable to recover it. 00:35:55.043 [2024-10-13 01:46:40.364237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.044 [2024-10-13 01:46:40.364263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.044 qpair failed and we were unable to recover it. 00:35:55.044 [2024-10-13 01:46:40.364384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.044 [2024-10-13 01:46:40.364410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.044 qpair failed and we were unable to recover it. 00:35:55.044 [2024-10-13 01:46:40.364507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.044 [2024-10-13 01:46:40.364535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.044 qpair failed and we were unable to recover it. 00:35:55.044 [2024-10-13 01:46:40.364619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.044 [2024-10-13 01:46:40.364645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.044 qpair failed and we were unable to recover it. 
00:35:55.044 [2024-10-13 01:46:40.364734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.044 [2024-10-13 01:46:40.364760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.044 qpair failed and we were unable to recover it. 00:35:55.044 [2024-10-13 01:46:40.364872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.044 [2024-10-13 01:46:40.364899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.044 qpair failed and we were unable to recover it. 00:35:55.044 [2024-10-13 01:46:40.364976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.044 [2024-10-13 01:46:40.365004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.044 qpair failed and we were unable to recover it. 00:35:55.044 [2024-10-13 01:46:40.365106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.044 [2024-10-13 01:46:40.365139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.044 qpair failed and we were unable to recover it. 00:35:55.044 [2024-10-13 01:46:40.365228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.044 [2024-10-13 01:46:40.365255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.044 qpair failed and we were unable to recover it. 00:35:55.044 [2024-10-13 01:46:40.365368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.044 [2024-10-13 01:46:40.365396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.044 qpair failed and we were unable to recover it. 00:35:55.044 [2024-10-13 01:46:40.365491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.044 [2024-10-13 01:46:40.365519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.044 qpair failed and we were unable to recover it. 00:35:55.044 [2024-10-13 01:46:40.365609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.044 [2024-10-13 01:46:40.365637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.044 qpair failed and we were unable to recover it. 00:35:55.044 [2024-10-13 01:46:40.365719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.044 [2024-10-13 01:46:40.365746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.044 qpair failed and we were unable to recover it. 00:35:55.044 [2024-10-13 01:46:40.365868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.044 [2024-10-13 01:46:40.365896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.044 qpair failed and we were unable to recover it. 
00:35:55.044 [2024-10-13 01:46:40.365987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.044 [2024-10-13 01:46:40.366016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.044 qpair failed and we were unable to recover it. 00:35:55.044 [2024-10-13 01:46:40.366102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.044 [2024-10-13 01:46:40.366129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.044 qpair failed and we were unable to recover it. 00:35:55.044 [2024-10-13 01:46:40.366206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.044 [2024-10-13 01:46:40.366233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.044 qpair failed and we were unable to recover it. 00:35:55.044 [2024-10-13 01:46:40.366317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.044 [2024-10-13 01:46:40.366342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.044 qpair failed and we were unable to recover it. 00:35:55.044 [2024-10-13 01:46:40.366432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.044 [2024-10-13 01:46:40.366468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.044 qpair failed and we were unable to recover it. 00:35:55.044 [2024-10-13 01:46:40.366555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.044 [2024-10-13 01:46:40.366581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.044 qpair failed and we were unable to recover it. 00:35:55.044 [2024-10-13 01:46:40.366664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.044 [2024-10-13 01:46:40.366699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.044 qpair failed and we were unable to recover it. 00:35:55.044 [2024-10-13 01:46:40.366794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.044 [2024-10-13 01:46:40.366820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.044 qpair failed and we were unable to recover it. 00:35:55.044 [2024-10-13 01:46:40.366911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.044 [2024-10-13 01:46:40.366936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.044 qpair failed and we were unable to recover it. 00:35:55.044 [2024-10-13 01:46:40.367019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.044 [2024-10-13 01:46:40.367047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.044 qpair failed and we were unable to recover it. 
00:35:55.044 [2024-10-13 01:46:40.367130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.044 [2024-10-13 01:46:40.367157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.044 qpair failed and we were unable to recover it. 00:35:55.044 [2024-10-13 01:46:40.367248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.044 [2024-10-13 01:46:40.367286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.044 qpair failed and we were unable to recover it. 00:35:55.044 [2024-10-13 01:46:40.367418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.044 [2024-10-13 01:46:40.367446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.044 qpair failed and we were unable to recover it. 00:35:55.044 [2024-10-13 01:46:40.367554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.044 [2024-10-13 01:46:40.367581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.044 qpair failed and we were unable to recover it. 00:35:55.044 [2024-10-13 01:46:40.367727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.044 [2024-10-13 01:46:40.367754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.044 qpair failed and we were unable to recover it. 00:35:55.044 [2024-10-13 01:46:40.367839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.044 [2024-10-13 01:46:40.367865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.044 qpair failed and we were unable to recover it. 00:35:55.044 [2024-10-13 01:46:40.367958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.044 [2024-10-13 01:46:40.367984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.044 qpair failed and we were unable to recover it. 00:35:55.044 [2024-10-13 01:46:40.368076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.044 [2024-10-13 01:46:40.368103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.044 qpair failed and we were unable to recover it. 00:35:55.044 [2024-10-13 01:46:40.368190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.044 [2024-10-13 01:46:40.368215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.044 qpair failed and we were unable to recover it. 00:35:55.044 [2024-10-13 01:46:40.368291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.044 [2024-10-13 01:46:40.368316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.044 qpair failed and we were unable to recover it. 
00:35:55.044 [2024-10-13 01:46:40.368404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.044 [2024-10-13 01:46:40.368432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.044 qpair failed and we were unable to recover it. 00:35:55.044 [2024-10-13 01:46:40.368553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.044 [2024-10-13 01:46:40.368584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.044 qpair failed and we were unable to recover it. 00:35:55.044 [2024-10-13 01:46:40.368672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.044 [2024-10-13 01:46:40.368700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.044 qpair failed and we were unable to recover it. 00:35:55.044 [2024-10-13 01:46:40.368826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.044 [2024-10-13 01:46:40.368863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.044 qpair failed and we were unable to recover it. 00:35:55.044 [2024-10-13 01:46:40.368979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.044 [2024-10-13 01:46:40.369015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.044 qpair failed and we were unable to recover it. 00:35:55.044 [2024-10-13 01:46:40.369154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.044 [2024-10-13 01:46:40.369191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.044 qpair failed and we were unable to recover it. 00:35:55.044 [2024-10-13 01:46:40.369293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.044 [2024-10-13 01:46:40.369322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.045 qpair failed and we were unable to recover it. 00:35:55.045 [2024-10-13 01:46:40.369406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.045 [2024-10-13 01:46:40.369432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.045 qpair failed and we were unable to recover it. 00:35:55.045 [2024-10-13 01:46:40.369561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.045 [2024-10-13 01:46:40.369601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.045 qpair failed and we were unable to recover it. 00:35:55.045 [2024-10-13 01:46:40.369702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.045 [2024-10-13 01:46:40.369740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.045 qpair failed and we were unable to recover it. 
00:35:55.045 [2024-10-13 01:46:40.369868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.045 [2024-10-13 01:46:40.369909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.045 qpair failed and we were unable to recover it. 00:35:55.045 [2024-10-13 01:46:40.370009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.045 [2024-10-13 01:46:40.370038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.045 qpair failed and we were unable to recover it. 00:35:55.045 [2024-10-13 01:46:40.370129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.045 [2024-10-13 01:46:40.370155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.045 qpair failed and we were unable to recover it. 00:35:55.045 [2024-10-13 01:46:40.370263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.045 [2024-10-13 01:46:40.370305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.045 qpair failed and we were unable to recover it. 00:35:55.045 [2024-10-13 01:46:40.370408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.045 [2024-10-13 01:46:40.370436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.045 qpair failed and we were unable to recover it. 00:35:55.045 [2024-10-13 01:46:40.370534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.045 [2024-10-13 01:46:40.370565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.045 qpair failed and we were unable to recover it. 00:35:55.045 [2024-10-13 01:46:40.370679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.045 [2024-10-13 01:46:40.370707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.045 qpair failed and we were unable to recover it. 00:35:55.045 [2024-10-13 01:46:40.370801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.045 [2024-10-13 01:46:40.370841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.045 qpair failed and we were unable to recover it. 00:35:55.045 [2024-10-13 01:46:40.370937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.045 [2024-10-13 01:46:40.370963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.045 qpair failed and we were unable to recover it. 00:35:55.045 [2024-10-13 01:46:40.371047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.045 [2024-10-13 01:46:40.371076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.045 qpair failed and we were unable to recover it. 
00:35:55.045 [2024-10-13 01:46:40.371160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.045 [2024-10-13 01:46:40.371187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.045 qpair failed and we were unable to recover it. 00:35:55.045 [2024-10-13 01:46:40.371268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.045 [2024-10-13 01:46:40.371294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.045 qpair failed and we were unable to recover it. 00:35:55.045 [2024-10-13 01:46:40.371374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.045 [2024-10-13 01:46:40.371401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.045 qpair failed and we were unable to recover it. 00:35:55.045 [2024-10-13 01:46:40.371497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.045 [2024-10-13 01:46:40.371525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.045 qpair failed and we were unable to recover it. 00:35:55.045 [2024-10-13 01:46:40.371621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.045 [2024-10-13 01:46:40.371647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.045 qpair failed and we were unable to recover it. 00:35:55.045 [2024-10-13 01:46:40.371727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.045 [2024-10-13 01:46:40.371765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.045 qpair failed and we were unable to recover it. 00:35:55.045 [2024-10-13 01:46:40.371852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.045 [2024-10-13 01:46:40.371878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.045 qpair failed and we were unable to recover it. 00:35:55.045 [2024-10-13 01:46:40.372019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.045 [2024-10-13 01:46:40.372047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.045 qpair failed and we were unable to recover it. 00:35:55.045 [2024-10-13 01:46:40.372143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.045 [2024-10-13 01:46:40.372169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.045 qpair failed and we were unable to recover it. 00:35:55.045 [2024-10-13 01:46:40.372258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.045 [2024-10-13 01:46:40.372285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.045 qpair failed and we were unable to recover it. 
00:35:55.045 [2024-10-13 01:46:40.372373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.045 [2024-10-13 01:46:40.372400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.045 qpair failed and we were unable to recover it. 00:35:55.045 [2024-10-13 01:46:40.372497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.045 [2024-10-13 01:46:40.372525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.045 qpair failed and we were unable to recover it. 00:35:55.045 [2024-10-13 01:46:40.372604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.045 [2024-10-13 01:46:40.372629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.045 qpair failed and we were unable to recover it. 00:35:55.045 [2024-10-13 01:46:40.372717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.045 [2024-10-13 01:46:40.372742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.045 qpair failed and we were unable to recover it. 00:35:55.045 [2024-10-13 01:46:40.372840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.045 [2024-10-13 01:46:40.372865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.045 qpair failed and we were unable to recover it. 00:35:55.045 [2024-10-13 01:46:40.372944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.045 [2024-10-13 01:46:40.372970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.045 qpair failed and we were unable to recover it. 00:35:55.045 [2024-10-13 01:46:40.373059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.045 [2024-10-13 01:46:40.373085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.045 qpair failed and we were unable to recover it. 00:35:55.045 [2024-10-13 01:46:40.373175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.045 [2024-10-13 01:46:40.373201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.045 qpair failed and we were unable to recover it. 00:35:55.045 [2024-10-13 01:46:40.373293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.045 [2024-10-13 01:46:40.373320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.045 qpair failed and we were unable to recover it. 00:35:55.045 [2024-10-13 01:46:40.373412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.045 [2024-10-13 01:46:40.373443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.045 qpair failed and we were unable to recover it. 
00:35:55.045 [2024-10-13 01:46:40.373559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.045 [2024-10-13 01:46:40.373586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.045 qpair failed and we were unable to recover it. 00:35:55.045 [2024-10-13 01:46:40.373680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.045 [2024-10-13 01:46:40.373707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.045 qpair failed and we were unable to recover it. 00:35:55.045 [2024-10-13 01:46:40.373829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.045 [2024-10-13 01:46:40.373856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.045 qpair failed and we were unable to recover it. 00:35:55.045 [2024-10-13 01:46:40.373935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.045 [2024-10-13 01:46:40.373964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.045 qpair failed and we were unable to recover it. 00:35:55.045 [2024-10-13 01:46:40.374057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.045 [2024-10-13 01:46:40.374084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.045 qpair failed and we were unable to recover it. 00:35:55.045 [2024-10-13 01:46:40.374169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.045 [2024-10-13 01:46:40.374197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.045 qpair failed and we were unable to recover it. 00:35:55.045 [2024-10-13 01:46:40.374290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.045 [2024-10-13 01:46:40.374317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.045 qpair failed and we were unable to recover it. 00:35:55.046 [2024-10-13 01:46:40.374409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.046 [2024-10-13 01:46:40.374443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.046 qpair failed and we were unable to recover it. 00:35:55.046 [2024-10-13 01:46:40.374555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.046 [2024-10-13 01:46:40.374584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.046 qpair failed and we were unable to recover it. 00:35:55.046 [2024-10-13 01:46:40.374667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.046 [2024-10-13 01:46:40.374693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.046 qpair failed and we were unable to recover it. 
00:35:55.046 [2024-10-13 01:46:40.374788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.046 [2024-10-13 01:46:40.374823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.046 qpair failed and we were unable to recover it. 00:35:55.046 [2024-10-13 01:46:40.374912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.046 [2024-10-13 01:46:40.374942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.046 qpair failed and we were unable to recover it. 00:35:55.046 [2024-10-13 01:46:40.375029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.046 [2024-10-13 01:46:40.375055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.046 qpair failed and we were unable to recover it. 00:35:55.046 [2024-10-13 01:46:40.375143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.046 [2024-10-13 01:46:40.375170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.046 qpair failed and we were unable to recover it. 00:35:55.046 [2024-10-13 01:46:40.375261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.046 [2024-10-13 01:46:40.375288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.046 qpair failed and we were unable to recover it. 00:35:55.046 [2024-10-13 01:46:40.375378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.046 [2024-10-13 01:46:40.375404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.046 qpair failed and we were unable to recover it. 00:35:55.046 [2024-10-13 01:46:40.375500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.046 [2024-10-13 01:46:40.375527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.046 qpair failed and we were unable to recover it. 00:35:55.046 [2024-10-13 01:46:40.375616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.046 [2024-10-13 01:46:40.375644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.046 qpair failed and we were unable to recover it. 00:35:55.046 [2024-10-13 01:46:40.375725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.046 [2024-10-13 01:46:40.375752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.046 qpair failed and we were unable to recover it. 00:35:55.046 [2024-10-13 01:46:40.375849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.046 [2024-10-13 01:46:40.375876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.046 qpair failed and we were unable to recover it. 
00:35:55.046 [2024-10-13 01:46:40.375971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.046 [2024-10-13 01:46:40.375997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.046 qpair failed and we were unable to recover it. 00:35:55.046 [2024-10-13 01:46:40.376088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.046 [2024-10-13 01:46:40.376114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.046 qpair failed and we were unable to recover it. 00:35:55.046 [2024-10-13 01:46:40.376207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.046 [2024-10-13 01:46:40.376236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.046 qpair failed and we were unable to recover it. 00:35:55.046 [2024-10-13 01:46:40.376334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.046 [2024-10-13 01:46:40.376361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.046 qpair failed and we were unable to recover it. 00:35:55.046 [2024-10-13 01:46:40.376449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.046 [2024-10-13 01:46:40.376534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.046 qpair failed and we were unable to recover it. 00:35:55.046 [2024-10-13 01:46:40.376628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.046 [2024-10-13 01:46:40.376655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.046 qpair failed and we were unable to recover it. 00:35:55.046 [2024-10-13 01:46:40.376750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.046 [2024-10-13 01:46:40.376780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.046 qpair failed and we were unable to recover it. 00:35:55.046 [2024-10-13 01:46:40.376876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.046 [2024-10-13 01:46:40.376903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.046 qpair failed and we were unable to recover it. 00:35:55.046 [2024-10-13 01:46:40.377015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.046 [2024-10-13 01:46:40.377042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.046 qpair failed and we were unable to recover it. 00:35:55.046 [2024-10-13 01:46:40.377130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.046 [2024-10-13 01:46:40.377157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.046 qpair failed and we were unable to recover it. 
00:35:55.046 [2024-10-13 01:46:40.377247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.046 [2024-10-13 01:46:40.377274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.046 qpair failed and we were unable to recover it. 00:35:55.046 [2024-10-13 01:46:40.377376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.046 [2024-10-13 01:46:40.377404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.046 qpair failed and we were unable to recover it. 00:35:55.046 [2024-10-13 01:46:40.377491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.046 [2024-10-13 01:46:40.377518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.046 qpair failed and we were unable to recover it. 00:35:55.046 [2024-10-13 01:46:40.377610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.046 [2024-10-13 01:46:40.377636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.046 qpair failed and we were unable to recover it. 00:35:55.046 [2024-10-13 01:46:40.377732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.046 [2024-10-13 01:46:40.377767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.046 qpair failed and we were unable to recover it. 00:35:55.046 [2024-10-13 01:46:40.377853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.046 [2024-10-13 01:46:40.377881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.046 qpair failed and we were unable to recover it. 00:35:55.046 [2024-10-13 01:46:40.377969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.046 [2024-10-13 01:46:40.377996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.046 qpair failed and we were unable to recover it. 00:35:55.046 [2024-10-13 01:46:40.378094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.046 [2024-10-13 01:46:40.378121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.046 qpair failed and we were unable to recover it. 00:35:55.046 [2024-10-13 01:46:40.378209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.046 [2024-10-13 01:46:40.378235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.046 qpair failed and we were unable to recover it. 00:35:55.046 [2024-10-13 01:46:40.378317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.046 [2024-10-13 01:46:40.378343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.046 qpair failed and we were unable to recover it. 
00:35:55.046 [2024-10-13 01:46:40.378425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.046 [2024-10-13 01:46:40.378466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.046 qpair failed and we were unable to recover it. 00:35:55.046 [2024-10-13 01:46:40.378557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.046 [2024-10-13 01:46:40.378583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.046 qpair failed and we were unable to recover it. 00:35:55.046 [2024-10-13 01:46:40.378664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.046 [2024-10-13 01:46:40.378693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.046 qpair failed and we were unable to recover it. 00:35:55.046 [2024-10-13 01:46:40.378796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.046 [2024-10-13 01:46:40.378836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.046 qpair failed and we were unable to recover it. 00:35:55.046 [2024-10-13 01:46:40.378927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.046 [2024-10-13 01:46:40.378954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.046 qpair failed and we were unable to recover it. 00:35:55.046 [2024-10-13 01:46:40.379044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.046 [2024-10-13 01:46:40.379071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.046 qpair failed and we were unable to recover it. 00:35:55.046 [2024-10-13 01:46:40.379153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.046 [2024-10-13 01:46:40.379179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.046 qpair failed and we were unable to recover it. 00:35:55.046 [2024-10-13 01:46:40.379295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.047 [2024-10-13 01:46:40.379321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.047 qpair failed and we were unable to recover it. 00:35:55.047 [2024-10-13 01:46:40.379403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.047 [2024-10-13 01:46:40.379429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.047 qpair failed and we were unable to recover it. 00:35:55.047 [2024-10-13 01:46:40.379556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.047 [2024-10-13 01:46:40.379583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.047 qpair failed and we were unable to recover it. 
00:35:55.047 [2024-10-13 01:46:40.379661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.047 [2024-10-13 01:46:40.379688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.047 qpair failed and we were unable to recover it. 00:35:55.047 [2024-10-13 01:46:40.379777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.047 [2024-10-13 01:46:40.379804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.047 qpair failed and we were unable to recover it. 00:35:55.047 [2024-10-13 01:46:40.379942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.047 [2024-10-13 01:46:40.379971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.047 qpair failed and we were unable to recover it. 00:35:55.047 [2024-10-13 01:46:40.380097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.047 [2024-10-13 01:46:40.380123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.047 qpair failed and we were unable to recover it. 00:35:55.047 [2024-10-13 01:46:40.380234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.047 [2024-10-13 01:46:40.380283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.047 qpair failed and we were unable to recover it. 00:35:55.047 [2024-10-13 01:46:40.380405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.047 [2024-10-13 01:46:40.380432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.047 qpair failed and we were unable to recover it. 00:35:55.047 [2024-10-13 01:46:40.380544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.047 [2024-10-13 01:46:40.380584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.047 qpair failed and we were unable to recover it. 00:35:55.047 [2024-10-13 01:46:40.380684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.047 [2024-10-13 01:46:40.380713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.047 qpair failed and we were unable to recover it. 00:35:55.047 [2024-10-13 01:46:40.380811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.047 [2024-10-13 01:46:40.380839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.047 qpair failed and we were unable to recover it. 00:35:55.047 [2024-10-13 01:46:40.380932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.047 [2024-10-13 01:46:40.380960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.047 qpair failed and we were unable to recover it. 
00:35:55.047 [2024-10-13 01:46:40.381048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.047 [2024-10-13 01:46:40.381076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.047 qpair failed and we were unable to recover it. 00:35:55.047 [2024-10-13 01:46:40.381159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.047 [2024-10-13 01:46:40.381185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.047 qpair failed and we were unable to recover it. 00:35:55.047 [2024-10-13 01:46:40.381274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.047 [2024-10-13 01:46:40.381301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.047 qpair failed and we were unable to recover it. 00:35:55.047 [2024-10-13 01:46:40.381391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.047 [2024-10-13 01:46:40.381417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.047 qpair failed and we were unable to recover it. 00:35:55.047 [2024-10-13 01:46:40.381515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.047 [2024-10-13 01:46:40.381542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.047 qpair failed and we were unable to recover it. 00:35:55.047 [2024-10-13 01:46:40.381635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.047 [2024-10-13 01:46:40.381662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.047 qpair failed and we were unable to recover it. 00:35:55.047 [2024-10-13 01:46:40.381743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.047 [2024-10-13 01:46:40.381778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.047 qpair failed and we were unable to recover it. 00:35:55.047 [2024-10-13 01:46:40.381871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.047 [2024-10-13 01:46:40.381902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.047 qpair failed and we were unable to recover it. 00:35:55.047 [2024-10-13 01:46:40.381990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.047 [2024-10-13 01:46:40.382018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.047 qpair failed and we were unable to recover it. 00:35:55.047 [2024-10-13 01:46:40.382101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.047 [2024-10-13 01:46:40.382127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.047 qpair failed and we were unable to recover it. 
00:35:55.047 [2024-10-13 01:46:40.382224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.047 [2024-10-13 01:46:40.382259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.047 qpair failed and we were unable to recover it. 00:35:55.047 [2024-10-13 01:46:40.382350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.047 [2024-10-13 01:46:40.382378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.047 qpair failed and we were unable to recover it. 00:35:55.047 [2024-10-13 01:46:40.382487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.047 [2024-10-13 01:46:40.382524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.047 qpair failed and we were unable to recover it. 00:35:55.047 [2024-10-13 01:46:40.382641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.047 [2024-10-13 01:46:40.382679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.047 qpair failed and we were unable to recover it. 00:35:55.047 [2024-10-13 01:46:40.382798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.047 [2024-10-13 01:46:40.382834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.047 qpair failed and we were unable to recover it. 00:35:55.047 [2024-10-13 01:46:40.382948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.047 [2024-10-13 01:46:40.382984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.047 qpair failed and we were unable to recover it. 00:35:55.047 [2024-10-13 01:46:40.383080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.047 [2024-10-13 01:46:40.383109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.047 qpair failed and we were unable to recover it. 00:35:55.047 [2024-10-13 01:46:40.383195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.047 [2024-10-13 01:46:40.383221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.047 qpair failed and we were unable to recover it. 00:35:55.047 [2024-10-13 01:46:40.383317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.047 [2024-10-13 01:46:40.383344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.047 qpair failed and we were unable to recover it. 00:35:55.047 [2024-10-13 01:46:40.383429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.047 [2024-10-13 01:46:40.383467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.047 qpair failed and we were unable to recover it. 
00:35:55.047 [2024-10-13 01:46:40.383557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.047 [2024-10-13 01:46:40.383583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.047 qpair failed and we were unable to recover it. 00:35:55.047 [2024-10-13 01:46:40.383680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.047 [2024-10-13 01:46:40.383706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.047 qpair failed and we were unable to recover it. 00:35:55.047 [2024-10-13 01:46:40.383844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.047 [2024-10-13 01:46:40.383870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.047 qpair failed and we were unable to recover it. 00:35:55.047 [2024-10-13 01:46:40.383956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.047 [2024-10-13 01:46:40.383984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.047 qpair failed and we were unable to recover it. 00:35:55.047 [2024-10-13 01:46:40.384086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.047 [2024-10-13 01:46:40.384112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.047 qpair failed and we were unable to recover it. 00:35:55.047 [2024-10-13 01:46:40.384227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.047 [2024-10-13 01:46:40.384255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.047 qpair failed and we were unable to recover it. 00:35:55.047 [2024-10-13 01:46:40.384340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.047 [2024-10-13 01:46:40.384367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.047 qpair failed and we were unable to recover it. 00:35:55.047 [2024-10-13 01:46:40.384444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.047 [2024-10-13 01:46:40.384477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.047 qpair failed and we were unable to recover it. 00:35:55.048 [2024-10-13 01:46:40.384569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.048 [2024-10-13 01:46:40.384595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.048 qpair failed and we were unable to recover it. 00:35:55.048 [2024-10-13 01:46:40.384680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.048 [2024-10-13 01:46:40.384706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.048 qpair failed and we were unable to recover it. 
00:35:55.048 [2024-10-13 01:46:40.384803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.048 [2024-10-13 01:46:40.384829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.048 qpair failed and we were unable to recover it. 00:35:55.048 [2024-10-13 01:46:40.384918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.048 [2024-10-13 01:46:40.384947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.048 qpair failed and we were unable to recover it. 00:35:55.048 [2024-10-13 01:46:40.385042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.048 [2024-10-13 01:46:40.385070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.048 qpair failed and we were unable to recover it. 00:35:55.048 [2024-10-13 01:46:40.385155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.048 [2024-10-13 01:46:40.385182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.048 qpair failed and we were unable to recover it. 00:35:55.048 [2024-10-13 01:46:40.385305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.048 [2024-10-13 01:46:40.385338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.048 qpair failed and we were unable to recover it. 00:35:55.048 [2024-10-13 01:46:40.385423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.048 [2024-10-13 01:46:40.385450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.048 qpair failed and we were unable to recover it. 00:35:55.048 [2024-10-13 01:46:40.385546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.048 [2024-10-13 01:46:40.385574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.048 qpair failed and we were unable to recover it. 00:35:55.048 [2024-10-13 01:46:40.385671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.048 [2024-10-13 01:46:40.385707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.048 qpair failed and we were unable to recover it. 00:35:55.048 [2024-10-13 01:46:40.385804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.048 [2024-10-13 01:46:40.385831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.048 qpair failed and we were unable to recover it. 00:35:55.048 [2024-10-13 01:46:40.385916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.048 [2024-10-13 01:46:40.385943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.048 qpair failed and we were unable to recover it. 
00:35:55.048 [2024-10-13 01:46:40.386033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.048 [2024-10-13 01:46:40.386061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.048 qpair failed and we were unable to recover it. 00:35:55.048 [2024-10-13 01:46:40.386152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.048 [2024-10-13 01:46:40.386179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.048 qpair failed and we were unable to recover it. 00:35:55.048 [2024-10-13 01:46:40.386277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.048 [2024-10-13 01:46:40.386317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.048 qpair failed and we were unable to recover it. 00:35:55.048 [2024-10-13 01:46:40.386412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.048 [2024-10-13 01:46:40.386441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.048 qpair failed and we were unable to recover it. 00:35:55.048 [2024-10-13 01:46:40.386562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.048 [2024-10-13 01:46:40.386600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.048 qpair failed and we were unable to recover it. 00:35:55.048 [2024-10-13 01:46:40.386708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.048 [2024-10-13 01:46:40.386745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.048 qpair failed and we were unable to recover it. 00:35:55.048 [2024-10-13 01:46:40.386844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.048 [2024-10-13 01:46:40.386871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.048 qpair failed and we were unable to recover it. 00:35:55.048 [2024-10-13 01:46:40.386953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.048 [2024-10-13 01:46:40.386980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.048 qpair failed and we were unable to recover it. 00:35:55.048 [2024-10-13 01:46:40.387076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.048 [2024-10-13 01:46:40.387104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.048 qpair failed and we were unable to recover it. 00:35:55.048 [2024-10-13 01:46:40.387193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.048 [2024-10-13 01:46:40.387219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.048 qpair failed and we were unable to recover it. 
00:35:55.048 [2024-10-13 01:46:40.387345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.048 [2024-10-13 01:46:40.387371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.048 qpair failed and we were unable to recover it. 00:35:55.048 [2024-10-13 01:46:40.387447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.048 [2024-10-13 01:46:40.387490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.048 qpair failed and we were unable to recover it. 00:35:55.048 [2024-10-13 01:46:40.387584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.048 [2024-10-13 01:46:40.387610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.048 qpair failed and we were unable to recover it. 00:35:55.048 [2024-10-13 01:46:40.387695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.048 [2024-10-13 01:46:40.387720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.048 qpair failed and we were unable to recover it. 00:35:55.048 [2024-10-13 01:46:40.387806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.048 [2024-10-13 01:46:40.387841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.048 qpair failed and we were unable to recover it. 00:35:55.048 [2024-10-13 01:46:40.387930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.048 [2024-10-13 01:46:40.387956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.048 qpair failed and we were unable to recover it. 00:35:55.048 [2024-10-13 01:46:40.388097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.048 [2024-10-13 01:46:40.388124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.048 qpair failed and we were unable to recover it. 00:35:55.048 [2024-10-13 01:46:40.388226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.048 [2024-10-13 01:46:40.388254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.048 qpair failed and we were unable to recover it. 00:35:55.048 [2024-10-13 01:46:40.388342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.048 [2024-10-13 01:46:40.388368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.048 qpair failed and we were unable to recover it. 00:35:55.048 [2024-10-13 01:46:40.388454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.048 [2024-10-13 01:46:40.388489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.048 qpair failed and we were unable to recover it. 
00:35:55.048 [2024-10-13 01:46:40.388584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.048 [2024-10-13 01:46:40.388611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.048 qpair failed and we were unable to recover it. 00:35:55.048 [2024-10-13 01:46:40.388693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.048 [2024-10-13 01:46:40.388719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.048 qpair failed and we were unable to recover it. 00:35:55.048 [2024-10-13 01:46:40.388812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.048 [2024-10-13 01:46:40.388840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.048 qpair failed and we were unable to recover it. 00:35:55.048 [2024-10-13 01:46:40.388955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.048 [2024-10-13 01:46:40.388982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.048 qpair failed and we were unable to recover it. 00:35:55.048 [2024-10-13 01:46:40.389085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.049 [2024-10-13 01:46:40.389115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.049 qpair failed and we were unable to recover it. 00:35:55.049 [2024-10-13 01:46:40.389200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.049 [2024-10-13 01:46:40.389226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.049 qpair failed and we were unable to recover it. 00:35:55.049 [2024-10-13 01:46:40.389310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.049 [2024-10-13 01:46:40.389339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.049 qpair failed and we were unable to recover it. 00:35:55.049 [2024-10-13 01:46:40.389438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.049 [2024-10-13 01:46:40.389493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.049 qpair failed and we were unable to recover it. 00:35:55.049 [2024-10-13 01:46:40.389611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.049 [2024-10-13 01:46:40.389650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.049 qpair failed and we were unable to recover it. 00:35:55.049 [2024-10-13 01:46:40.389766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.049 [2024-10-13 01:46:40.389795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.049 qpair failed and we were unable to recover it. 
00:35:55.049 [2024-10-13 01:46:40.389885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.049 [2024-10-13 01:46:40.389911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.049 qpair failed and we were unable to recover it. 00:35:55.049 [2024-10-13 01:46:40.390002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.049 [2024-10-13 01:46:40.390028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.049 qpair failed and we were unable to recover it. 00:35:55.049 [2024-10-13 01:46:40.390103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.049 [2024-10-13 01:46:40.390129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.049 qpair failed and we were unable to recover it. 00:35:55.049 [2024-10-13 01:46:40.390245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.049 [2024-10-13 01:46:40.390273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.049 qpair failed and we were unable to recover it. 00:35:55.049 [2024-10-13 01:46:40.390359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.049 [2024-10-13 01:46:40.390390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.049 qpair failed and we were unable to recover it. 00:35:55.049 [2024-10-13 01:46:40.390487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.049 [2024-10-13 01:46:40.390515] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.049 qpair failed and we were unable to recover it. 00:35:55.049 [2024-10-13 01:46:40.390601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.049 [2024-10-13 01:46:40.390626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.049 qpair failed and we were unable to recover it. 00:35:55.049 [2024-10-13 01:46:40.390708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.049 [2024-10-13 01:46:40.390735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.049 qpair failed and we were unable to recover it. 00:35:55.049 [2024-10-13 01:46:40.390814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.049 [2024-10-13 01:46:40.390840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.049 qpair failed and we were unable to recover it. 00:35:55.049 [2024-10-13 01:46:40.390960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.049 [2024-10-13 01:46:40.390989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.049 qpair failed and we were unable to recover it. 
00:35:55.049 [2024-10-13 01:46:40.391089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.049 [2024-10-13 01:46:40.391117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.049 qpair failed and we were unable to recover it. 00:35:55.049 [2024-10-13 01:46:40.391205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.049 [2024-10-13 01:46:40.391231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.049 qpair failed and we were unable to recover it. 00:35:55.049 [2024-10-13 01:46:40.391328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.049 [2024-10-13 01:46:40.391355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.049 qpair failed and we were unable to recover it. 00:35:55.049 [2024-10-13 01:46:40.391435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.049 [2024-10-13 01:46:40.391481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.049 qpair failed and we were unable to recover it. 00:35:55.049 [2024-10-13 01:46:40.391571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.049 [2024-10-13 01:46:40.391597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.049 qpair failed and we were unable to recover it. 00:35:55.049 [2024-10-13 01:46:40.391680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.049 [2024-10-13 01:46:40.391706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.049 qpair failed and we were unable to recover it. 00:35:55.049 [2024-10-13 01:46:40.391798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.049 [2024-10-13 01:46:40.391826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.049 qpair failed and we were unable to recover it. 00:35:55.049 [2024-10-13 01:46:40.391916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.049 [2024-10-13 01:46:40.391943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.049 qpair failed and we were unable to recover it. 00:35:55.049 [2024-10-13 01:46:40.392034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.049 [2024-10-13 01:46:40.392061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.049 qpair failed and we were unable to recover it. 00:35:55.049 [2024-10-13 01:46:40.392151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.049 [2024-10-13 01:46:40.392177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.049 qpair failed and we were unable to recover it. 
00:35:55.049 [2024-10-13 01:46:40.392255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.049 [2024-10-13 01:46:40.392281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.049 qpair failed and we were unable to recover it. 00:35:55.049 [2024-10-13 01:46:40.392402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.049 [2024-10-13 01:46:40.392429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.049 qpair failed and we were unable to recover it. 00:35:55.049 [2024-10-13 01:46:40.392530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.049 [2024-10-13 01:46:40.392556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.049 qpair failed and we were unable to recover it. 00:35:55.049 [2024-10-13 01:46:40.392646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.049 [2024-10-13 01:46:40.392672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.049 qpair failed and we were unable to recover it. 00:35:55.049 [2024-10-13 01:46:40.392753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.049 [2024-10-13 01:46:40.392786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.049 qpair failed and we were unable to recover it. 00:35:55.049 [2024-10-13 01:46:40.392893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.049 [2024-10-13 01:46:40.392920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.049 qpair failed and we were unable to recover it. 00:35:55.049 [2024-10-13 01:46:40.393014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.049 [2024-10-13 01:46:40.393039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.049 qpair failed and we were unable to recover it. 00:35:55.049 [2024-10-13 01:46:40.393123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.049 [2024-10-13 01:46:40.393151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.049 qpair failed and we were unable to recover it. 00:35:55.049 [2024-10-13 01:46:40.393236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.049 [2024-10-13 01:46:40.393261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.049 qpair failed and we were unable to recover it. 00:35:55.049 [2024-10-13 01:46:40.393349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.049 [2024-10-13 01:46:40.393376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.049 qpair failed and we were unable to recover it. 
00:35:55.049 [2024-10-13 01:46:40.393462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.049 [2024-10-13 01:46:40.393496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.049 qpair failed and we were unable to recover it. 00:35:55.049 [2024-10-13 01:46:40.393582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.049 [2024-10-13 01:46:40.393616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.049 qpair failed and we were unable to recover it. 00:35:55.049 [2024-10-13 01:46:40.393703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.049 [2024-10-13 01:46:40.393730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.049 qpair failed and we were unable to recover it. 00:35:55.049 [2024-10-13 01:46:40.393821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.049 [2024-10-13 01:46:40.393847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.049 qpair failed and we were unable to recover it. 00:35:55.049 [2024-10-13 01:46:40.393937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.049 [2024-10-13 01:46:40.393964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.050 qpair failed and we were unable to recover it. 00:35:55.050 [2024-10-13 01:46:40.394048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.050 [2024-10-13 01:46:40.394075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.050 qpair failed and we were unable to recover it. 00:35:55.050 [2024-10-13 01:46:40.394188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.050 [2024-10-13 01:46:40.394214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.050 qpair failed and we were unable to recover it. 00:35:55.050 [2024-10-13 01:46:40.394295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.050 [2024-10-13 01:46:40.394324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.050 qpair failed and we were unable to recover it. 00:35:55.050 [2024-10-13 01:46:40.394410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.050 [2024-10-13 01:46:40.394436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.050 qpair failed and we were unable to recover it. 00:35:55.050 [2024-10-13 01:46:40.394534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.050 [2024-10-13 01:46:40.394562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.050 qpair failed and we were unable to recover it. 
00:35:55.050 [2024-10-13 01:46:40.394647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.050 [2024-10-13 01:46:40.394672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.050 qpair failed and we were unable to recover it. 00:35:55.050 [2024-10-13 01:46:40.394753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.050 [2024-10-13 01:46:40.394779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.050 qpair failed and we were unable to recover it. 00:35:55.050 [2024-10-13 01:46:40.394868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.050 [2024-10-13 01:46:40.394894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.050 qpair failed and we were unable to recover it. 00:35:55.050 [2024-10-13 01:46:40.394982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.050 [2024-10-13 01:46:40.395008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.050 qpair failed and we were unable to recover it. 00:35:55.050 [2024-10-13 01:46:40.395088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.050 [2024-10-13 01:46:40.395114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.050 qpair failed and we were unable to recover it. 00:35:55.050 [2024-10-13 01:46:40.395199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.050 [2024-10-13 01:46:40.395225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.050 qpair failed and we were unable to recover it. 00:35:55.050 [2024-10-13 01:46:40.395309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.050 [2024-10-13 01:46:40.395336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.050 qpair failed and we were unable to recover it. 00:35:55.050 [2024-10-13 01:46:40.395421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.050 [2024-10-13 01:46:40.395447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.050 qpair failed and we were unable to recover it. 00:35:55.050 [2024-10-13 01:46:40.395562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.050 [2024-10-13 01:46:40.395590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.050 qpair failed and we were unable to recover it. 00:35:55.050 [2024-10-13 01:46:40.395678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.050 [2024-10-13 01:46:40.395704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.050 qpair failed and we were unable to recover it. 
00:35:55.050 [2024-10-13 01:46:40.395847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.050 [2024-10-13 01:46:40.395875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.050 qpair failed and we were unable to recover it. 00:35:55.050 [2024-10-13 01:46:40.395963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.050 [2024-10-13 01:46:40.395989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.050 qpair failed and we were unable to recover it. 00:35:55.050 [2024-10-13 01:46:40.396076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.050 [2024-10-13 01:46:40.396103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.050 qpair failed and we were unable to recover it. 00:35:55.050 [2024-10-13 01:46:40.396192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.050 [2024-10-13 01:46:40.396218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.050 qpair failed and we were unable to recover it. 00:35:55.050 [2024-10-13 01:46:40.396305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.050 [2024-10-13 01:46:40.396332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.050 qpair failed and we were unable to recover it. 00:35:55.050 [2024-10-13 01:46:40.396418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.050 [2024-10-13 01:46:40.396444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.050 qpair failed and we were unable to recover it. 00:35:55.050 [2024-10-13 01:46:40.396557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.050 [2024-10-13 01:46:40.396583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.050 qpair failed and we were unable to recover it. 00:35:55.050 [2024-10-13 01:46:40.396671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.050 [2024-10-13 01:46:40.396697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.050 qpair failed and we were unable to recover it. 00:35:55.050 [2024-10-13 01:46:40.396789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.050 [2024-10-13 01:46:40.396819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.050 qpair failed and we were unable to recover it. 00:35:55.050 [2024-10-13 01:46:40.396910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.050 [2024-10-13 01:46:40.396936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.050 qpair failed and we were unable to recover it. 
00:35:55.050 [2024-10-13 01:46:40.397025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.050 [2024-10-13 01:46:40.397053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.050 qpair failed and we were unable to recover it. 00:35:55.050 [2024-10-13 01:46:40.397171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.050 [2024-10-13 01:46:40.397201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.050 qpair failed and we were unable to recover it. 00:35:55.050 [2024-10-13 01:46:40.397287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.050 [2024-10-13 01:46:40.397314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.050 qpair failed and we were unable to recover it. 00:35:55.050 [2024-10-13 01:46:40.397406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.050 [2024-10-13 01:46:40.397432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.050 qpair failed and we were unable to recover it. 00:35:55.050 [2024-10-13 01:46:40.397545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.050 [2024-10-13 01:46:40.397585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.050 qpair failed and we were unable to recover it. 00:35:55.050 [2024-10-13 01:46:40.397677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.050 [2024-10-13 01:46:40.397710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.050 qpair failed and we were unable to recover it. 00:35:55.050 [2024-10-13 01:46:40.397817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.050 [2024-10-13 01:46:40.397844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.050 qpair failed and we were unable to recover it. 00:35:55.050 [2024-10-13 01:46:40.397936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.050 [2024-10-13 01:46:40.397961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.050 qpair failed and we were unable to recover it. 00:35:55.050 [2024-10-13 01:46:40.398048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.050 [2024-10-13 01:46:40.398074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.050 qpair failed and we were unable to recover it. 00:35:55.050 [2024-10-13 01:46:40.398161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.050 [2024-10-13 01:46:40.398187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.050 qpair failed and we were unable to recover it. 
00:35:55.050 [2024-10-13 01:46:40.398304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.050 [2024-10-13 01:46:40.398332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.050 qpair failed and we were unable to recover it. 00:35:55.050 [2024-10-13 01:46:40.398444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.050 [2024-10-13 01:46:40.398492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.050 qpair failed and we were unable to recover it. 00:35:55.050 [2024-10-13 01:46:40.398619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.050 [2024-10-13 01:46:40.398658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.050 qpair failed and we were unable to recover it. 00:35:55.050 [2024-10-13 01:46:40.398755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.050 [2024-10-13 01:46:40.398782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.050 qpair failed and we were unable to recover it. 00:35:55.050 [2024-10-13 01:46:40.398895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.050 [2024-10-13 01:46:40.398920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.050 qpair failed and we were unable to recover it. 00:35:55.051 [2024-10-13 01:46:40.399000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.051 [2024-10-13 01:46:40.399027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.051 qpair failed and we were unable to recover it. 00:35:55.051 [2024-10-13 01:46:40.399116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.051 [2024-10-13 01:46:40.399142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.051 qpair failed and we were unable to recover it. 00:35:55.051 [2024-10-13 01:46:40.399229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.051 [2024-10-13 01:46:40.399260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.051 qpair failed and we were unable to recover it. 00:35:55.051 [2024-10-13 01:46:40.399381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.051 [2024-10-13 01:46:40.399429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.051 qpair failed and we were unable to recover it. 00:35:55.051 [2024-10-13 01:46:40.399593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.051 [2024-10-13 01:46:40.399624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.051 qpair failed and we were unable to recover it. 
00:35:55.051 [2024-10-13 01:46:40.399710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.051 [2024-10-13 01:46:40.399738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.051 qpair failed and we were unable to recover it. 00:35:55.051 [2024-10-13 01:46:40.399821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.051 [2024-10-13 01:46:40.399858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.051 qpair failed and we were unable to recover it. 00:35:55.051 [2024-10-13 01:46:40.399947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.051 [2024-10-13 01:46:40.399975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.051 qpair failed and we were unable to recover it. 00:35:55.051 [2024-10-13 01:46:40.400090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.051 [2024-10-13 01:46:40.400118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.051 qpair failed and we were unable to recover it. 00:35:55.051 [2024-10-13 01:46:40.400208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.051 [2024-10-13 01:46:40.400244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.051 qpair failed and we were unable to recover it. 00:35:55.051 [2024-10-13 01:46:40.400359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.051 [2024-10-13 01:46:40.400396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.051 qpair failed and we were unable to recover it. 00:35:55.051 [2024-10-13 01:46:40.400513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.051 [2024-10-13 01:46:40.400540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.051 qpair failed and we were unable to recover it. 00:35:55.051 [2024-10-13 01:46:40.400654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.051 [2024-10-13 01:46:40.400681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.051 qpair failed and we were unable to recover it. 00:35:55.051 [2024-10-13 01:46:40.400778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.051 [2024-10-13 01:46:40.400804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.051 qpair failed and we were unable to recover it. 00:35:55.051 [2024-10-13 01:46:40.400888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.051 [2024-10-13 01:46:40.400914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.051 qpair failed and we were unable to recover it. 
00:35:55.051 [2024-10-13 01:46:40.401005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.051 [2024-10-13 01:46:40.401031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.051 qpair failed and we were unable to recover it. 00:35:55.051 [2024-10-13 01:46:40.401116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.051 [2024-10-13 01:46:40.401141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.051 qpair failed and we were unable to recover it. 00:35:55.051 [2024-10-13 01:46:40.401256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.051 [2024-10-13 01:46:40.401281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.051 qpair failed and we were unable to recover it. 00:35:55.051 [2024-10-13 01:46:40.401396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.051 [2024-10-13 01:46:40.401422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.051 qpair failed and we were unable to recover it. 00:35:55.051 [2024-10-13 01:46:40.401567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.051 [2024-10-13 01:46:40.401593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.051 qpair failed and we were unable to recover it. 00:35:55.051 [2024-10-13 01:46:40.401685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.051 [2024-10-13 01:46:40.401710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.051 qpair failed and we were unable to recover it. 00:35:55.051 [2024-10-13 01:46:40.401796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.051 [2024-10-13 01:46:40.401821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.051 qpair failed and we were unable to recover it. 00:35:55.051 [2024-10-13 01:46:40.401909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.051 [2024-10-13 01:46:40.401934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.051 qpair failed and we were unable to recover it. 00:35:55.051 [2024-10-13 01:46:40.402015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.051 [2024-10-13 01:46:40.402045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.051 qpair failed and we were unable to recover it. 00:35:55.051 [2024-10-13 01:46:40.402160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.051 [2024-10-13 01:46:40.402185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.051 qpair failed and we were unable to recover it. 
00:35:55.051 [2024-10-13 01:46:40.402306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.051 [2024-10-13 01:46:40.402333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.051 qpair failed and we were unable to recover it. 00:35:55.051 [2024-10-13 01:46:40.402423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.051 [2024-10-13 01:46:40.402448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.051 qpair failed and we were unable to recover it. 00:35:55.051 [2024-10-13 01:46:40.402535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.051 [2024-10-13 01:46:40.402562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.051 qpair failed and we were unable to recover it. 00:35:55.051 [2024-10-13 01:46:40.402657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.051 [2024-10-13 01:46:40.402693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.051 qpair failed and we were unable to recover it. 00:35:55.051 [2024-10-13 01:46:40.402788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.051 [2024-10-13 01:46:40.402825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.051 qpair failed and we were unable to recover it. 00:35:55.051 [2024-10-13 01:46:40.402965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.051 [2024-10-13 01:46:40.403011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.051 qpair failed and we were unable to recover it. 00:35:55.051 [2024-10-13 01:46:40.403110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.051 [2024-10-13 01:46:40.403138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.051 qpair failed and we were unable to recover it. 00:35:55.051 [2024-10-13 01:46:40.403253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.051 [2024-10-13 01:46:40.403280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.051 qpair failed and we were unable to recover it. 00:35:55.051 [2024-10-13 01:46:40.403368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.051 [2024-10-13 01:46:40.403394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.051 qpair failed and we were unable to recover it. 00:35:55.051 [2024-10-13 01:46:40.403493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.051 [2024-10-13 01:46:40.403520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.051 qpair failed and we were unable to recover it. 
00:35:55.051 [2024-10-13 01:46:40.403605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.051 [2024-10-13 01:46:40.403634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.051 qpair failed and we were unable to recover it. 00:35:55.051 [2024-10-13 01:46:40.403719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.051 [2024-10-13 01:46:40.403748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.051 qpair failed and we were unable to recover it. 00:35:55.051 [2024-10-13 01:46:40.403854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.051 [2024-10-13 01:46:40.403880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.051 qpair failed and we were unable to recover it. 00:35:55.051 [2024-10-13 01:46:40.403961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.051 [2024-10-13 01:46:40.403988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.051 qpair failed and we were unable to recover it. 00:35:55.051 [2024-10-13 01:46:40.404100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.051 [2024-10-13 01:46:40.404127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.051 qpair failed and we were unable to recover it. 00:35:55.051 [2024-10-13 01:46:40.404216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.052 [2024-10-13 01:46:40.404243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.052 qpair failed and we were unable to recover it. 00:35:55.052 [2024-10-13 01:46:40.404365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.052 [2024-10-13 01:46:40.404391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.052 qpair failed and we were unable to recover it. 00:35:55.052 [2024-10-13 01:46:40.404483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.052 [2024-10-13 01:46:40.404510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.052 qpair failed and we were unable to recover it. 00:35:55.052 [2024-10-13 01:46:40.404592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.052 [2024-10-13 01:46:40.404618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.052 qpair failed and we were unable to recover it. 00:35:55.052 [2024-10-13 01:46:40.404700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.052 [2024-10-13 01:46:40.404727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.052 qpair failed and we were unable to recover it. 
00:35:55.052 [2024-10-13 01:46:40.404845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.052 [2024-10-13 01:46:40.404871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.052 qpair failed and we were unable to recover it. 00:35:55.052 [2024-10-13 01:46:40.404957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.052 [2024-10-13 01:46:40.404983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.052 qpair failed and we were unable to recover it. 00:35:55.052 [2024-10-13 01:46:40.405060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.052 [2024-10-13 01:46:40.405087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.052 qpair failed and we were unable to recover it. 00:35:55.052 [2024-10-13 01:46:40.405170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.052 [2024-10-13 01:46:40.405195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.052 qpair failed and we were unable to recover it. 00:35:55.052 [2024-10-13 01:46:40.405305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.052 [2024-10-13 01:46:40.405330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.052 qpair failed and we were unable to recover it. 00:35:55.052 [2024-10-13 01:46:40.405424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.052 [2024-10-13 01:46:40.405484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.052 qpair failed and we were unable to recover it. 00:35:55.052 [2024-10-13 01:46:40.405576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.052 [2024-10-13 01:46:40.405605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.052 qpair failed and we were unable to recover it. 00:35:55.052 [2024-10-13 01:46:40.405720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.052 [2024-10-13 01:46:40.405749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.052 qpair failed and we were unable to recover it. 00:35:55.052 [2024-10-13 01:46:40.405868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.052 [2024-10-13 01:46:40.405907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.052 qpair failed and we were unable to recover it. 00:35:55.052 [2024-10-13 01:46:40.406008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.052 [2024-10-13 01:46:40.406045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.052 qpair failed and we were unable to recover it. 
00:35:55.052 [2024-10-13 01:46:40.406150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.052 [2024-10-13 01:46:40.406178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.052 qpair failed and we were unable to recover it. 00:35:55.052 [2024-10-13 01:46:40.406263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.052 [2024-10-13 01:46:40.406290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.052 qpair failed and we were unable to recover it. 00:35:55.052 [2024-10-13 01:46:40.406378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.052 [2024-10-13 01:46:40.406412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.052 qpair failed and we were unable to recover it. 00:35:55.052 [2024-10-13 01:46:40.406522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.052 [2024-10-13 01:46:40.406551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.052 qpair failed and we were unable to recover it. 00:35:55.052 [2024-10-13 01:46:40.406639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.052 [2024-10-13 01:46:40.406671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.052 qpair failed and we were unable to recover it. 00:35:55.052 [2024-10-13 01:46:40.406787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.052 [2024-10-13 01:46:40.406824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.052 qpair failed and we were unable to recover it. 00:35:55.052 [2024-10-13 01:46:40.406931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.052 [2024-10-13 01:46:40.406966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.052 qpair failed and we were unable to recover it. 00:35:55.052 [2024-10-13 01:46:40.407065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.052 [2024-10-13 01:46:40.407093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.052 qpair failed and we were unable to recover it. 00:35:55.052 [2024-10-13 01:46:40.407206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.052 [2024-10-13 01:46:40.407233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.052 qpair failed and we were unable to recover it. 00:35:55.052 [2024-10-13 01:46:40.407329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.052 [2024-10-13 01:46:40.407355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.052 qpair failed and we were unable to recover it. 
00:35:55.052 [2024-10-13 01:46:40.407441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.052 [2024-10-13 01:46:40.407483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.052 qpair failed and we were unable to recover it. 00:35:55.052 [2024-10-13 01:46:40.407565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.052 [2024-10-13 01:46:40.407591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.052 qpair failed and we were unable to recover it. 00:35:55.052 [2024-10-13 01:46:40.407729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.052 [2024-10-13 01:46:40.407755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.052 qpair failed and we were unable to recover it. 00:35:55.052 [2024-10-13 01:46:40.407848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.052 [2024-10-13 01:46:40.407874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.052 qpair failed and we were unable to recover it. 00:35:55.052 [2024-10-13 01:46:40.407960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.052 [2024-10-13 01:46:40.407986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.052 qpair failed and we were unable to recover it. 00:35:55.052 [2024-10-13 01:46:40.408097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.052 [2024-10-13 01:46:40.408123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.052 qpair failed and we were unable to recover it. 00:35:55.052 [2024-10-13 01:46:40.408216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.052 [2024-10-13 01:46:40.408246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.052 qpair failed and we were unable to recover it. 00:35:55.052 [2024-10-13 01:46:40.408338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.052 [2024-10-13 01:46:40.408365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.052 qpair failed and we were unable to recover it. 00:35:55.052 [2024-10-13 01:46:40.408450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.052 [2024-10-13 01:46:40.408499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.052 qpair failed and we were unable to recover it. 00:35:55.052 [2024-10-13 01:46:40.408589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.052 [2024-10-13 01:46:40.408616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.052 qpair failed and we were unable to recover it. 
00:35:55.052 [2024-10-13 01:46:40.408698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.052 [2024-10-13 01:46:40.408724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.052 qpair failed and we were unable to recover it. 00:35:55.053 [2024-10-13 01:46:40.408813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.053 [2024-10-13 01:46:40.408839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.053 qpair failed and we were unable to recover it. 00:35:55.053 [2024-10-13 01:46:40.408930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.053 [2024-10-13 01:46:40.408957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.053 qpair failed and we were unable to recover it. 00:35:55.053 [2024-10-13 01:46:40.409036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.053 [2024-10-13 01:46:40.409062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.053 qpair failed and we were unable to recover it. 00:35:55.053 [2024-10-13 01:46:40.409144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.053 [2024-10-13 01:46:40.409170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.053 qpair failed and we were unable to recover it. 00:35:55.053 [2024-10-13 01:46:40.409279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.053 [2024-10-13 01:46:40.409310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.053 qpair failed and we were unable to recover it. 00:35:55.053 [2024-10-13 01:46:40.409388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.053 [2024-10-13 01:46:40.409412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.053 qpair failed and we were unable to recover it. 00:35:55.053 [2024-10-13 01:46:40.409543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.053 [2024-10-13 01:46:40.409569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.053 qpair failed and we were unable to recover it. 00:35:55.053 [2024-10-13 01:46:40.409660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.053 [2024-10-13 01:46:40.409691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.053 qpair failed and we were unable to recover it. 00:35:55.053 [2024-10-13 01:46:40.409807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.053 [2024-10-13 01:46:40.409834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.053 qpair failed and we were unable to recover it. 
00:35:55.053 [2024-10-13 01:46:40.409925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.053 [2024-10-13 01:46:40.409965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.053 qpair failed and we were unable to recover it. 00:35:55.053 [2024-10-13 01:46:40.410054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.053 [2024-10-13 01:46:40.410080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.053 qpair failed and we were unable to recover it. 00:35:55.053 [2024-10-13 01:46:40.410159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.053 [2024-10-13 01:46:40.410185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.053 qpair failed and we were unable to recover it. 00:35:55.053 [2024-10-13 01:46:40.410288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.053 [2024-10-13 01:46:40.410313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.053 qpair failed and we were unable to recover it. 00:35:55.053 [2024-10-13 01:46:40.410396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.053 [2024-10-13 01:46:40.410422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.053 qpair failed and we were unable to recover it. 00:35:55.053 [2024-10-13 01:46:40.410520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.053 [2024-10-13 01:46:40.410551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.053 qpair failed and we were unable to recover it. 00:35:55.053 [2024-10-13 01:46:40.410639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.053 [2024-10-13 01:46:40.410667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.053 qpair failed and we were unable to recover it. 00:35:55.053 [2024-10-13 01:46:40.410745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.053 [2024-10-13 01:46:40.410779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.053 qpair failed and we were unable to recover it. 00:35:55.053 [2024-10-13 01:46:40.410871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.053 [2024-10-13 01:46:40.410897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.053 qpair failed and we were unable to recover it. 00:35:55.053 [2024-10-13 01:46:40.410986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.053 [2024-10-13 01:46:40.411013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.053 qpair failed and we were unable to recover it. 
00:35:55.053 [2024-10-13 01:46:40.411100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.053 [2024-10-13 01:46:40.411126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.053 qpair failed and we were unable to recover it. 00:35:55.053 [2024-10-13 01:46:40.411209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.053 [2024-10-13 01:46:40.411235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.053 qpair failed and we were unable to recover it. 00:35:55.053 [2024-10-13 01:46:40.411324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.053 [2024-10-13 01:46:40.411350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.053 qpair failed and we were unable to recover it. 00:35:55.053 [2024-10-13 01:46:40.411438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.053 [2024-10-13 01:46:40.411480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.053 qpair failed and we were unable to recover it. 00:35:55.053 [2024-10-13 01:46:40.411563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.053 [2024-10-13 01:46:40.411589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.053 qpair failed and we were unable to recover it. 00:35:55.053 [2024-10-13 01:46:40.411669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.053 [2024-10-13 01:46:40.411695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.053 qpair failed and we were unable to recover it. 00:35:55.053 [2024-10-13 01:46:40.411786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.053 [2024-10-13 01:46:40.411813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.053 qpair failed and we were unable to recover it. 00:35:55.053 [2024-10-13 01:46:40.411909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.053 [2024-10-13 01:46:40.411934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.053 qpair failed and we were unable to recover it. 00:35:55.053 [2024-10-13 01:46:40.412019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.053 [2024-10-13 01:46:40.412045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.053 qpair failed and we were unable to recover it. 00:35:55.053 [2024-10-13 01:46:40.412141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.053 [2024-10-13 01:46:40.412167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.053 qpair failed and we were unable to recover it. 
00:35:55.053 [2024-10-13 01:46:40.412240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.053 [2024-10-13 01:46:40.412266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.053 qpair failed and we were unable to recover it. 00:35:55.053 [2024-10-13 01:46:40.412359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.053 [2024-10-13 01:46:40.412400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.053 qpair failed and we were unable to recover it. 00:35:55.053 [2024-10-13 01:46:40.412498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.053 [2024-10-13 01:46:40.412526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.053 qpair failed and we were unable to recover it. 00:35:55.053 [2024-10-13 01:46:40.412613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.053 [2024-10-13 01:46:40.412639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.053 qpair failed and we were unable to recover it. 00:35:55.053 [2024-10-13 01:46:40.412718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.053 [2024-10-13 01:46:40.412744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.053 qpair failed and we were unable to recover it. 00:35:55.053 [2024-10-13 01:46:40.412830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.053 [2024-10-13 01:46:40.412856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.053 qpair failed and we were unable to recover it. 00:35:55.053 [2024-10-13 01:46:40.412939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.053 [2024-10-13 01:46:40.412965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.053 qpair failed and we were unable to recover it. 00:35:55.053 [2024-10-13 01:46:40.413046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.053 [2024-10-13 01:46:40.413073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.053 qpair failed and we were unable to recover it. 00:35:55.053 [2024-10-13 01:46:40.413178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.053 [2024-10-13 01:46:40.413219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.053 qpair failed and we were unable to recover it. 00:35:55.053 [2024-10-13 01:46:40.413312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.053 [2024-10-13 01:46:40.413341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.053 qpair failed and we were unable to recover it. 
00:35:55.053 [2024-10-13 01:46:40.413421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.053 [2024-10-13 01:46:40.413448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.053 qpair failed and we were unable to recover it. 00:35:55.054 [2024-10-13 01:46:40.413545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.054 [2024-10-13 01:46:40.413572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.054 qpair failed and we were unable to recover it. 00:35:55.054 [2024-10-13 01:46:40.413698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.054 [2024-10-13 01:46:40.413731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.054 qpair failed and we were unable to recover it. 00:35:55.054 [2024-10-13 01:46:40.413842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.054 [2024-10-13 01:46:40.413870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.054 qpair failed and we were unable to recover it. 00:35:55.054 [2024-10-13 01:46:40.413961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.054 [2024-10-13 01:46:40.413986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.054 qpair failed and we were unable to recover it. 00:35:55.054 [2024-10-13 01:46:40.414067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.054 [2024-10-13 01:46:40.414093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.054 qpair failed and we were unable to recover it. 00:35:55.054 [2024-10-13 01:46:40.414179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.054 [2024-10-13 01:46:40.414205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.054 qpair failed and we were unable to recover it. 00:35:55.054 [2024-10-13 01:46:40.414292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.054 [2024-10-13 01:46:40.414318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.054 qpair failed and we were unable to recover it. 00:35:55.054 [2024-10-13 01:46:40.414411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.054 [2024-10-13 01:46:40.414439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.054 qpair failed and we were unable to recover it. 00:35:55.054 [2024-10-13 01:46:40.414539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.054 [2024-10-13 01:46:40.414568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.054 qpair failed and we were unable to recover it. 
00:35:55.054 [2024-10-13 01:46:40.414658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.054 [2024-10-13 01:46:40.414685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.054 qpair failed and we were unable to recover it. 00:35:55.054 [2024-10-13 01:46:40.414775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.054 [2024-10-13 01:46:40.414802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.054 qpair failed and we were unable to recover it. 00:35:55.054 [2024-10-13 01:46:40.414895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.054 [2024-10-13 01:46:40.414921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.054 qpair failed and we were unable to recover it. 00:35:55.054 [2024-10-13 01:46:40.414999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.054 [2024-10-13 01:46:40.415026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.054 qpair failed and we were unable to recover it. 00:35:55.054 [2024-10-13 01:46:40.415107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.054 [2024-10-13 01:46:40.415134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.054 qpair failed and we were unable to recover it. 00:35:55.054 [2024-10-13 01:46:40.415213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.054 [2024-10-13 01:46:40.415239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.054 qpair failed and we were unable to recover it. 00:35:55.054 [2024-10-13 01:46:40.415337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.054 [2024-10-13 01:46:40.415364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.054 qpair failed and we were unable to recover it. 00:35:55.054 [2024-10-13 01:46:40.415460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.054 [2024-10-13 01:46:40.415498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.054 qpair failed and we were unable to recover it. 00:35:55.054 [2024-10-13 01:46:40.415580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.054 [2024-10-13 01:46:40.415606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.054 qpair failed and we were unable to recover it. 00:35:55.054 [2024-10-13 01:46:40.415701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.054 [2024-10-13 01:46:40.415741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.054 qpair failed and we were unable to recover it. 
00:35:55.054 [2024-10-13 01:46:40.415859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.054 [2024-10-13 01:46:40.415887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.054 qpair failed and we were unable to recover it. 00:35:55.054 [2024-10-13 01:46:40.415973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.054 [2024-10-13 01:46:40.416000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.054 qpair failed and we were unable to recover it. 00:35:55.054 [2024-10-13 01:46:40.416083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.054 [2024-10-13 01:46:40.416110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.054 qpair failed and we were unable to recover it. 00:35:55.054 [2024-10-13 01:46:40.416197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.054 [2024-10-13 01:46:40.416224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.054 qpair failed and we were unable to recover it. 00:35:55.054 [2024-10-13 01:46:40.416312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.054 [2024-10-13 01:46:40.416338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.054 qpair failed and we were unable to recover it. 00:35:55.054 [2024-10-13 01:46:40.416423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.054 [2024-10-13 01:46:40.416452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.054 qpair failed and we were unable to recover it. 00:35:55.054 [2024-10-13 01:46:40.416559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.054 [2024-10-13 01:46:40.416586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.054 qpair failed and we were unable to recover it. 00:35:55.054 [2024-10-13 01:46:40.416672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.054 [2024-10-13 01:46:40.416699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.054 qpair failed and we were unable to recover it. 00:35:55.054 [2024-10-13 01:46:40.416794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.054 [2024-10-13 01:46:40.416821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.054 qpair failed and we were unable to recover it. 00:35:55.054 [2024-10-13 01:46:40.416932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.054 [2024-10-13 01:46:40.416963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.054 qpair failed and we were unable to recover it. 
00:35:55.054 [2024-10-13 01:46:40.417045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.054 [2024-10-13 01:46:40.417075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.054 qpair failed and we were unable to recover it. 00:35:55.054 [2024-10-13 01:46:40.417189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.054 [2024-10-13 01:46:40.417219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.054 qpair failed and we were unable to recover it. 00:35:55.054 [2024-10-13 01:46:40.417312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.054 [2024-10-13 01:46:40.417338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.054 qpair failed and we were unable to recover it. 00:35:55.054 [2024-10-13 01:46:40.417421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.054 [2024-10-13 01:46:40.417447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.054 qpair failed and we were unable to recover it. 00:35:55.054 [2024-10-13 01:46:40.417542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.054 [2024-10-13 01:46:40.417569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.054 qpair failed and we were unable to recover it. 00:35:55.054 [2024-10-13 01:46:40.417652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.054 [2024-10-13 01:46:40.417678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.054 qpair failed and we were unable to recover it. 00:35:55.054 [2024-10-13 01:46:40.417766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.054 [2024-10-13 01:46:40.417799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.054 qpair failed and we were unable to recover it. 00:35:55.054 [2024-10-13 01:46:40.417886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.054 [2024-10-13 01:46:40.417912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.054 qpair failed and we were unable to recover it. 00:35:55.054 [2024-10-13 01:46:40.417991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.054 [2024-10-13 01:46:40.418017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.054 qpair failed and we were unable to recover it. 00:35:55.054 [2024-10-13 01:46:40.418104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.054 [2024-10-13 01:46:40.418133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.054 qpair failed and we were unable to recover it. 
00:35:55.054 [2024-10-13 01:46:40.418223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.054 [2024-10-13 01:46:40.418253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.054 qpair failed and we were unable to recover it. 00:35:55.054 [2024-10-13 01:46:40.418335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.055 [2024-10-13 01:46:40.418360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.055 qpair failed and we were unable to recover it. 00:35:55.055 [2024-10-13 01:46:40.418446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.055 [2024-10-13 01:46:40.418484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.055 qpair failed and we were unable to recover it. 00:35:55.055 [2024-10-13 01:46:40.418602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.055 [2024-10-13 01:46:40.418629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.055 qpair failed and we were unable to recover it. 00:35:55.055 [2024-10-13 01:46:40.418715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.055 [2024-10-13 01:46:40.418743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.055 qpair failed and we were unable to recover it. 00:35:55.055 [2024-10-13 01:46:40.418846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.055 [2024-10-13 01:46:40.418874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.055 qpair failed and we were unable to recover it. 00:35:55.055 [2024-10-13 01:46:40.418965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.055 [2024-10-13 01:46:40.418992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.055 qpair failed and we were unable to recover it. 00:35:55.055 [2024-10-13 01:46:40.419070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.055 [2024-10-13 01:46:40.419096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.055 qpair failed and we were unable to recover it. 00:35:55.055 [2024-10-13 01:46:40.419181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.055 [2024-10-13 01:46:40.419208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.055 qpair failed and we were unable to recover it. 00:35:55.055 [2024-10-13 01:46:40.419295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.055 [2024-10-13 01:46:40.419322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.055 qpair failed and we were unable to recover it. 
00:35:55.055 [2024-10-13 01:46:40.419412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.055 [2024-10-13 01:46:40.419439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.055 qpair failed and we were unable to recover it. 00:35:55.055 [2024-10-13 01:46:40.419539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.055 [2024-10-13 01:46:40.419565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.055 qpair failed and we were unable to recover it. 00:35:55.055 [2024-10-13 01:46:40.419650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.055 [2024-10-13 01:46:40.419677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.055 qpair failed and we were unable to recover it. 00:35:55.055 [2024-10-13 01:46:40.419761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.055 [2024-10-13 01:46:40.419787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.055 qpair failed and we were unable to recover it. 00:35:55.055 [2024-10-13 01:46:40.419874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.055 [2024-10-13 01:46:40.419901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.055 qpair failed and we were unable to recover it. 00:35:55.055 [2024-10-13 01:46:40.419984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.055 [2024-10-13 01:46:40.420009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.055 qpair failed and we were unable to recover it. 00:35:55.055 [2024-10-13 01:46:40.420151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.055 [2024-10-13 01:46:40.420200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.055 qpair failed and we were unable to recover it. 00:35:55.055 [2024-10-13 01:46:40.420307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.055 [2024-10-13 01:46:40.420334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.055 qpair failed and we were unable to recover it. 00:35:55.055 [2024-10-13 01:46:40.420425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.055 [2024-10-13 01:46:40.420452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.055 qpair failed and we were unable to recover it. 00:35:55.055 [2024-10-13 01:46:40.420543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.055 [2024-10-13 01:46:40.420571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.055 qpair failed and we were unable to recover it. 
00:35:55.055 [2024-10-13 01:46:40.420657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.055 [2024-10-13 01:46:40.420684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.055 qpair failed and we were unable to recover it. 00:35:55.055 [2024-10-13 01:46:40.420786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.055 [2024-10-13 01:46:40.420815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.055 qpair failed and we were unable to recover it. 00:35:55.055 [2024-10-13 01:46:40.420902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.055 [2024-10-13 01:46:40.420930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.055 qpair failed and we were unable to recover it. 00:35:55.055 [2024-10-13 01:46:40.421062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.055 [2024-10-13 01:46:40.421102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.055 qpair failed and we were unable to recover it. 00:35:55.055 [2024-10-13 01:46:40.421201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.055 [2024-10-13 01:46:40.421228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.055 qpair failed and we were unable to recover it. 00:35:55.055 [2024-10-13 01:46:40.421320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.055 [2024-10-13 01:46:40.421349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.055 qpair failed and we were unable to recover it. 00:35:55.055 [2024-10-13 01:46:40.421431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.055 [2024-10-13 01:46:40.421469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.055 qpair failed and we were unable to recover it. 00:35:55.055 [2024-10-13 01:46:40.421562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.055 [2024-10-13 01:46:40.421588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.055 qpair failed and we were unable to recover it. 00:35:55.055 [2024-10-13 01:46:40.421672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.055 [2024-10-13 01:46:40.421700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.055 qpair failed and we were unable to recover it. 00:35:55.055 [2024-10-13 01:46:40.421792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.055 [2024-10-13 01:46:40.421824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.055 qpair failed and we were unable to recover it. 
00:35:55.055 [2024-10-13 01:46:40.421905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.055 [2024-10-13 01:46:40.421931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.055 qpair failed and we were unable to recover it. 00:35:55.055 [2024-10-13 01:46:40.422012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.055 [2024-10-13 01:46:40.422037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.055 qpair failed and we were unable to recover it. 00:35:55.055 [2024-10-13 01:46:40.422117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.055 [2024-10-13 01:46:40.422142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.055 qpair failed and we were unable to recover it. 00:35:55.055 [2024-10-13 01:46:40.422234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.055 [2024-10-13 01:46:40.422266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.055 qpair failed and we were unable to recover it. 00:35:55.055 [2024-10-13 01:46:40.422353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.055 [2024-10-13 01:46:40.422381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.055 qpair failed and we were unable to recover it. 00:35:55.055 [2024-10-13 01:46:40.422468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.055 [2024-10-13 01:46:40.422508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.055 qpair failed and we were unable to recover it. 00:35:55.055 [2024-10-13 01:46:40.422607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.055 [2024-10-13 01:46:40.422635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.055 qpair failed and we were unable to recover it. 00:35:55.055 [2024-10-13 01:46:40.422714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.055 [2024-10-13 01:46:40.422741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.055 qpair failed and we were unable to recover it. 00:35:55.055 [2024-10-13 01:46:40.422876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.055 [2024-10-13 01:46:40.422903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.055 qpair failed and we were unable to recover it. 00:35:55.055 [2024-10-13 01:46:40.422986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.055 [2024-10-13 01:46:40.423013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.055 qpair failed and we were unable to recover it. 
00:35:55.055 [2024-10-13 01:46:40.423094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.055 [2024-10-13 01:46:40.423121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.055 qpair failed and we were unable to recover it. 00:35:55.055 [2024-10-13 01:46:40.423204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.056 [2024-10-13 01:46:40.423233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.056 qpair failed and we were unable to recover it. 00:35:55.056 [2024-10-13 01:46:40.423324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.056 [2024-10-13 01:46:40.423350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.056 qpair failed and we were unable to recover it. 00:35:55.056 [2024-10-13 01:46:40.423444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.056 [2024-10-13 01:46:40.423480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.056 qpair failed and we were unable to recover it. 00:35:55.056 [2024-10-13 01:46:40.423569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.056 [2024-10-13 01:46:40.423596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.056 qpair failed and we were unable to recover it. 00:35:55.056 [2024-10-13 01:46:40.423690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.056 [2024-10-13 01:46:40.423717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.056 qpair failed and we were unable to recover it. 00:35:55.056 [2024-10-13 01:46:40.423799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.056 [2024-10-13 01:46:40.423825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.056 qpair failed and we were unable to recover it. 00:35:55.056 [2024-10-13 01:46:40.423912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.056 [2024-10-13 01:46:40.423941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.056 qpair failed and we were unable to recover it. 00:35:55.056 [2024-10-13 01:46:40.424022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.056 [2024-10-13 01:46:40.424050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.056 qpair failed and we were unable to recover it. 00:35:55.056 [2024-10-13 01:46:40.424131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.056 [2024-10-13 01:46:40.424158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.056 qpair failed and we were unable to recover it. 
00:35:55.056 [2024-10-13 01:46:40.424278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.056 [2024-10-13 01:46:40.424305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.056 qpair failed and we were unable to recover it. 00:35:55.056 [2024-10-13 01:46:40.424387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.056 [2024-10-13 01:46:40.424415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.056 qpair failed and we were unable to recover it. 00:35:55.056 [2024-10-13 01:46:40.424510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.056 [2024-10-13 01:46:40.424537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.056 qpair failed and we were unable to recover it. 00:35:55.056 [2024-10-13 01:46:40.424620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.056 [2024-10-13 01:46:40.424647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.056 qpair failed and we were unable to recover it. 00:35:55.056 [2024-10-13 01:46:40.424739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.056 [2024-10-13 01:46:40.424777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.056 qpair failed and we were unable to recover it. 00:35:55.056 [2024-10-13 01:46:40.424866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.056 [2024-10-13 01:46:40.424892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.056 qpair failed and we were unable to recover it. 00:35:55.056 [2024-10-13 01:46:40.424978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.056 [2024-10-13 01:46:40.425009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.056 qpair failed and we were unable to recover it. 00:35:55.056 [2024-10-13 01:46:40.425097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.056 [2024-10-13 01:46:40.425122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.056 qpair failed and we were unable to recover it. 00:35:55.056 [2024-10-13 01:46:40.425202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.056 [2024-10-13 01:46:40.425228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.056 qpair failed and we were unable to recover it. 00:35:55.056 [2024-10-13 01:46:40.425312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.056 [2024-10-13 01:46:40.425337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.056 qpair failed and we were unable to recover it. 
00:35:55.056 [2024-10-13 01:46:40.425420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.056 [2024-10-13 01:46:40.425448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.056 qpair failed and we were unable to recover it. 00:35:55.056 [2024-10-13 01:46:40.425553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.056 [2024-10-13 01:46:40.425583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.056 qpair failed and we were unable to recover it. 00:35:55.056 [2024-10-13 01:46:40.425670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.056 [2024-10-13 01:46:40.425698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.056 qpair failed and we were unable to recover it. 00:35:55.056 [2024-10-13 01:46:40.425794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.056 [2024-10-13 01:46:40.425821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.056 qpair failed and we were unable to recover it. 00:35:55.056 [2024-10-13 01:46:40.425940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.056 [2024-10-13 01:46:40.425967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.056 qpair failed and we were unable to recover it. 00:35:55.056 [2024-10-13 01:46:40.426050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.056 [2024-10-13 01:46:40.426076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.056 qpair failed and we were unable to recover it. 00:35:55.056 [2024-10-13 01:46:40.426159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.056 [2024-10-13 01:46:40.426186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.056 qpair failed and we were unable to recover it. 00:35:55.056 [2024-10-13 01:46:40.426270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.056 [2024-10-13 01:46:40.426297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.056 qpair failed and we were unable to recover it. 00:35:55.056 [2024-10-13 01:46:40.426409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.056 [2024-10-13 01:46:40.426435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.056 qpair failed and we were unable to recover it. 00:35:55.056 [2024-10-13 01:46:40.426538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.056 [2024-10-13 01:46:40.426566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.056 qpair failed and we were unable to recover it. 
00:35:55.056 [2024-10-13 01:46:40.426654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.056 [2024-10-13 01:46:40.426679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.056 qpair failed and we were unable to recover it. 00:35:55.056 [2024-10-13 01:46:40.426784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.056 [2024-10-13 01:46:40.426811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.056 qpair failed and we were unable to recover it. 00:35:55.056 [2024-10-13 01:46:40.426931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.056 [2024-10-13 01:46:40.426957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.056 qpair failed and we were unable to recover it. 00:35:55.056 [2024-10-13 01:46:40.427068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.056 [2024-10-13 01:46:40.427094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.056 qpair failed and we were unable to recover it. 00:35:55.056 [2024-10-13 01:46:40.427183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.056 [2024-10-13 01:46:40.427211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.056 qpair failed and we were unable to recover it. 00:35:55.056 [2024-10-13 01:46:40.427298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.056 [2024-10-13 01:46:40.427326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.056 qpair failed and we were unable to recover it. 00:35:55.056 [2024-10-13 01:46:40.427420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.056 [2024-10-13 01:46:40.427446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.056 qpair failed and we were unable to recover it. 00:35:55.056 [2024-10-13 01:46:40.427543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.056 [2024-10-13 01:46:40.427569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.057 qpair failed and we were unable to recover it. 00:35:55.057 [2024-10-13 01:46:40.427653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.057 [2024-10-13 01:46:40.427679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.057 qpair failed and we were unable to recover it. 00:35:55.057 [2024-10-13 01:46:40.427775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.057 [2024-10-13 01:46:40.427801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.057 qpair failed and we were unable to recover it. 
00:35:55.057 [2024-10-13 01:46:40.427919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.057 [2024-10-13 01:46:40.427944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.057 qpair failed and we were unable to recover it. 00:35:55.057 [2024-10-13 01:46:40.428023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.057 [2024-10-13 01:46:40.428052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.057 qpair failed and we were unable to recover it. 00:35:55.057 [2024-10-13 01:46:40.428130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.057 [2024-10-13 01:46:40.428157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.057 qpair failed and we were unable to recover it. 00:35:55.057 [2024-10-13 01:46:40.428239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.057 [2024-10-13 01:46:40.428267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.057 qpair failed and we were unable to recover it. 00:35:55.057 [2024-10-13 01:46:40.428359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.057 [2024-10-13 01:46:40.428385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.057 qpair failed and we were unable to recover it. 00:35:55.057 [2024-10-13 01:46:40.428493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.057 [2024-10-13 01:46:40.428521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.057 qpair failed and we were unable to recover it. 00:35:55.057 [2024-10-13 01:46:40.428603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.057 [2024-10-13 01:46:40.428630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.057 qpair failed and we were unable to recover it. 00:35:55.057 [2024-10-13 01:46:40.428721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.057 [2024-10-13 01:46:40.428747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.057 qpair failed and we were unable to recover it. 00:35:55.057 [2024-10-13 01:46:40.428846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.057 [2024-10-13 01:46:40.428872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.057 qpair failed and we were unable to recover it. 00:35:55.057 [2024-10-13 01:46:40.428953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.057 [2024-10-13 01:46:40.428979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.057 qpair failed and we were unable to recover it. 
00:35:55.057 [2024-10-13 01:46:40.429064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.057 [2024-10-13 01:46:40.429091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.057 qpair failed and we were unable to recover it. 00:35:55.057 [2024-10-13 01:46:40.429176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.057 [2024-10-13 01:46:40.429202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.057 qpair failed and we were unable to recover it. 00:35:55.057 [2024-10-13 01:46:40.429293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.057 [2024-10-13 01:46:40.429321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.057 qpair failed and we were unable to recover it. 00:35:55.057 [2024-10-13 01:46:40.429406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.057 [2024-10-13 01:46:40.429433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.057 qpair failed and we were unable to recover it. 00:35:55.057 [2024-10-13 01:46:40.429529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.057 [2024-10-13 01:46:40.429556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.057 qpair failed and we were unable to recover it. 00:35:55.057 [2024-10-13 01:46:40.429639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.057 [2024-10-13 01:46:40.429667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.057 qpair failed and we were unable to recover it. 00:35:55.057 [2024-10-13 01:46:40.429754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.057 [2024-10-13 01:46:40.429787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.057 qpair failed and we were unable to recover it. 00:35:55.057 [2024-10-13 01:46:40.429914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.057 [2024-10-13 01:46:40.429947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.057 qpair failed and we were unable to recover it. 00:35:55.057 [2024-10-13 01:46:40.430031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.057 [2024-10-13 01:46:40.430058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.057 qpair failed and we were unable to recover it. 00:35:55.057 [2024-10-13 01:46:40.430153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.057 [2024-10-13 01:46:40.430181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.057 qpair failed and we were unable to recover it. 
00:35:55.057 [2024-10-13 01:46:40.430272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.057 [2024-10-13 01:46:40.430299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.057 qpair failed and we were unable to recover it. 00:35:55.057 [2024-10-13 01:46:40.430388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.057 [2024-10-13 01:46:40.430413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.057 qpair failed and we were unable to recover it. 00:35:55.057 [2024-10-13 01:46:40.430506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.057 [2024-10-13 01:46:40.430534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.057 qpair failed and we were unable to recover it. 00:35:55.057 [2024-10-13 01:46:40.430613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.057 [2024-10-13 01:46:40.430639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.057 qpair failed and we were unable to recover it. 00:35:55.057 [2024-10-13 01:46:40.430720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.057 [2024-10-13 01:46:40.430746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.057 qpair failed and we were unable to recover it. 00:35:55.057 [2024-10-13 01:46:40.430829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.057 [2024-10-13 01:46:40.430855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.057 qpair failed and we were unable to recover it. 00:35:55.057 [2024-10-13 01:46:40.430939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.057 [2024-10-13 01:46:40.430966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.057 qpair failed and we were unable to recover it. 00:35:55.057 [2024-10-13 01:46:40.431053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.057 [2024-10-13 01:46:40.431082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.057 qpair failed and we were unable to recover it. 00:35:55.057 [2024-10-13 01:46:40.431182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.057 [2024-10-13 01:46:40.431221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.057 qpair failed and we were unable to recover it. 00:35:55.057 [2024-10-13 01:46:40.431558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.057 [2024-10-13 01:46:40.431606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.057 qpair failed and we were unable to recover it. 
00:35:55.057 [2024-10-13 01:46:40.431711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.057 [2024-10-13 01:46:40.431738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.057 qpair failed and we were unable to recover it. 00:35:55.057 [2024-10-13 01:46:40.431839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.057 [2024-10-13 01:46:40.431865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.057 qpair failed and we were unable to recover it. 00:35:55.057 [2024-10-13 01:46:40.431971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.057 [2024-10-13 01:46:40.431998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.057 qpair failed and we were unable to recover it. 00:35:55.057 [2024-10-13 01:46:40.432076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.057 [2024-10-13 01:46:40.432100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.057 qpair failed and we were unable to recover it. 00:35:55.057 [2024-10-13 01:46:40.432178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.057 [2024-10-13 01:46:40.432204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.057 qpair failed and we were unable to recover it. 00:35:55.057 [2024-10-13 01:46:40.432298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.057 [2024-10-13 01:46:40.432325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.057 qpair failed and we were unable to recover it. 00:35:55.057 01:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:55.057 [2024-10-13 01:46:40.432412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.057 [2024-10-13 01:46:40.432440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.057 qpair failed and we were unable to recover it. 00:35:55.057 01:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:35:55.057 [2024-10-13 01:46:40.432550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.058 [2024-10-13 01:46:40.432589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.058 qpair failed and we were unable to recover it. 00:35:55.058 [2024-10-13 01:46:40.432673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.058 [2024-10-13 01:46:40.432703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.058 qpair failed and we were unable to recover it. 
00:35:55.058 01:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:55.058 [2024-10-13 01:46:40.432790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.058 [2024-10-13 01:46:40.432816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.058 qpair failed and we were unable to recover it. 00:35:55.058 [2024-10-13 01:46:40.432893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.058 [2024-10-13 01:46:40.432918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.058 qpair failed and we were unable to recover it. 00:35:55.058 01:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:55.058 [2024-10-13 01:46:40.433028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.058 [2024-10-13 01:46:40.433066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.058 qpair failed and we were unable to recover it. 00:35:55.058 01:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:55.058 [2024-10-13 01:46:40.433166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.058 [2024-10-13 01:46:40.433195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.058 qpair failed and we were unable to recover it. 00:35:55.058 [2024-10-13 01:46:40.433285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.058 [2024-10-13 01:46:40.433311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.058 qpair failed and we were unable to recover it. 00:35:55.058 [2024-10-13 01:46:40.433397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.058 [2024-10-13 01:46:40.433424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.058 qpair failed and we were unable to recover it. 00:35:55.058 [2024-10-13 01:46:40.433542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.058 [2024-10-13 01:46:40.433568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.058 qpair failed and we were unable to recover it. 00:35:55.058 [2024-10-13 01:46:40.433663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.058 [2024-10-13 01:46:40.433688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.058 qpair failed and we were unable to recover it. 00:35:55.058 [2024-10-13 01:46:40.433772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.058 [2024-10-13 01:46:40.433799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.058 qpair failed and we were unable to recover it. 
00:35:55.061 [2024-10-13 01:46:40.450518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.062 [2024-10-13 01:46:40.450546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420
00:35:55.062 qpair failed and we were unable to recover it.
00:35:55.062 [2024-10-13 01:46:40.450668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.062 [2024-10-13 01:46:40.450696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420
00:35:55.062 qpair failed and we were unable to recover it.
00:35:55.062 [2024-10-13 01:46:40.450779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.062 [2024-10-13 01:46:40.450804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420
00:35:55.062 qpair failed and we were unable to recover it.
00:35:55.062 [2024-10-13 01:46:40.450889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.062 [2024-10-13 01:46:40.450914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420
00:35:55.062 qpair failed and we were unable to recover it.
00:35:55.062 [2024-10-13 01:46:40.451005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.062 [2024-10-13 01:46:40.451031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420
00:35:55.062 qpair failed and we were unable to recover it.
00:35:55.062 [2024-10-13 01:46:40.451113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.062 01:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:35:55.062 [2024-10-13 01:46:40.451139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420
00:35:55.062 qpair failed and we were unable to recover it.
00:35:55.062 [2024-10-13 01:46:40.451233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.062 [2024-10-13 01:46:40.451273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420
00:35:55.062 qpair failed and we were unable to recover it.
00:35:55.062 01:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:35:55.062 [2024-10-13 01:46:40.451365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.062 [2024-10-13 01:46:40.451399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420
00:35:55.062 qpair failed and we were unable to recover it.
00:35:55.062 [2024-10-13 01:46:40.451495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.062 [2024-10-13 01:46:40.451523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420
00:35:55.062 qpair failed and we were unable to recover it.
00:35:55.062 01:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:55.062 [2024-10-13 01:46:40.451658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.062 [2024-10-13 01:46:40.451686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420
00:35:55.062 qpair failed and we were unable to recover it.
00:35:55.062 01:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:55.062 [2024-10-13 01:46:40.451775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.062 [2024-10-13 01:46:40.451801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420
00:35:55.062 qpair failed and we were unable to recover it.
00:35:55.062 [2024-10-13 01:46:40.451901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.062 [2024-10-13 01:46:40.451928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420
00:35:55.062 qpair failed and we were unable to recover it.
00:35:55.062 [2024-10-13 01:46:40.452018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.062 [2024-10-13 01:46:40.452047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420
00:35:55.062 qpair failed and we were unable to recover it.
00:35:55.062 [2024-10-13 01:46:40.452134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.062 [2024-10-13 01:46:40.452165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420
00:35:55.062 qpair failed and we were unable to recover it.
00:35:55.062 [2024-10-13 01:46:40.452254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.062 [2024-10-13 01:46:40.452281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420
00:35:55.062 qpair failed and we were unable to recover it.
00:35:55.062 [2024-10-13 01:46:40.452366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.062 [2024-10-13 01:46:40.452394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420
00:35:55.062 qpair failed and we were unable to recover it.
00:35:55.062 [2024-10-13 01:46:40.452485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.062 [2024-10-13 01:46:40.452513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420
00:35:55.062 qpair failed and we were unable to recover it.
00:35:55.062 [2024-10-13 01:46:40.452596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.062 [2024-10-13 01:46:40.452622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420
00:35:55.062 qpair failed and we were unable to recover it.
00:35:55.063 [2024-10-13 01:46:40.456273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.063 [2024-10-13 01:46:40.456300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.063 qpair failed and we were unable to recover it. 00:35:55.063 [2024-10-13 01:46:40.456391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.063 [2024-10-13 01:46:40.456419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.063 qpair failed and we were unable to recover it. 00:35:55.063 [2024-10-13 01:46:40.456511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.063 [2024-10-13 01:46:40.456539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.063 qpair failed and we were unable to recover it. 00:35:55.063 [2024-10-13 01:46:40.456639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.063 [2024-10-13 01:46:40.456666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.063 qpair failed and we were unable to recover it. 00:35:55.063 [2024-10-13 01:46:40.456802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.063 [2024-10-13 01:46:40.456839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.063 qpair failed and we were unable to recover it. 00:35:55.063 [2024-10-13 01:46:40.456956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.063 [2024-10-13 01:46:40.456982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.063 qpair failed and we were unable to recover it. 00:35:55.063 [2024-10-13 01:46:40.457066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.063 [2024-10-13 01:46:40.457093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.063 qpair failed and we were unable to recover it. 00:35:55.063 [2024-10-13 01:46:40.457176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.063 [2024-10-13 01:46:40.457203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.063 qpair failed and we were unable to recover it. 00:35:55.063 [2024-10-13 01:46:40.457310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.063 [2024-10-13 01:46:40.457337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.063 qpair failed and we were unable to recover it. 00:35:55.063 [2024-10-13 01:46:40.457433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.063 [2024-10-13 01:46:40.457459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.063 qpair failed and we were unable to recover it. 
00:35:55.063 [2024-10-13 01:46:40.457555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.063 [2024-10-13 01:46:40.457582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.063 qpair failed and we were unable to recover it. 00:35:55.063 [2024-10-13 01:46:40.457667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.063 [2024-10-13 01:46:40.457694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.063 qpair failed and we were unable to recover it. 00:35:55.063 [2024-10-13 01:46:40.457806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.063 [2024-10-13 01:46:40.457835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.063 qpair failed and we were unable to recover it. 00:35:55.063 [2024-10-13 01:46:40.457973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.063 [2024-10-13 01:46:40.457999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.063 qpair failed and we were unable to recover it. 00:35:55.063 [2024-10-13 01:46:40.458090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.063 [2024-10-13 01:46:40.458117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.063 qpair failed and we were unable to recover it. 00:35:55.063 [2024-10-13 01:46:40.458204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.063 [2024-10-13 01:46:40.458231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.063 qpair failed and we were unable to recover it. 00:35:55.063 [2024-10-13 01:46:40.458319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.063 [2024-10-13 01:46:40.458348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.063 qpair failed and we were unable to recover it. 00:35:55.063 [2024-10-13 01:46:40.458448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.063 [2024-10-13 01:46:40.458501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.063 qpair failed and we were unable to recover it. 00:35:55.063 [2024-10-13 01:46:40.458596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.063 [2024-10-13 01:46:40.458623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.063 qpair failed and we were unable to recover it. 00:35:55.063 [2024-10-13 01:46:40.458704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.063 [2024-10-13 01:46:40.458731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.063 qpair failed and we were unable to recover it. 
00:35:55.063 [2024-10-13 01:46:40.458815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.063 [2024-10-13 01:46:40.458841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.063 qpair failed and we were unable to recover it. 00:35:55.063 [2024-10-13 01:46:40.458927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.063 [2024-10-13 01:46:40.458953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.063 qpair failed and we were unable to recover it. 00:35:55.063 [2024-10-13 01:46:40.459038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.063 [2024-10-13 01:46:40.459067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.063 qpair failed and we were unable to recover it. 00:35:55.063 [2024-10-13 01:46:40.459151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.063 [2024-10-13 01:46:40.459180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.063 qpair failed and we were unable to recover it. 00:35:55.063 [2024-10-13 01:46:40.459271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.063 [2024-10-13 01:46:40.459297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.063 qpair failed and we were unable to recover it. 00:35:55.063 [2024-10-13 01:46:40.459414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.063 [2024-10-13 01:46:40.459441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.063 qpair failed and we were unable to recover it. 00:35:55.063 [2024-10-13 01:46:40.459543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.063 [2024-10-13 01:46:40.459570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.063 qpair failed and we were unable to recover it. 00:35:55.063 [2024-10-13 01:46:40.459655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.063 [2024-10-13 01:46:40.459682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.063 qpair failed and we were unable to recover it. 00:35:55.063 [2024-10-13 01:46:40.459776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.063 [2024-10-13 01:46:40.459803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.063 qpair failed and we were unable to recover it. 00:35:55.063 [2024-10-13 01:46:40.459926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.063 [2024-10-13 01:46:40.459953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.064 qpair failed and we were unable to recover it. 
00:35:55.064 [2024-10-13 01:46:40.460037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.064 [2024-10-13 01:46:40.460064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.064 qpair failed and we were unable to recover it. 00:35:55.064 [2024-10-13 01:46:40.460155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.064 [2024-10-13 01:46:40.460181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.064 qpair failed and we were unable to recover it. 00:35:55.064 [2024-10-13 01:46:40.460271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.064 [2024-10-13 01:46:40.460300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.064 qpair failed and we were unable to recover it. 00:35:55.064 [2024-10-13 01:46:40.460381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.064 [2024-10-13 01:46:40.460409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.064 qpair failed and we were unable to recover it. 00:35:55.064 [2024-10-13 01:46:40.460503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.064 [2024-10-13 01:46:40.460531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.064 qpair failed and we were unable to recover it. 00:35:55.064 [2024-10-13 01:46:40.460612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.064 [2024-10-13 01:46:40.460639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.064 qpair failed and we were unable to recover it. 00:35:55.064 [2024-10-13 01:46:40.460727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.064 [2024-10-13 01:46:40.460754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.064 qpair failed and we were unable to recover it. 00:35:55.064 [2024-10-13 01:46:40.460839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.064 [2024-10-13 01:46:40.460872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.064 qpair failed and we were unable to recover it. 00:35:55.064 [2024-10-13 01:46:40.460958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.064 [2024-10-13 01:46:40.460985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.064 qpair failed and we were unable to recover it. 00:35:55.064 [2024-10-13 01:46:40.461076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.064 [2024-10-13 01:46:40.461102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.064 qpair failed and we were unable to recover it. 
00:35:55.064 [2024-10-13 01:46:40.461190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.064 [2024-10-13 01:46:40.461217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.064 qpair failed and we were unable to recover it. 00:35:55.064 [2024-10-13 01:46:40.461300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.064 [2024-10-13 01:46:40.461327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.064 qpair failed and we were unable to recover it. 00:35:55.064 [2024-10-13 01:46:40.461431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.064 [2024-10-13 01:46:40.461489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.064 qpair failed and we were unable to recover it. 00:35:55.064 [2024-10-13 01:46:40.461585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.064 [2024-10-13 01:46:40.461613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.064 qpair failed and we were unable to recover it. 00:35:55.064 [2024-10-13 01:46:40.461707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.064 [2024-10-13 01:46:40.461736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.064 qpair failed and we were unable to recover it. 00:35:55.064 [2024-10-13 01:46:40.461843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.064 [2024-10-13 01:46:40.461869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.064 qpair failed and we were unable to recover it. 00:35:55.064 [2024-10-13 01:46:40.461954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.064 [2024-10-13 01:46:40.461981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.064 qpair failed and we were unable to recover it. 00:35:55.064 [2024-10-13 01:46:40.462067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.064 [2024-10-13 01:46:40.462093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.064 qpair failed and we were unable to recover it. 00:35:55.064 [2024-10-13 01:46:40.462169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.064 [2024-10-13 01:46:40.462194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.064 qpair failed and we were unable to recover it. 00:35:55.064 [2024-10-13 01:46:40.462303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.064 [2024-10-13 01:46:40.462329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.064 qpair failed and we were unable to recover it. 
00:35:55.064 [2024-10-13 01:46:40.462413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.064 [2024-10-13 01:46:40.462438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.064 qpair failed and we were unable to recover it. 00:35:55.064 [2024-10-13 01:46:40.462531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.064 [2024-10-13 01:46:40.462561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.064 qpair failed and we were unable to recover it. 00:35:55.064 [2024-10-13 01:46:40.462642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.064 [2024-10-13 01:46:40.462669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.064 qpair failed and we were unable to recover it. 00:35:55.064 [2024-10-13 01:46:40.462755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.064 [2024-10-13 01:46:40.462792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.064 qpair failed and we were unable to recover it. 00:35:55.064 [2024-10-13 01:46:40.462874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.064 [2024-10-13 01:46:40.462902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.064 qpair failed and we were unable to recover it. 00:35:55.064 [2024-10-13 01:46:40.462998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.064 [2024-10-13 01:46:40.463030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.064 qpair failed and we were unable to recover it. 00:35:55.064 [2024-10-13 01:46:40.463127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.064 [2024-10-13 01:46:40.463165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.064 qpair failed and we were unable to recover it. 00:35:55.064 [2024-10-13 01:46:40.463264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.064 [2024-10-13 01:46:40.463297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.064 qpair failed and we were unable to recover it. 00:35:55.064 [2024-10-13 01:46:40.463383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.064 [2024-10-13 01:46:40.463409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.064 qpair failed and we were unable to recover it. 00:35:55.064 [2024-10-13 01:46:40.463504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.064 [2024-10-13 01:46:40.463531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.064 qpair failed and we were unable to recover it. 
00:35:55.064 [2024-10-13 01:46:40.463614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.064 [2024-10-13 01:46:40.463640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.064 qpair failed and we were unable to recover it. 00:35:55.064 [2024-10-13 01:46:40.463727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.064 [2024-10-13 01:46:40.463753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.064 qpair failed and we were unable to recover it. 00:35:55.064 [2024-10-13 01:46:40.463853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.064 [2024-10-13 01:46:40.463879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.064 qpair failed and we were unable to recover it. 00:35:55.064 [2024-10-13 01:46:40.463965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.064 [2024-10-13 01:46:40.463992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.064 qpair failed and we were unable to recover it. 00:35:55.064 [2024-10-13 01:46:40.464108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.064 [2024-10-13 01:46:40.464133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.064 qpair failed and we were unable to recover it. 00:35:55.064 [2024-10-13 01:46:40.464213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.064 [2024-10-13 01:46:40.464239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.064 qpair failed and we were unable to recover it. 00:35:55.064 [2024-10-13 01:46:40.464321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.064 [2024-10-13 01:46:40.464346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.064 qpair failed and we were unable to recover it. 00:35:55.064 [2024-10-13 01:46:40.464427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.064 [2024-10-13 01:46:40.464455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.064 qpair failed and we were unable to recover it. 00:35:55.064 [2024-10-13 01:46:40.464553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.064 [2024-10-13 01:46:40.464579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.064 qpair failed and we were unable to recover it. 00:35:55.064 [2024-10-13 01:46:40.464659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.065 [2024-10-13 01:46:40.464686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.065 qpair failed and we were unable to recover it. 
00:35:55.065 [2024-10-13 01:46:40.464768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.065 [2024-10-13 01:46:40.464795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.065 qpair failed and we were unable to recover it. 00:35:55.065 [2024-10-13 01:46:40.464907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.065 [2024-10-13 01:46:40.464946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.065 qpair failed and we were unable to recover it. 00:35:55.065 [2024-10-13 01:46:40.465041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.065 [2024-10-13 01:46:40.465069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.065 qpair failed and we were unable to recover it. 00:35:55.065 [2024-10-13 01:46:40.465158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.065 [2024-10-13 01:46:40.465185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.065 qpair failed and we were unable to recover it. 00:35:55.065 [2024-10-13 01:46:40.465269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.065 [2024-10-13 01:46:40.465296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.065 qpair failed and we were unable to recover it. 00:35:55.065 [2024-10-13 01:46:40.465386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.065 [2024-10-13 01:46:40.465412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.065 qpair failed and we were unable to recover it. 00:35:55.065 [2024-10-13 01:46:40.465513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.065 [2024-10-13 01:46:40.465541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.065 qpair failed and we were unable to recover it. 00:35:55.065 [2024-10-13 01:46:40.465626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.065 [2024-10-13 01:46:40.465652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.065 qpair failed and we were unable to recover it. 00:35:55.065 [2024-10-13 01:46:40.465734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.065 [2024-10-13 01:46:40.465761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.065 qpair failed and we were unable to recover it. 00:35:55.065 [2024-10-13 01:46:40.465887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.065 [2024-10-13 01:46:40.465915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.065 qpair failed and we were unable to recover it. 
00:35:55.065 [2024-10-13 01:46:40.466004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.065 [2024-10-13 01:46:40.466029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.065 qpair failed and we were unable to recover it. 00:35:55.065 [2024-10-13 01:46:40.466112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.065 [2024-10-13 01:46:40.466138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.065 qpair failed and we were unable to recover it. 00:35:55.065 [2024-10-13 01:46:40.466222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.065 [2024-10-13 01:46:40.466247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.065 qpair failed and we were unable to recover it. 00:35:55.065 [2024-10-13 01:46:40.466321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.065 [2024-10-13 01:46:40.466348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.065 qpair failed and we were unable to recover it. 00:35:55.065 [2024-10-13 01:46:40.466454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.065 [2024-10-13 01:46:40.466508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.065 qpair failed and we were unable to recover it. 00:35:55.065 [2024-10-13 01:46:40.466592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.065 [2024-10-13 01:46:40.466618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.065 qpair failed and we were unable to recover it. 00:35:55.065 [2024-10-13 01:46:40.466713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.065 [2024-10-13 01:46:40.466743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.065 qpair failed and we were unable to recover it. 00:35:55.065 [2024-10-13 01:46:40.466843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.065 [2024-10-13 01:46:40.466871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.065 qpair failed and we were unable to recover it. 00:35:55.065 [2024-10-13 01:46:40.466958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.065 [2024-10-13 01:46:40.466986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.065 qpair failed and we were unable to recover it. 00:35:55.065 [2024-10-13 01:46:40.467096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.065 [2024-10-13 01:46:40.467122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.065 qpair failed and we were unable to recover it. 
00:35:55.065 [2024-10-13 01:46:40.467207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.065 [2024-10-13 01:46:40.467234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.065 qpair failed and we were unable to recover it. 00:35:55.065 [2024-10-13 01:46:40.467315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.065 [2024-10-13 01:46:40.467340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.065 qpair failed and we were unable to recover it. 00:35:55.065 [2024-10-13 01:46:40.467423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.065 [2024-10-13 01:46:40.467449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.065 qpair failed and we were unable to recover it. 00:35:55.065 [2024-10-13 01:46:40.467546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.065 [2024-10-13 01:46:40.467572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.065 qpair failed and we were unable to recover it. 00:35:55.065 [2024-10-13 01:46:40.467653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.065 [2024-10-13 01:46:40.467681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.065 qpair failed and we were unable to recover it. 00:35:55.065 [2024-10-13 01:46:40.467767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.065 [2024-10-13 01:46:40.467793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.065 qpair failed and we were unable to recover it. 00:35:55.065 [2024-10-13 01:46:40.467871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.065 [2024-10-13 01:46:40.467896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.065 qpair failed and we were unable to recover it. 00:35:55.065 [2024-10-13 01:46:40.468007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.065 [2024-10-13 01:46:40.468036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.065 qpair failed and we were unable to recover it. 00:35:55.065 [2024-10-13 01:46:40.468150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.065 [2024-10-13 01:46:40.468178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.065 qpair failed and we were unable to recover it. 00:35:55.065 [2024-10-13 01:46:40.468266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.065 [2024-10-13 01:46:40.468292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.065 qpair failed and we were unable to recover it. 
00:35:55.065 [2024-10-13 01:46:40.468370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.065 [2024-10-13 01:46:40.468397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.065 qpair failed and we were unable to recover it. 00:35:55.065 [2024-10-13 01:46:40.468488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.065 [2024-10-13 01:46:40.468514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.065 qpair failed and we were unable to recover it. 00:35:55.065 [2024-10-13 01:46:40.468604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.065 [2024-10-13 01:46:40.468630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.065 qpair failed and we were unable to recover it. 00:35:55.065 [2024-10-13 01:46:40.468712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.065 [2024-10-13 01:46:40.468740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.065 qpair failed and we were unable to recover it. 00:35:55.065 [2024-10-13 01:46:40.468821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.065 [2024-10-13 01:46:40.468856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.065 qpair failed and we were unable to recover it. 00:35:55.065 [2024-10-13 01:46:40.468934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.065 [2024-10-13 01:46:40.468960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.065 qpair failed and we were unable to recover it. 00:35:55.065 [2024-10-13 01:46:40.469055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.066 [2024-10-13 01:46:40.469083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.066 qpair failed and we were unable to recover it. 00:35:55.066 [2024-10-13 01:46:40.469171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.066 [2024-10-13 01:46:40.469199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.066 qpair failed and we were unable to recover it. 00:35:55.066 [2024-10-13 01:46:40.469293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.066 [2024-10-13 01:46:40.469320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.066 qpair failed and we were unable to recover it. 00:35:55.066 [2024-10-13 01:46:40.469400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.066 [2024-10-13 01:46:40.469427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.066 qpair failed and we were unable to recover it. 
00:35:55.066 [2024-10-13 01:46:40.469531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.066 [2024-10-13 01:46:40.469559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.066 qpair failed and we were unable to recover it. 00:35:55.066 [2024-10-13 01:46:40.469649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.066 [2024-10-13 01:46:40.469677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.066 qpair failed and we were unable to recover it. 00:35:55.066 [2024-10-13 01:46:40.469764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.066 [2024-10-13 01:46:40.469793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.066 qpair failed and we were unable to recover it. 00:35:55.066 [2024-10-13 01:46:40.469872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.066 [2024-10-13 01:46:40.469898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.066 qpair failed and we were unable to recover it. 00:35:55.066 [2024-10-13 01:46:40.470008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.066 [2024-10-13 01:46:40.470034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.066 qpair failed and we were unable to recover it. 00:35:55.066 [2024-10-13 01:46:40.470129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.066 [2024-10-13 01:46:40.470157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.066 qpair failed and we were unable to recover it. 00:35:55.066 [2024-10-13 01:46:40.470271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.066 [2024-10-13 01:46:40.470297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.066 qpair failed and we were unable to recover it. 00:35:55.066 [2024-10-13 01:46:40.470377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.066 [2024-10-13 01:46:40.470403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.066 qpair failed and we were unable to recover it. 00:35:55.066 [2024-10-13 01:46:40.470515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.066 [2024-10-13 01:46:40.470543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.066 qpair failed and we were unable to recover it. 00:35:55.066 [2024-10-13 01:46:40.470626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.066 [2024-10-13 01:46:40.470653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.066 qpair failed and we were unable to recover it. 
00:35:55.066 [2024-10-13 01:46:40.470738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.066 [2024-10-13 01:46:40.470773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.066 qpair failed and we were unable to recover it. 00:35:55.066 [2024-10-13 01:46:40.470893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.066 [2024-10-13 01:46:40.470921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.066 qpair failed and we were unable to recover it. 00:35:55.066 [2024-10-13 01:46:40.471018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.066 [2024-10-13 01:46:40.471057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.066 qpair failed and we were unable to recover it. 00:35:55.066 [2024-10-13 01:46:40.471149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.066 [2024-10-13 01:46:40.471175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.066 qpair failed and we were unable to recover it. 00:35:55.066 [2024-10-13 01:46:40.471266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.066 [2024-10-13 01:46:40.471293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.066 qpair failed and we were unable to recover it. 00:35:55.066 [2024-10-13 01:46:40.471380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.066 [2024-10-13 01:46:40.471406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.066 qpair failed and we were unable to recover it. 00:35:55.066 [2024-10-13 01:46:40.471501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.066 [2024-10-13 01:46:40.471528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.066 qpair failed and we were unable to recover it. 00:35:55.066 [2024-10-13 01:46:40.471608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.066 [2024-10-13 01:46:40.471633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.066 qpair failed and we were unable to recover it. 00:35:55.066 [2024-10-13 01:46:40.471716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.066 [2024-10-13 01:46:40.471741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.066 qpair failed and we were unable to recover it. 00:35:55.066 [2024-10-13 01:46:40.471838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.066 [2024-10-13 01:46:40.471864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.066 qpair failed and we were unable to recover it. 
00:35:55.066 [2024-10-13 01:46:40.471945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.066 [2024-10-13 01:46:40.471973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.066 qpair failed and we were unable to recover it. 00:35:55.066 [2024-10-13 01:46:40.472057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.066 [2024-10-13 01:46:40.472086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.066 qpair failed and we were unable to recover it. 00:35:55.066 [2024-10-13 01:46:40.472176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.066 [2024-10-13 01:46:40.472203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.066 qpair failed and we were unable to recover it. 00:35:55.066 [2024-10-13 01:46:40.472324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.066 [2024-10-13 01:46:40.472351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.066 qpair failed and we were unable to recover it. 00:35:55.066 [2024-10-13 01:46:40.472434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.066 [2024-10-13 01:46:40.472460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.066 qpair failed and we were unable to recover it. 00:35:55.066 [2024-10-13 01:46:40.472553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.066 [2024-10-13 01:46:40.472580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.066 qpair failed and we were unable to recover it. 00:35:55.066 [2024-10-13 01:46:40.472660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.066 [2024-10-13 01:46:40.472687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.066 qpair failed and we were unable to recover it. 00:35:55.066 [2024-10-13 01:46:40.472776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.066 [2024-10-13 01:46:40.472803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.066 qpair failed and we were unable to recover it. 00:35:55.066 [2024-10-13 01:46:40.472895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.066 [2024-10-13 01:46:40.472922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.066 qpair failed and we were unable to recover it. 00:35:55.066 [2024-10-13 01:46:40.473000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.066 [2024-10-13 01:46:40.473026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.066 qpair failed and we were unable to recover it. 
00:35:55.066 [2024-10-13 01:46:40.473105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.066 [2024-10-13 01:46:40.473132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.066 qpair failed and we were unable to recover it. 00:35:55.066 [2024-10-13 01:46:40.473221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.066 [2024-10-13 01:46:40.473261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.066 qpair failed and we were unable to recover it. 00:35:55.066 [2024-10-13 01:46:40.473382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.066 [2024-10-13 01:46:40.473408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.066 qpair failed and we were unable to recover it. 00:35:55.066 [2024-10-13 01:46:40.473528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.066 [2024-10-13 01:46:40.473554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.066 qpair failed and we were unable to recover it. 00:35:55.066 [2024-10-13 01:46:40.473630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.066 [2024-10-13 01:46:40.473656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.066 qpair failed and we were unable to recover it. 00:35:55.066 [2024-10-13 01:46:40.473740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.066 [2024-10-13 01:46:40.473775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.066 qpair failed and we were unable to recover it. 00:35:55.066 [2024-10-13 01:46:40.473858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.066 [2024-10-13 01:46:40.473884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.066 qpair failed and we were unable to recover it. 00:35:55.066 [2024-10-13 01:46:40.473969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.066 [2024-10-13 01:46:40.473996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.066 qpair failed and we were unable to recover it. 00:35:55.066 [2024-10-13 01:46:40.474104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.066 [2024-10-13 01:46:40.474129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.066 qpair failed and we were unable to recover it. 00:35:55.067 [2024-10-13 01:46:40.474213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.067 [2024-10-13 01:46:40.474239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.067 qpair failed and we were unable to recover it. 
00:35:55.067 [2024-10-13 01:46:40.474329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.067 [2024-10-13 01:46:40.474357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.067 qpair failed and we were unable to recover it. 00:35:55.067 [2024-10-13 01:46:40.474485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.067 [2024-10-13 01:46:40.474520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.067 qpair failed and we were unable to recover it. 00:35:55.067 [2024-10-13 01:46:40.474606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.067 [2024-10-13 01:46:40.474632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.067 qpair failed and we were unable to recover it. 00:35:55.067 [2024-10-13 01:46:40.474714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.067 [2024-10-13 01:46:40.474741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.067 qpair failed and we were unable to recover it. 00:35:55.067 [2024-10-13 01:46:40.474863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.067 [2024-10-13 01:46:40.474890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.067 qpair failed and we were unable to recover it. 00:35:55.067 [2024-10-13 01:46:40.474991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.067 [2024-10-13 01:46:40.475016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.067 qpair failed and we were unable to recover it. 00:35:55.067 [2024-10-13 01:46:40.475098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.067 [2024-10-13 01:46:40.475125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.067 qpair failed and we were unable to recover it. 00:35:55.067 [2024-10-13 01:46:40.475213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.067 [2024-10-13 01:46:40.475239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.067 qpair failed and we were unable to recover it. 00:35:55.067 [2024-10-13 01:46:40.475322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.067 [2024-10-13 01:46:40.475350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.067 qpair failed and we were unable to recover it. 00:35:55.067 [2024-10-13 01:46:40.475468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.067 [2024-10-13 01:46:40.475501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.067 qpair failed and we were unable to recover it. 
00:35:55.067 [2024-10-13 01:46:40.475593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.067 [2024-10-13 01:46:40.475619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.067 qpair failed and we were unable to recover it. 00:35:55.067 [2024-10-13 01:46:40.475707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.067 [2024-10-13 01:46:40.475732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.067 qpair failed and we were unable to recover it. 00:35:55.067 [2024-10-13 01:46:40.475832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.067 [2024-10-13 01:46:40.475857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.067 qpair failed and we were unable to recover it. 00:35:55.067 [2024-10-13 01:46:40.475938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.067 [2024-10-13 01:46:40.475964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.067 qpair failed and we were unable to recover it. 00:35:55.067 [2024-10-13 01:46:40.476043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.067 [2024-10-13 01:46:40.476069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.067 qpair failed and we were unable to recover it. 00:35:55.067 [2024-10-13 01:46:40.476181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.067 [2024-10-13 01:46:40.476207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.067 qpair failed and we were unable to recover it. 00:35:55.067 [2024-10-13 01:46:40.476303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.067 [2024-10-13 01:46:40.476338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.067 qpair failed and we were unable to recover it. 00:35:55.067 [2024-10-13 01:46:40.476434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.067 [2024-10-13 01:46:40.476487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.067 qpair failed and we were unable to recover it. 00:35:55.067 [2024-10-13 01:46:40.476613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.067 [2024-10-13 01:46:40.476642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.067 qpair failed and we were unable to recover it. 00:35:55.067 [2024-10-13 01:46:40.476740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.067 [2024-10-13 01:46:40.476774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.067 qpair failed and we were unable to recover it. 
00:35:55.067 [2024-10-13 01:46:40.476857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.067 [2024-10-13 01:46:40.476883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.067 qpair failed and we were unable to recover it. 00:35:55.067 [2024-10-13 01:46:40.476962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.067 [2024-10-13 01:46:40.476987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.067 qpair failed and we were unable to recover it. 00:35:55.067 [2024-10-13 01:46:40.477064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.067 [2024-10-13 01:46:40.477091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.067 qpair failed and we were unable to recover it. 00:35:55.067 [2024-10-13 01:46:40.477179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.067 [2024-10-13 01:46:40.477206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.067 qpair failed and we were unable to recover it. 00:35:55.067 [2024-10-13 01:46:40.477309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.067 [2024-10-13 01:46:40.477350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.067 qpair failed and we were unable to recover it. 00:35:55.067 [2024-10-13 01:46:40.477443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.067 [2024-10-13 01:46:40.477488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.067 qpair failed and we were unable to recover it. 00:35:55.067 [2024-10-13 01:46:40.477577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.067 [2024-10-13 01:46:40.477604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.067 qpair failed and we were unable to recover it. 00:35:55.067 [2024-10-13 01:46:40.477696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.067 [2024-10-13 01:46:40.477721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.067 qpair failed and we were unable to recover it. 00:35:55.067 [2024-10-13 01:46:40.477811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.067 [2024-10-13 01:46:40.477838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.067 qpair failed and we were unable to recover it. 00:35:55.067 [2024-10-13 01:46:40.477925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.067 [2024-10-13 01:46:40.477950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.067 qpair failed and we were unable to recover it. 
00:35:55.067 [2024-10-13 01:46:40.478033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.067 [2024-10-13 01:46:40.478058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.067 qpair failed and we were unable to recover it. 00:35:55.067 [2024-10-13 01:46:40.478144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.067 [2024-10-13 01:46:40.478171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.067 qpair failed and we were unable to recover it. 00:35:55.067 [2024-10-13 01:46:40.478260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.067 [2024-10-13 01:46:40.478285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.067 qpair failed and we were unable to recover it. 00:35:55.067 [2024-10-13 01:46:40.478408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.067 [2024-10-13 01:46:40.478436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.067 qpair failed and we were unable to recover it. 00:35:55.067 [2024-10-13 01:46:40.478544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.067 [2024-10-13 01:46:40.478576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.067 qpair failed and we were unable to recover it. 00:35:55.067 [2024-10-13 01:46:40.478668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.067 [2024-10-13 01:46:40.478696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.067 qpair failed and we were unable to recover it. 00:35:55.067 [2024-10-13 01:46:40.478790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.067 [2024-10-13 01:46:40.478817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.067 qpair failed and we were unable to recover it. 00:35:55.067 [2024-10-13 01:46:40.478899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.067 [2024-10-13 01:46:40.478925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.067 qpair failed and we were unable to recover it. 00:35:55.067 [2024-10-13 01:46:40.479017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.067 [2024-10-13 01:46:40.479043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.067 qpair failed and we were unable to recover it. 00:35:55.067 [2024-10-13 01:46:40.479128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.068 [2024-10-13 01:46:40.479156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.068 qpair failed and we were unable to recover it. 
00:35:55.068 [2024-10-13 01:46:40.479236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.068 [2024-10-13 01:46:40.479262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.068 qpair failed and we were unable to recover it. 00:35:55.068 [2024-10-13 01:46:40.479348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.068 [2024-10-13 01:46:40.479378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.068 qpair failed and we were unable to recover it. 00:35:55.068 [2024-10-13 01:46:40.479477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.068 [2024-10-13 01:46:40.479504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.068 qpair failed and we were unable to recover it. 00:35:55.068 [2024-10-13 01:46:40.479591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.068 [2024-10-13 01:46:40.479617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.068 qpair failed and we were unable to recover it. 00:35:55.068 [2024-10-13 01:46:40.479698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.068 [2024-10-13 01:46:40.479724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.068 qpair failed and we were unable to recover it. 00:35:55.068 [2024-10-13 01:46:40.479806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.068 [2024-10-13 01:46:40.479833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.068 qpair failed and we were unable to recover it. 00:35:55.068 [2024-10-13 01:46:40.479922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.068 [2024-10-13 01:46:40.479949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.068 qpair failed and we were unable to recover it. 00:35:55.068 [2024-10-13 01:46:40.480036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.068 [2024-10-13 01:46:40.480065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.068 qpair failed and we were unable to recover it. 00:35:55.068 [2024-10-13 01:46:40.480153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.068 [2024-10-13 01:46:40.480180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.068 qpair failed and we were unable to recover it. 00:35:55.068 [2024-10-13 01:46:40.480274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.068 [2024-10-13 01:46:40.480303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.068 qpair failed and we were unable to recover it. 
00:35:55.068 [2024-10-13 01:46:40.480430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.068 [2024-10-13 01:46:40.480467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.068 qpair failed and we were unable to recover it. 00:35:55.068 [2024-10-13 01:46:40.480564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.068 [2024-10-13 01:46:40.480590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.068 qpair failed and we were unable to recover it. 00:35:55.068 [2024-10-13 01:46:40.480668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.068 [2024-10-13 01:46:40.480694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.068 qpair failed and we were unable to recover it. 00:35:55.068 [2024-10-13 01:46:40.480803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.068 [2024-10-13 01:46:40.480842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.068 qpair failed and we were unable to recover it. 00:35:55.068 [2024-10-13 01:46:40.480938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.068 [2024-10-13 01:46:40.480965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.068 qpair failed and we were unable to recover it. 00:35:55.068 [2024-10-13 01:46:40.481057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.068 [2024-10-13 01:46:40.481087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.068 qpair failed and we were unable to recover it. 00:35:55.068 [2024-10-13 01:46:40.481174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.068 [2024-10-13 01:46:40.481200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.068 qpair failed and we were unable to recover it. 00:35:55.068 [2024-10-13 01:46:40.481283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.068 [2024-10-13 01:46:40.481309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.068 qpair failed and we were unable to recover it. 00:35:55.068 [2024-10-13 01:46:40.481386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.068 [2024-10-13 01:46:40.481412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.068 qpair failed and we were unable to recover it. 00:35:55.068 [2024-10-13 01:46:40.481521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.068 [2024-10-13 01:46:40.481548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.068 qpair failed and we were unable to recover it. 
00:35:55.068 [2024-10-13 01:46:40.481627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.068 [2024-10-13 01:46:40.481653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.068 qpair failed and we were unable to recover it. 00:35:55.068 [2024-10-13 01:46:40.481738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.068 [2024-10-13 01:46:40.481776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.068 qpair failed and we were unable to recover it. 00:35:55.068 [2024-10-13 01:46:40.481859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.068 [2024-10-13 01:46:40.481885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.068 qpair failed and we were unable to recover it. 00:35:55.068 [2024-10-13 01:46:40.481976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.068 [2024-10-13 01:46:40.482004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.068 qpair failed and we were unable to recover it. 00:35:55.068 [2024-10-13 01:46:40.482087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.068 [2024-10-13 01:46:40.482113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.068 qpair failed and we were unable to recover it. 00:35:55.068 [2024-10-13 01:46:40.482227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.068 [2024-10-13 01:46:40.482254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.068 qpair failed and we were unable to recover it. 00:35:55.068 [2024-10-13 01:46:40.482336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.068 [2024-10-13 01:46:40.482362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.068 qpair failed and we were unable to recover it. 00:35:55.068 [2024-10-13 01:46:40.482448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.068 [2024-10-13 01:46:40.482484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.068 qpair failed and we were unable to recover it. 00:35:55.068 [2024-10-13 01:46:40.482575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.068 [2024-10-13 01:46:40.482607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.068 qpair failed and we were unable to recover it. 00:35:55.068 [2024-10-13 01:46:40.482714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.068 [2024-10-13 01:46:40.482741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.068 qpair failed and we were unable to recover it. 
00:35:55.068 [2024-10-13 01:46:40.482846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.068 [2024-10-13 01:46:40.482875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.068 qpair failed and we were unable to recover it. 00:35:55.068 [2024-10-13 01:46:40.482955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.068 [2024-10-13 01:46:40.482981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.068 qpair failed and we were unable to recover it. 00:35:55.068 [2024-10-13 01:46:40.483067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.068 [2024-10-13 01:46:40.483094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.068 qpair failed and we were unable to recover it. 00:35:55.068 [2024-10-13 01:46:40.483184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.068 [2024-10-13 01:46:40.483210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.068 qpair failed and we were unable to recover it. 00:35:55.068 [2024-10-13 01:46:40.483304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.068 [2024-10-13 01:46:40.483330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.068 qpair failed and we were unable to recover it. 00:35:55.068 [2024-10-13 01:46:40.483413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.068 [2024-10-13 01:46:40.483442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.068 qpair failed and we were unable to recover it. 00:35:55.068 [2024-10-13 01:46:40.483544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.068 [2024-10-13 01:46:40.483583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.068 qpair failed and we were unable to recover it. 00:35:55.068 [2024-10-13 01:46:40.483701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.069 [2024-10-13 01:46:40.483729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.069 qpair failed and we were unable to recover it. 00:35:55.069 [2024-10-13 01:46:40.483848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.069 [2024-10-13 01:46:40.483874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.069 qpair failed and we were unable to recover it. 00:35:55.069 [2024-10-13 01:46:40.483962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.069 [2024-10-13 01:46:40.483987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.069 qpair failed and we were unable to recover it. 
00:35:55.069 [2024-10-13 01:46:40.484060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.069 [2024-10-13 01:46:40.484086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.069 qpair failed and we were unable to recover it. 00:35:55.069 [2024-10-13 01:46:40.484165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.069 [2024-10-13 01:46:40.484192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.069 qpair failed and we were unable to recover it. 00:35:55.069 [2024-10-13 01:46:40.484315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.069 [2024-10-13 01:46:40.484342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.069 qpair failed and we were unable to recover it. 00:35:55.069 [2024-10-13 01:46:40.484437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.069 [2024-10-13 01:46:40.484477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.069 qpair failed and we were unable to recover it. 00:35:55.069 [2024-10-13 01:46:40.484561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.069 [2024-10-13 01:46:40.484587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.069 qpair failed and we were unable to recover it. 00:35:55.069 [2024-10-13 01:46:40.484698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.069 [2024-10-13 01:46:40.484724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.069 qpair failed and we were unable to recover it. 00:35:55.069 [2024-10-13 01:46:40.484820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.069 [2024-10-13 01:46:40.484858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.069 qpair failed and we were unable to recover it. 00:35:55.069 [2024-10-13 01:46:40.484977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.069 [2024-10-13 01:46:40.485003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.069 qpair failed and we were unable to recover it. 00:35:55.069 [2024-10-13 01:46:40.485096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.069 [2024-10-13 01:46:40.485124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.069 qpair failed and we were unable to recover it. 00:35:55.069 [2024-10-13 01:46:40.485217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.069 [2024-10-13 01:46:40.485243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.069 qpair failed and we were unable to recover it. 
00:35:55.069 [2024-10-13 01:46:40.485326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.069 [2024-10-13 01:46:40.485354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.069 qpair failed and we were unable to recover it. 00:35:55.069 [2024-10-13 01:46:40.485443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.069 [2024-10-13 01:46:40.485481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.069 qpair failed and we were unable to recover it. 00:35:55.069 [2024-10-13 01:46:40.485574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.069 [2024-10-13 01:46:40.485601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.069 qpair failed and we were unable to recover it. 00:35:55.069 [2024-10-13 01:46:40.485683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.069 [2024-10-13 01:46:40.485709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.069 qpair failed and we were unable to recover it. 00:35:55.069 [2024-10-13 01:46:40.485801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.069 [2024-10-13 01:46:40.485826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.069 qpair failed and we were unable to recover it. 00:35:55.069 [2024-10-13 01:46:40.485940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.069 [2024-10-13 01:46:40.485970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.069 qpair failed and we were unable to recover it. 00:35:55.069 [2024-10-13 01:46:40.486062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.069 [2024-10-13 01:46:40.486088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.069 qpair failed and we were unable to recover it. 00:35:55.069 [2024-10-13 01:46:40.486174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.069 [2024-10-13 01:46:40.486201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.069 qpair failed and we were unable to recover it. 00:35:55.069 [2024-10-13 01:46:40.486283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.069 [2024-10-13 01:46:40.486309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.069 qpair failed and we were unable to recover it. 00:35:55.069 [2024-10-13 01:46:40.486397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.069 [2024-10-13 01:46:40.486424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.069 qpair failed and we were unable to recover it. 
00:35:55.069 [2024-10-13 01:46:40.486518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.069 [2024-10-13 01:46:40.486546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.069 qpair failed and we were unable to recover it. 00:35:55.069 Malloc0 00:35:55.069 [2024-10-13 01:46:40.486637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.069 [2024-10-13 01:46:40.486664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.069 qpair failed and we were unable to recover it. 00:35:55.069 [2024-10-13 01:46:40.486742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.069 [2024-10-13 01:46:40.486772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.069 qpair failed and we were unable to recover it. 00:35:55.069 [2024-10-13 01:46:40.486856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.069 [2024-10-13 01:46:40.486883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.069 qpair failed and we were unable to recover it. 00:35:55.069 [2024-10-13 01:46:40.486998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.069 [2024-10-13 01:46:40.487026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.069 qpair failed and we were unable to recover it. 00:35:55.069 01:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.069 [2024-10-13 01:46:40.487121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.069 [2024-10-13 01:46:40.487160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.069 qpair failed and we were unable to recover it. 00:35:55.069 01:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:35:55.069 [2024-10-13 01:46:40.487247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.069 [2024-10-13 01:46:40.487274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.069 qpair failed and we were unable to recover it. 00:35:55.069 01:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.069 [2024-10-13 01:46:40.487355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.069 [2024-10-13 01:46:40.487386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.069 qpair failed and we were unable to recover it. 
00:35:55.069 01:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:55.069 [2024-10-13 01:46:40.487503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.069 [2024-10-13 01:46:40.487530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.069 qpair failed and we were unable to recover it. 00:35:55.069 [2024-10-13 01:46:40.487621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.069 [2024-10-13 01:46:40.487647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.070 qpair failed and we were unable to recover it. 00:35:55.070 [2024-10-13 01:46:40.487729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.070 [2024-10-13 01:46:40.487757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.070 qpair failed and we were unable to recover it. 00:35:55.070 [2024-10-13 01:46:40.487863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.070 [2024-10-13 01:46:40.487890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.070 qpair failed and we were unable to recover it. 00:35:55.070 [2024-10-13 01:46:40.487976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.070 [2024-10-13 01:46:40.488003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.070 qpair failed and we were unable to recover it. 00:35:55.070 [2024-10-13 01:46:40.488097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.070 [2024-10-13 01:46:40.488123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.070 qpair failed and we were unable to recover it. 00:35:55.070 [2024-10-13 01:46:40.488218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.070 [2024-10-13 01:46:40.488245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.070 qpair failed and we were unable to recover it. 00:35:55.070 [2024-10-13 01:46:40.488325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.070 [2024-10-13 01:46:40.488351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.070 qpair failed and we were unable to recover it. 00:35:55.070 [2024-10-13 01:46:40.488445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.070 [2024-10-13 01:46:40.488487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.070 qpair failed and we were unable to recover it. 
00:35:55.070 [2024-10-13 01:46:40.488577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.070 [2024-10-13 01:46:40.488603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.070 qpair failed and we were unable to recover it. 00:35:55.070 [2024-10-13 01:46:40.488689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.070 [2024-10-13 01:46:40.488716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.070 qpair failed and we were unable to recover it. 00:35:55.070 [2024-10-13 01:46:40.488805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.070 [2024-10-13 01:46:40.488832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.070 qpair failed and we were unable to recover it. 00:35:55.070 [2024-10-13 01:46:40.488921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.070 [2024-10-13 01:46:40.488953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.070 qpair failed and we were unable to recover it. 00:35:55.070 [2024-10-13 01:46:40.489034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.070 [2024-10-13 01:46:40.489060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.070 qpair failed and we were unable to recover it. 00:35:55.070 [2024-10-13 01:46:40.489153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.070 [2024-10-13 01:46:40.489180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.070 qpair failed and we were unable to recover it. 00:35:55.070 [2024-10-13 01:46:40.489270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.070 [2024-10-13 01:46:40.489296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.070 qpair failed and we were unable to recover it. 00:35:55.070 [2024-10-13 01:46:40.489377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.070 [2024-10-13 01:46:40.489403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.070 qpair failed and we were unable to recover it. 00:35:55.070 [2024-10-13 01:46:40.489516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.070 [2024-10-13 01:46:40.489543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.070 qpair failed and we were unable to recover it. 00:35:55.070 [2024-10-13 01:46:40.489629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.070 [2024-10-13 01:46:40.489655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.070 qpair failed and we were unable to recover it. 
00:35:55.070 [2024-10-13 01:46:40.489753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.070 [2024-10-13 01:46:40.489792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.070 qpair failed and we were unable to recover it. 00:35:55.070 [2024-10-13 01:46:40.489891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.070 [2024-10-13 01:46:40.489920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.070 qpair failed and we were unable to recover it. 00:35:55.070 [2024-10-13 01:46:40.490016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.070 [2024-10-13 01:46:40.490043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.070 qpair failed and we were unable to recover it. 00:35:55.070 [2024-10-13 01:46:40.490124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.070 [2024-10-13 01:46:40.490151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.070 qpair failed and we were unable to recover it. 00:35:55.070 [2024-10-13 01:46:40.490232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.070 [2024-10-13 01:46:40.490257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.070 qpair failed and we were unable to recover it. 00:35:55.070 [2024-10-13 01:46:40.490335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.070 [2024-10-13 01:46:40.490363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.070 [2024-10-13 01:46:40.490353] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:55.070 qpair failed and we were unable to recover it. 00:35:55.070 [2024-10-13 01:46:40.490456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.070 [2024-10-13 01:46:40.490582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.070 qpair failed and we were unable to recover it. 00:35:55.070 [2024-10-13 01:46:40.490678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.070 [2024-10-13 01:46:40.490706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.070 qpair failed and we were unable to recover it. 00:35:55.070 [2024-10-13 01:46:40.490803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.070 [2024-10-13 01:46:40.490829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.070 qpair failed and we were unable to recover it. 
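The xtrace lines interleaved above show the target side of the test coming back: rpc_cmd nvmf_create_transport -t tcp -o is issued and the nvmf target answers with the "*** TCP Transport Init ***" notice, after which a Malloc0 bdev and a listener on 10.0.0.2:4420 are normally set up so the failing qpairs can finally reconnect. The exact arguments live in host/target_disconnect.sh and are not reproduced here; the following is only a sketch of the usual SPDK RPC sequence for bringing such a TCP target up (the subsystem NQN and bdev sizes are assumptions; only Malloc0 and the transport command appear in this log):

  # Illustrative sketch of a typical SPDK TCP target bring-up, not a copy of the
  # autotest script; NQN and malloc sizes below are assumptions.
  scripts/rpc.py nvmf_create_transport -t tcp -o            # target prints "TCP Transport Init"
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MiB ram-backed bdev, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420

Once a listener is registered on port 4420, the connect() retries above stop returning ECONNREFUSED and the initiator's qpairs can recover.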
00:35:55.070 [2024-10-13 01:46:40.490946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.070 [2024-10-13 01:46:40.490973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.070 qpair failed and we were unable to recover it. 00:35:55.070 [2024-10-13 01:46:40.491057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.070 [2024-10-13 01:46:40.491083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.070 qpair failed and we were unable to recover it. 00:35:55.070 [2024-10-13 01:46:40.491169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.070 [2024-10-13 01:46:40.491196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.070 qpair failed and we were unable to recover it. 00:35:55.070 [2024-10-13 01:46:40.491291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.070 [2024-10-13 01:46:40.491319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.070 qpair failed and we were unable to recover it. 00:35:55.070 [2024-10-13 01:46:40.491410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.070 [2024-10-13 01:46:40.491437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.070 qpair failed and we were unable to recover it. 00:35:55.070 [2024-10-13 01:46:40.491529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.070 [2024-10-13 01:46:40.491558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.070 qpair failed and we were unable to recover it. 00:35:55.070 [2024-10-13 01:46:40.491643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.070 [2024-10-13 01:46:40.491669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.070 qpair failed and we were unable to recover it. 00:35:55.070 [2024-10-13 01:46:40.491750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.070 [2024-10-13 01:46:40.491784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.070 qpair failed and we were unable to recover it. 00:35:55.070 [2024-10-13 01:46:40.491885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.070 [2024-10-13 01:46:40.491913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.070 qpair failed and we were unable to recover it. 00:35:55.070 [2024-10-13 01:46:40.491993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.070 [2024-10-13 01:46:40.492018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.070 qpair failed and we were unable to recover it. 
00:35:55.070 [2024-10-13 01:46:40.492102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.070 [2024-10-13 01:46:40.492128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.070 qpair failed and we were unable to recover it. 00:35:55.070 [2024-10-13 01:46:40.492245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.070 [2024-10-13 01:46:40.492271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.070 qpair failed and we were unable to recover it. 00:35:55.070 [2024-10-13 01:46:40.492351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.070 [2024-10-13 01:46:40.492376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.070 qpair failed and we were unable to recover it. 00:35:55.070 [2024-10-13 01:46:40.492464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.070 [2024-10-13 01:46:40.492496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.070 qpair failed and we were unable to recover it. 00:35:55.070 [2024-10-13 01:46:40.492582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.070 [2024-10-13 01:46:40.492607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.070 qpair failed and we were unable to recover it. 00:35:55.070 [2024-10-13 01:46:40.492687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.070 [2024-10-13 01:46:40.492712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.070 qpair failed and we were unable to recover it. 00:35:55.071 [2024-10-13 01:46:40.492808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.071 [2024-10-13 01:46:40.492835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.071 qpair failed and we were unable to recover it. 00:35:55.071 [2024-10-13 01:46:40.492931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.071 [2024-10-13 01:46:40.492958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.071 qpair failed and we were unable to recover it. 00:35:55.071 [2024-10-13 01:46:40.493048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.071 [2024-10-13 01:46:40.493075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.071 qpair failed and we were unable to recover it. 00:35:55.071 [2024-10-13 01:46:40.493164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.071 [2024-10-13 01:46:40.493191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.071 qpair failed and we were unable to recover it. 
00:35:55.071 [2024-10-13 01:46:40.493272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.071 [2024-10-13 01:46:40.493298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.071 qpair failed and we were unable to recover it. 00:35:55.071 [2024-10-13 01:46:40.493383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.071 [2024-10-13 01:46:40.493409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.071 qpair failed and we were unable to recover it. 00:35:55.071 [2024-10-13 01:46:40.493489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.071 [2024-10-13 01:46:40.493516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.071 qpair failed and we were unable to recover it. 00:35:55.071 [2024-10-13 01:46:40.493597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.071 [2024-10-13 01:46:40.493623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.071 qpair failed and we were unable to recover it. 00:35:55.071 [2024-10-13 01:46:40.493711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.071 [2024-10-13 01:46:40.493738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.071 qpair failed and we were unable to recover it. 00:35:55.071 [2024-10-13 01:46:40.493849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.071 [2024-10-13 01:46:40.493876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.071 qpair failed and we were unable to recover it. 00:35:55.071 [2024-10-13 01:46:40.493956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.071 [2024-10-13 01:46:40.493981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.071 qpair failed and we were unable to recover it. 00:35:55.071 [2024-10-13 01:46:40.494071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.071 [2024-10-13 01:46:40.494097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.071 qpair failed and we were unable to recover it. 00:35:55.071 [2024-10-13 01:46:40.494174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.071 [2024-10-13 01:46:40.494203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.071 qpair failed and we were unable to recover it. 00:35:55.071 [2024-10-13 01:46:40.494301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.071 [2024-10-13 01:46:40.494329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.071 qpair failed and we were unable to recover it. 
00:35:55.071 [2024-10-13 01:46:40.494416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.071 [2024-10-13 01:46:40.494441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.071 qpair failed and we were unable to recover it. 00:35:55.071 [2024-10-13 01:46:40.494537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.071 [2024-10-13 01:46:40.494564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.071 qpair failed and we were unable to recover it. 00:35:55.071 [2024-10-13 01:46:40.494647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.071 [2024-10-13 01:46:40.494673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.071 qpair failed and we were unable to recover it. 00:35:55.071 [2024-10-13 01:46:40.494784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.071 [2024-10-13 01:46:40.494810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.071 qpair failed and we were unable to recover it. 00:35:55.071 [2024-10-13 01:46:40.494891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.071 [2024-10-13 01:46:40.494916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.071 qpair failed and we were unable to recover it. 00:35:55.071 [2024-10-13 01:46:40.495010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.071 [2024-10-13 01:46:40.495039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.071 qpair failed and we were unable to recover it. 00:35:55.071 [2024-10-13 01:46:40.495133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.071 [2024-10-13 01:46:40.495160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.071 qpair failed and we were unable to recover it. 00:35:55.071 [2024-10-13 01:46:40.495249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.071 [2024-10-13 01:46:40.495276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.071 qpair failed and we were unable to recover it. 00:35:55.071 [2024-10-13 01:46:40.495369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.071 [2024-10-13 01:46:40.495395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.071 qpair failed and we were unable to recover it. 00:35:55.071 [2024-10-13 01:46:40.495487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.071 [2024-10-13 01:46:40.495516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.071 qpair failed and we were unable to recover it. 
00:35:55.071 [2024-10-13 01:46:40.495600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.071 [2024-10-13 01:46:40.495627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.071 qpair failed and we were unable to recover it. 00:35:55.071 [2024-10-13 01:46:40.495742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.071 [2024-10-13 01:46:40.495769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.071 qpair failed and we were unable to recover it. 00:35:55.071 [2024-10-13 01:46:40.495860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.071 [2024-10-13 01:46:40.495885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.071 qpair failed and we were unable to recover it. 00:35:55.071 [2024-10-13 01:46:40.495972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.071 [2024-10-13 01:46:40.495999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.071 qpair failed and we were unable to recover it. 00:35:55.071 [2024-10-13 01:46:40.496080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.071 [2024-10-13 01:46:40.496105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.071 qpair failed and we were unable to recover it. 00:35:55.071 [2024-10-13 01:46:40.496199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.071 [2024-10-13 01:46:40.496228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.071 qpair failed and we were unable to recover it. 00:35:55.071 [2024-10-13 01:46:40.496349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.071 [2024-10-13 01:46:40.496375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.071 qpair failed and we were unable to recover it. 00:35:55.071 [2024-10-13 01:46:40.496466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.071 [2024-10-13 01:46:40.496499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.071 qpair failed and we were unable to recover it. 00:35:55.071 [2024-10-13 01:46:40.496588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.071 [2024-10-13 01:46:40.496614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.071 qpair failed and we were unable to recover it. 00:35:55.071 [2024-10-13 01:46:40.496696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.071 [2024-10-13 01:46:40.496722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.071 qpair failed and we were unable to recover it. 
00:35:55.071 [2024-10-13 01:46:40.496810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.071 [2024-10-13 01:46:40.496839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.071 qpair failed and we were unable to recover it. 00:35:55.071 [2024-10-13 01:46:40.496925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.071 [2024-10-13 01:46:40.496952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.071 qpair failed and we were unable to recover it. 00:35:55.071 [2024-10-13 01:46:40.497033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.071 [2024-10-13 01:46:40.497058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.071 qpair failed and we were unable to recover it. 00:35:55.071 [2024-10-13 01:46:40.497135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.071 [2024-10-13 01:46:40.497161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.071 qpair failed and we were unable to recover it. 00:35:55.071 [2024-10-13 01:46:40.497270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.071 [2024-10-13 01:46:40.497296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.071 qpair failed and we were unable to recover it. 00:35:55.071 [2024-10-13 01:46:40.497377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.071 [2024-10-13 01:46:40.497402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.072 qpair failed and we were unable to recover it. 00:35:55.072 [2024-10-13 01:46:40.497491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.072 [2024-10-13 01:46:40.497519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.072 qpair failed and we were unable to recover it. 00:35:55.072 [2024-10-13 01:46:40.497607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.072 [2024-10-13 01:46:40.497634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.072 qpair failed and we were unable to recover it. 00:35:55.072 [2024-10-13 01:46:40.497716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.072 [2024-10-13 01:46:40.497742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.072 qpair failed and we were unable to recover it. 00:35:55.072 [2024-10-13 01:46:40.497832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.072 [2024-10-13 01:46:40.497859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.072 qpair failed and we were unable to recover it. 
00:35:55.072 [2024-10-13 01:46:40.497946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.072 [2024-10-13 01:46:40.497975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.072 qpair failed and we were unable to recover it. 00:35:55.072 [2024-10-13 01:46:40.498063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.072 [2024-10-13 01:46:40.498090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.072 qpair failed and we were unable to recover it. 00:35:55.072 [2024-10-13 01:46:40.498177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.072 [2024-10-13 01:46:40.498204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.072 qpair failed and we were unable to recover it. 00:35:55.072 [2024-10-13 01:46:40.498284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.072 [2024-10-13 01:46:40.498310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.072 qpair failed and we were unable to recover it. 00:35:55.072 [2024-10-13 01:46:40.498398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.072 [2024-10-13 01:46:40.498429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.072 qpair failed and we were unable to recover it. 00:35:55.072 [2024-10-13 01:46:40.498519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.072 [2024-10-13 01:46:40.498546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.072 qpair failed and we were unable to recover it. 00:35:55.072 01:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.072 [2024-10-13 01:46:40.498628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.072 [2024-10-13 01:46:40.498654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.072 qpair failed and we were unable to recover it. 00:35:55.072 01:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:55.072 [2024-10-13 01:46:40.498774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.072 [2024-10-13 01:46:40.498801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.072 qpair failed and we were unable to recover it. 
00:35:55.072 01:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.072 [2024-10-13 01:46:40.498883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.072 [2024-10-13 01:46:40.498908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.072 qpair failed and we were unable to recover it. 00:35:55.072 01:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:55.072 [2024-10-13 01:46:40.499033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.072 [2024-10-13 01:46:40.499074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.072 qpair failed and we were unable to recover it. 00:35:55.072 [2024-10-13 01:46:40.499166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.072 [2024-10-13 01:46:40.499194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.072 qpair failed and we were unable to recover it. 00:35:55.072 [2024-10-13 01:46:40.499291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.072 [2024-10-13 01:46:40.499319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.072 qpair failed and we were unable to recover it. 00:35:55.072 [2024-10-13 01:46:40.499401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.072 [2024-10-13 01:46:40.499428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.072 qpair failed and we were unable to recover it. 00:35:55.072 [2024-10-13 01:46:40.499529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.072 [2024-10-13 01:46:40.499555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.072 qpair failed and we were unable to recover it. 00:35:55.072 [2024-10-13 01:46:40.499638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.072 [2024-10-13 01:46:40.499666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.072 qpair failed and we were unable to recover it. 00:35:55.072 [2024-10-13 01:46:40.499751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.072 [2024-10-13 01:46:40.499776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.072 qpair failed and we were unable to recover it. 00:35:55.072 [2024-10-13 01:46:40.499859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.072 [2024-10-13 01:46:40.499885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.072 qpair failed and we were unable to recover it. 
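The repeated posix_sock_create failures above are host-side TCP connects being refused: errno 111 is ECONNREFUSED on Linux, which is the expected outcome while the target_disconnect test has torn down (or not yet re-created) the listener on 10.0.0.2:4420. As a hypothetical manual cross-check only (not part of this test run), one could confirm from the target host whether anything is listening on that port:

    # hypothetical manual check, not performed by the test:
    # is anything currently listening on the NVMe/TCP port?
    ss -ltn 'sport = :4420'
    # optionally attempt a discovery connection with nvme-cli;
    # while the listener is down this should also fail with "Connection refused"
    nvme discover -t tcp -a 10.0.0.2 -s 4420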
00:35:55.072 [2024-10-13 01:46:40.499964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.072 [2024-10-13 01:46:40.499993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.072 qpair failed and we were unable to recover it. 00:35:55.072 [2024-10-13 01:46:40.500082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.072 [2024-10-13 01:46:40.500109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.072 qpair failed and we were unable to recover it. 00:35:55.072 [2024-10-13 01:46:40.500194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.072 [2024-10-13 01:46:40.500222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.072 qpair failed and we were unable to recover it. 00:35:55.072 [2024-10-13 01:46:40.500303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.072 [2024-10-13 01:46:40.500329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.072 qpair failed and we were unable to recover it. 00:35:55.072 [2024-10-13 01:46:40.500418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.072 [2024-10-13 01:46:40.500443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.072 qpair failed and we were unable to recover it. 00:35:55.072 [2024-10-13 01:46:40.500552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.072 [2024-10-13 01:46:40.500578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.072 qpair failed and we were unable to recover it. 00:35:55.072 [2024-10-13 01:46:40.500664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.072 [2024-10-13 01:46:40.500690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.072 qpair failed and we were unable to recover it. 00:35:55.072 [2024-10-13 01:46:40.500778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.072 [2024-10-13 01:46:40.500806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.072 qpair failed and we were unable to recover it. 00:35:55.072 [2024-10-13 01:46:40.500891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.072 [2024-10-13 01:46:40.500917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.072 qpair failed and we were unable to recover it. 00:35:55.072 [2024-10-13 01:46:40.501036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.072 [2024-10-13 01:46:40.501064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.072 qpair failed and we were unable to recover it. 
00:35:55.072 [2024-10-13 01:46:40.501146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.072 [2024-10-13 01:46:40.501171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.072 qpair failed and we were unable to recover it. 00:35:55.072 [2024-10-13 01:46:40.501255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.072 [2024-10-13 01:46:40.501281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.072 qpair failed and we were unable to recover it. 00:35:55.072 [2024-10-13 01:46:40.501366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.072 [2024-10-13 01:46:40.501397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.072 qpair failed and we were unable to recover it. 00:35:55.072 [2024-10-13 01:46:40.501512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.072 [2024-10-13 01:46:40.501538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.072 qpair failed and we were unable to recover it. 00:35:55.072 [2024-10-13 01:46:40.501627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.072 [2024-10-13 01:46:40.501653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.072 qpair failed and we were unable to recover it. 00:35:55.072 [2024-10-13 01:46:40.501735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.073 [2024-10-13 01:46:40.501761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.073 qpair failed and we were unable to recover it. 00:35:55.073 [2024-10-13 01:46:40.501842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.073 [2024-10-13 01:46:40.501867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.073 qpair failed and we were unable to recover it. 00:35:55.073 [2024-10-13 01:46:40.501952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.073 [2024-10-13 01:46:40.501977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.073 qpair failed and we were unable to recover it. 00:35:55.073 [2024-10-13 01:46:40.502054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.073 [2024-10-13 01:46:40.502079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.073 qpair failed and we were unable to recover it. 00:35:55.073 [2024-10-13 01:46:40.502160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.073 [2024-10-13 01:46:40.502187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.073 qpair failed and we were unable to recover it. 
00:35:55.073 [2024-10-13 01:46:40.502272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.073 [2024-10-13 01:46:40.502297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.073 qpair failed and we were unable to recover it. 00:35:55.073 [2024-10-13 01:46:40.502386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.073 [2024-10-13 01:46:40.502412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.073 qpair failed and we were unable to recover it. 00:35:55.073 [2024-10-13 01:46:40.502515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.073 [2024-10-13 01:46:40.502544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.073 qpair failed and we were unable to recover it. 00:35:55.073 [2024-10-13 01:46:40.502629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.073 [2024-10-13 01:46:40.502656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.073 qpair failed and we were unable to recover it. 00:35:55.073 [2024-10-13 01:46:40.502739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.073 [2024-10-13 01:46:40.502765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.073 qpair failed and we were unable to recover it. 00:35:55.073 [2024-10-13 01:46:40.502849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.073 [2024-10-13 01:46:40.502875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.073 qpair failed and we were unable to recover it. 00:35:55.073 [2024-10-13 01:46:40.502967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.073 [2024-10-13 01:46:40.502996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.073 qpair failed and we were unable to recover it. 00:35:55.073 [2024-10-13 01:46:40.503083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.073 [2024-10-13 01:46:40.503108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.073 qpair failed and we were unable to recover it. 00:35:55.073 [2024-10-13 01:46:40.503189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.073 [2024-10-13 01:46:40.503216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.073 qpair failed and we were unable to recover it. 00:35:55.073 [2024-10-13 01:46:40.503308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.073 [2024-10-13 01:46:40.503333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.073 qpair failed and we were unable to recover it. 
00:35:55.073 [2024-10-13 01:46:40.503414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.073 [2024-10-13 01:46:40.503440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.073 qpair failed and we were unable to recover it. 00:35:55.073 [2024-10-13 01:46:40.503533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.073 [2024-10-13 01:46:40.503559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.073 qpair failed and we were unable to recover it. 00:35:55.073 [2024-10-13 01:46:40.503645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.073 [2024-10-13 01:46:40.503672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.073 qpair failed and we were unable to recover it. 00:35:55.073 [2024-10-13 01:46:40.503755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.073 [2024-10-13 01:46:40.503781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.073 qpair failed and we were unable to recover it. 00:35:55.073 [2024-10-13 01:46:40.503862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.073 [2024-10-13 01:46:40.503887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.073 qpair failed and we were unable to recover it. 00:35:55.073 [2024-10-13 01:46:40.503965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.073 [2024-10-13 01:46:40.503991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.073 qpair failed and we were unable to recover it. 00:35:55.073 [2024-10-13 01:46:40.504080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.073 [2024-10-13 01:46:40.504109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.073 qpair failed and we were unable to recover it. 00:35:55.073 [2024-10-13 01:46:40.504196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.073 [2024-10-13 01:46:40.504225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.073 qpair failed and we were unable to recover it. 00:35:55.073 [2024-10-13 01:46:40.504310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.073 [2024-10-13 01:46:40.504339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.073 qpair failed and we were unable to recover it. 00:35:55.073 [2024-10-13 01:46:40.504424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.073 [2024-10-13 01:46:40.504454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.073 qpair failed and we were unable to recover it. 
00:35:55.073 [2024-10-13 01:46:40.504551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.073 [2024-10-13 01:46:40.504578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.073 qpair failed and we were unable to recover it. 00:35:55.073 [2024-10-13 01:46:40.504659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.073 [2024-10-13 01:46:40.504685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.073 qpair failed and we were unable to recover it. 00:35:55.073 [2024-10-13 01:46:40.504763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.073 [2024-10-13 01:46:40.504789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.073 qpair failed and we were unable to recover it. 00:35:55.073 [2024-10-13 01:46:40.504881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.073 [2024-10-13 01:46:40.504906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.073 qpair failed and we were unable to recover it. 00:35:55.073 [2024-10-13 01:46:40.504992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.073 [2024-10-13 01:46:40.505018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.073 qpair failed and we were unable to recover it. 00:35:55.073 [2024-10-13 01:46:40.505109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.073 [2024-10-13 01:46:40.505136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.073 qpair failed and we were unable to recover it. 00:35:55.073 [2024-10-13 01:46:40.505220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.073 [2024-10-13 01:46:40.505249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.073 qpair failed and we were unable to recover it. 00:35:55.073 [2024-10-13 01:46:40.505332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.073 [2024-10-13 01:46:40.505358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.073 qpair failed and we were unable to recover it. 00:35:55.073 [2024-10-13 01:46:40.505446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.073 [2024-10-13 01:46:40.505481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.073 qpair failed and we were unable to recover it. 00:35:55.073 [2024-10-13 01:46:40.505566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.073 [2024-10-13 01:46:40.505592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.073 qpair failed and we were unable to recover it. 
00:35:55.073 [2024-10-13 01:46:40.505669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.073 [2024-10-13 01:46:40.505695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.073 qpair failed and we were unable to recover it. 00:35:55.073 [2024-10-13 01:46:40.505785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.073 [2024-10-13 01:46:40.505815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.073 qpair failed and we were unable to recover it. 00:35:55.073 [2024-10-13 01:46:40.505902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.073 [2024-10-13 01:46:40.505929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.074 qpair failed and we were unable to recover it. 00:35:55.074 [2024-10-13 01:46:40.506018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.074 [2024-10-13 01:46:40.506044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.074 qpair failed and we were unable to recover it. 00:35:55.074 [2024-10-13 01:46:40.506133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.074 [2024-10-13 01:46:40.506159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.074 qpair failed and we were unable to recover it. 00:35:55.074 [2024-10-13 01:46:40.506238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.074 [2024-10-13 01:46:40.506263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.074 qpair failed and we were unable to recover it. 00:35:55.074 [2024-10-13 01:46:40.506342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.074 [2024-10-13 01:46:40.506367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.074 qpair failed and we were unable to recover it. 00:35:55.074 [2024-10-13 01:46:40.506447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.074 [2024-10-13 01:46:40.506486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.074 qpair failed and we were unable to recover it. 00:35:55.074 01:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.074 [2024-10-13 01:46:40.506601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.074 [2024-10-13 01:46:40.506627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.074 qpair failed and we were unable to recover it. 
00:35:55.074 [2024-10-13 01:46:40.506715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.074 01:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:55.074 [2024-10-13 01:46:40.506740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.074 qpair failed and we were unable to recover it. 00:35:55.074 [2024-10-13 01:46:40.506829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.074 01:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.074 [2024-10-13 01:46:40.506854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.074 qpair failed and we were unable to recover it. 00:35:55.074 [2024-10-13 01:46:40.506981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.074 01:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:55.074 [2024-10-13 01:46:40.507009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.074 qpair failed and we were unable to recover it. 00:35:55.074 [2024-10-13 01:46:40.507092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.074 [2024-10-13 01:46:40.507118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.074 qpair failed and we were unable to recover it. 00:35:55.074 [2024-10-13 01:46:40.507204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.074 [2024-10-13 01:46:40.507233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.074 qpair failed and we were unable to recover it. 00:35:55.074 [2024-10-13 01:46:40.507311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.074 [2024-10-13 01:46:40.507343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.074 qpair failed and we were unable to recover it. 00:35:55.074 [2024-10-13 01:46:40.507437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.074 [2024-10-13 01:46:40.507464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.074 qpair failed and we were unable to recover it. 00:35:55.074 [2024-10-13 01:46:40.507553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.074 [2024-10-13 01:46:40.507578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.074 qpair failed and we were unable to recover it. 
00:35:55.074 [2024-10-13 01:46:40.507664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.074 [2024-10-13 01:46:40.507691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.074 qpair failed and we were unable to recover it. 00:35:55.074 [2024-10-13 01:46:40.507774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.074 [2024-10-13 01:46:40.507799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.074 qpair failed and we were unable to recover it. 00:35:55.074 [2024-10-13 01:46:40.507880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.074 [2024-10-13 01:46:40.507910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.074 qpair failed and we were unable to recover it. 00:35:55.074 [2024-10-13 01:46:40.507988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.074 [2024-10-13 01:46:40.508015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.074 qpair failed and we were unable to recover it. 00:35:55.074 [2024-10-13 01:46:40.508098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.074 [2024-10-13 01:46:40.508123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.074 qpair failed and we were unable to recover it. 00:35:55.074 [2024-10-13 01:46:40.508208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.074 [2024-10-13 01:46:40.508234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.074 qpair failed and we were unable to recover it. 00:35:55.074 [2024-10-13 01:46:40.508321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.074 [2024-10-13 01:46:40.508346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.074 qpair failed and we were unable to recover it. 00:35:55.074 [2024-10-13 01:46:40.508435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.074 [2024-10-13 01:46:40.508459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.074 qpair failed and we were unable to recover it. 00:35:55.074 [2024-10-13 01:46:40.508548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.074 [2024-10-13 01:46:40.508573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.074 qpair failed and we were unable to recover it. 00:35:55.074 [2024-10-13 01:46:40.508656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.074 [2024-10-13 01:46:40.508682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.074 qpair failed and we were unable to recover it. 
00:35:55.074 [2024-10-13 01:46:40.508765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.074 [2024-10-13 01:46:40.508792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.074 qpair failed and we were unable to recover it. 00:35:55.074 [2024-10-13 01:46:40.508881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.074 [2024-10-13 01:46:40.508908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.074 qpair failed and we were unable to recover it. 00:35:55.074 [2024-10-13 01:46:40.508994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.074 [2024-10-13 01:46:40.509022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.074 qpair failed and we were unable to recover it. 00:35:55.074 [2024-10-13 01:46:40.509134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.074 [2024-10-13 01:46:40.509160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.074 qpair failed and we were unable to recover it. 00:35:55.074 [2024-10-13 01:46:40.509244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.074 [2024-10-13 01:46:40.509272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.074 qpair failed and we were unable to recover it. 00:35:55.074 [2024-10-13 01:46:40.509354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.074 [2024-10-13 01:46:40.509380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.074 qpair failed and we were unable to recover it. 00:35:55.074 [2024-10-13 01:46:40.509461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.074 [2024-10-13 01:46:40.509495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.074 qpair failed and we were unable to recover it. 00:35:55.074 [2024-10-13 01:46:40.509581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.074 [2024-10-13 01:46:40.509607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.074 qpair failed and we were unable to recover it. 00:35:55.074 [2024-10-13 01:46:40.509684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.074 [2024-10-13 01:46:40.509710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.074 qpair failed and we were unable to recover it. 00:35:55.074 [2024-10-13 01:46:40.509821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.075 [2024-10-13 01:46:40.509847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.075 qpair failed and we were unable to recover it. 
00:35:55.075 [2024-10-13 01:46:40.509943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.075 [2024-10-13 01:46:40.509968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.075 qpair failed and we were unable to recover it. 00:35:55.075 [2024-10-13 01:46:40.510053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.075 [2024-10-13 01:46:40.510081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.075 qpair failed and we were unable to recover it. 00:35:55.075 [2024-10-13 01:46:40.510165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.075 [2024-10-13 01:46:40.510190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.075 qpair failed and we were unable to recover it. 00:35:55.075 [2024-10-13 01:46:40.510278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.075 [2024-10-13 01:46:40.510306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.075 qpair failed and we were unable to recover it. 00:35:55.075 [2024-10-13 01:46:40.510398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.075 [2024-10-13 01:46:40.510425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.075 qpair failed and we were unable to recover it. 00:35:55.075 [2024-10-13 01:46:40.510528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.075 [2024-10-13 01:46:40.510555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.075 qpair failed and we were unable to recover it. 00:35:55.075 [2024-10-13 01:46:40.510635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.075 [2024-10-13 01:46:40.510661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.075 qpair failed and we were unable to recover it. 00:35:55.075 [2024-10-13 01:46:40.510743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.075 [2024-10-13 01:46:40.510769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.075 qpair failed and we were unable to recover it. 00:35:55.075 [2024-10-13 01:46:40.510856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.075 [2024-10-13 01:46:40.510882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.075 qpair failed and we were unable to recover it. 00:35:55.075 [2024-10-13 01:46:40.510996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.075 [2024-10-13 01:46:40.511022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.075 qpair failed and we were unable to recover it. 
00:35:55.075 [2024-10-13 01:46:40.511114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.075 [2024-10-13 01:46:40.511140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.075 qpair failed and we were unable to recover it. 00:35:55.075 [2024-10-13 01:46:40.511223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.075 [2024-10-13 01:46:40.511248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.075 qpair failed and we were unable to recover it. 00:35:55.075 [2024-10-13 01:46:40.511325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.075 [2024-10-13 01:46:40.511352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.075 qpair failed and we were unable to recover it. 00:35:55.075 [2024-10-13 01:46:40.511441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.075 [2024-10-13 01:46:40.511466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.075 qpair failed and we were unable to recover it. 00:35:55.075 [2024-10-13 01:46:40.511561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.075 [2024-10-13 01:46:40.511587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.075 qpair failed and we were unable to recover it. 00:35:55.075 [2024-10-13 01:46:40.511680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.075 [2024-10-13 01:46:40.511709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.075 qpair failed and we were unable to recover it. 00:35:55.075 [2024-10-13 01:46:40.511798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.075 [2024-10-13 01:46:40.511824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.075 qpair failed and we were unable to recover it. 00:35:55.075 [2024-10-13 01:46:40.511916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.075 [2024-10-13 01:46:40.511948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.075 qpair failed and we were unable to recover it. 00:35:55.075 [2024-10-13 01:46:40.512062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.075 [2024-10-13 01:46:40.512088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.075 qpair failed and we were unable to recover it. 00:35:55.075 [2024-10-13 01:46:40.512172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.075 [2024-10-13 01:46:40.512200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.075 qpair failed and we were unable to recover it. 
00:35:55.075 [2024-10-13 01:46:40.512295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.075 [2024-10-13 01:46:40.512322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.075 qpair failed and we were unable to recover it. 00:35:55.075 [2024-10-13 01:46:40.512402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.075 [2024-10-13 01:46:40.512428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.075 qpair failed and we were unable to recover it. 00:35:55.075 [2024-10-13 01:46:40.512518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.075 [2024-10-13 01:46:40.512545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.075 qpair failed and we were unable to recover it. 00:35:55.075 [2024-10-13 01:46:40.512627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.075 [2024-10-13 01:46:40.512653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.075 qpair failed and we were unable to recover it. 00:35:55.075 [2024-10-13 01:46:40.512734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.075 [2024-10-13 01:46:40.512759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.075 qpair failed and we were unable to recover it. 00:35:55.075 [2024-10-13 01:46:40.512842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.075 [2024-10-13 01:46:40.512869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.075 qpair failed and we were unable to recover it. 00:35:55.075 [2024-10-13 01:46:40.512956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.075 [2024-10-13 01:46:40.512983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.075 qpair failed and we were unable to recover it. 00:35:55.075 [2024-10-13 01:46:40.513076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.075 [2024-10-13 01:46:40.513103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.075 qpair failed and we were unable to recover it. 00:35:55.075 [2024-10-13 01:46:40.513221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.075 [2024-10-13 01:46:40.513247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.075 qpair failed and we were unable to recover it. 00:35:55.075 [2024-10-13 01:46:40.513330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.075 [2024-10-13 01:46:40.513358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.075 qpair failed and we were unable to recover it. 
00:35:55.075 [2024-10-13 01:46:40.513447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.075 [2024-10-13 01:46:40.513478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.075 qpair failed and we were unable to recover it. 00:35:55.075 [2024-10-13 01:46:40.513580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.075 [2024-10-13 01:46:40.513611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.075 qpair failed and we were unable to recover it. 00:35:55.075 [2024-10-13 01:46:40.513696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.075 [2024-10-13 01:46:40.513722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.075 qpair failed and we were unable to recover it. 00:35:55.075 [2024-10-13 01:46:40.513806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.075 [2024-10-13 01:46:40.513833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.075 qpair failed and we were unable to recover it. 00:35:55.075 [2024-10-13 01:46:40.513916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.075 [2024-10-13 01:46:40.513942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.075 qpair failed and we were unable to recover it. 00:35:55.075 [2024-10-13 01:46:40.514030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.075 [2024-10-13 01:46:40.514055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.075 qpair failed and we were unable to recover it. 00:35:55.075 [2024-10-13 01:46:40.514135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.075 [2024-10-13 01:46:40.514160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.075 qpair failed and we were unable to recover it. 00:35:55.075 [2024-10-13 01:46:40.514246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.075 [2024-10-13 01:46:40.514274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.075 qpair failed and we were unable to recover it. 00:35:55.075 [2024-10-13 01:46:40.514358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.075 [2024-10-13 01:46:40.514383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.075 qpair failed and we were unable to recover it. 00:35:55.075 [2024-10-13 01:46:40.514479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.075 [2024-10-13 01:46:40.514506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.075 qpair failed and we were unable to recover it. 
00:35:55.075 [2024-10-13 01:46:40.514595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.075 01:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:55.075 [2024-10-13 01:46:40.514622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420
00:35:55.075 qpair failed and we were unable to recover it.
00:35:55.075 [2024-10-13 01:46:40.514711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.075 [2024-10-13 01:46:40.514737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420
00:35:55.075 qpair failed and we were unable to recover it.
00:35:55.076 01:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:35:55.076 [2024-10-13 01:46:40.514822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.076 [2024-10-13 01:46:40.514849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420
00:35:55.076 qpair failed and we were unable to recover it.
00:35:55.076 01:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:55.076 [2024-10-13 01:46:40.514943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.076 [2024-10-13 01:46:40.514971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420
00:35:55.076 qpair failed and we were unable to recover it.
00:35:55.076 [2024-10-13 01:46:40.515059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.076 01:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:55.076 [2024-10-13 01:46:40.515086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420
00:35:55.076 qpair failed and we were unable to recover it.
00:35:55.076 [2024-10-13 01:46:40.515169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.076 [2024-10-13 01:46:40.515196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420
00:35:55.076 qpair failed and we were unable to recover it.
00:35:55.076 [2024-10-13 01:46:40.515288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.076 [2024-10-13 01:46:40.515313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420
00:35:55.076 qpair failed and we were unable to recover it.
00:35:55.076 [2024-10-13 01:46:40.515394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.076 [2024-10-13 01:46:40.515420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420
00:35:55.076 qpair failed and we were unable to recover it.
00:35:55.076 [2024-10-13 01:46:40.515581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.076 [2024-10-13 01:46:40.515622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.076 qpair failed and we were unable to recover it. 00:35:55.076 [2024-10-13 01:46:40.515708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.076 [2024-10-13 01:46:40.515736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.076 qpair failed and we were unable to recover it. 00:35:55.076 [2024-10-13 01:46:40.515829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.076 [2024-10-13 01:46:40.515858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.076 qpair failed and we were unable to recover it. 00:35:55.076 [2024-10-13 01:46:40.515954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.076 [2024-10-13 01:46:40.515980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.076 qpair failed and we were unable to recover it. 00:35:55.076 [2024-10-13 01:46:40.516057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.076 [2024-10-13 01:46:40.516085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.076 qpair failed and we were unable to recover it. 00:35:55.076 [2024-10-13 01:46:40.516175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.076 [2024-10-13 01:46:40.516201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.076 qpair failed and we were unable to recover it. 00:35:55.076 [2024-10-13 01:46:40.516287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.076 [2024-10-13 01:46:40.516312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.076 qpair failed and we were unable to recover it. 00:35:55.076 [2024-10-13 01:46:40.516390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.076 [2024-10-13 01:46:40.516415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.076 qpair failed and we were unable to recover it. 00:35:55.076 [2024-10-13 01:46:40.516528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.076 [2024-10-13 01:46:40.516576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4834000b90 with addr=10.0.0.2, port=4420 00:35:55.076 qpair failed and we were unable to recover it. 00:35:55.076 [2024-10-13 01:46:40.516681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.076 [2024-10-13 01:46:40.516708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.076 qpair failed and we were unable to recover it. 
00:35:55.076 [2024-10-13 01:46:40.516789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.076 [2024-10-13 01:46:40.516815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.076 qpair failed and we were unable to recover it. 00:35:55.076 [2024-10-13 01:46:40.516890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.076 [2024-10-13 01:46:40.516916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.076 qpair failed and we were unable to recover it. 00:35:55.076 [2024-10-13 01:46:40.516995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.076 [2024-10-13 01:46:40.517021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f483c000b90 with addr=10.0.0.2, port=4420 00:35:55.076 qpair failed and we were unable to recover it. 00:35:55.076 [2024-10-13 01:46:40.517111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.076 [2024-10-13 01:46:40.517142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4830000b90 with addr=10.0.0.2, port=4420 00:35:55.076 qpair failed and we were unable to recover it. 00:35:55.076 [2024-10-13 01:46:40.517228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.076 [2024-10-13 01:46:40.517256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.076 qpair failed and we were unable to recover it. 00:35:55.076 [2024-10-13 01:46:40.517350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.076 [2024-10-13 01:46:40.517377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.076 qpair failed and we were unable to recover it. 00:35:55.076 [2024-10-13 01:46:40.517462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.076 [2024-10-13 01:46:40.517498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.076 qpair failed and we were unable to recover it. 00:35:55.076 [2024-10-13 01:46:40.517595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.076 [2024-10-13 01:46:40.517621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.076 qpair failed and we were unable to recover it. 00:35:55.076 [2024-10-13 01:46:40.517705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.076 [2024-10-13 01:46:40.517730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.076 qpair failed and we were unable to recover it. 00:35:55.076 [2024-10-13 01:46:40.517812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.076 [2024-10-13 01:46:40.517837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420 00:35:55.076 qpair failed and we were unable to recover it. 
00:35:55.076 [2024-10-13 01:46:40.517919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.076 [2024-10-13 01:46:40.517944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420
00:35:55.076 qpair failed and we were unable to recover it.
00:35:55.076 [2024-10-13 01:46:40.518036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.076 [2024-10-13 01:46:40.518066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420
00:35:55.076 qpair failed and we were unable to recover it.
00:35:55.076 [2024-10-13 01:46:40.518163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.076 [2024-10-13 01:46:40.518188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420
00:35:55.076 qpair failed and we were unable to recover it.
00:35:55.076 [2024-10-13 01:46:40.518272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.076 [2024-10-13 01:46:40.518298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420
00:35:55.076 qpair failed and we were unable to recover it.
00:35:55.076 [2024-10-13 01:46:40.518412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.076 [2024-10-13 01:46:40.518438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44b60 with addr=10.0.0.2, port=4420
00:35:55.076 qpair failed and we were unable to recover it.
00:35:55.076 [2024-10-13 01:46:40.518878] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:35:55.076 [2024-10-13 01:46:40.521191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:55.076 [2024-10-13 01:46:40.521311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:55.076 [2024-10-13 01:46:40.521339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:55.076 [2024-10-13 01:46:40.521355] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:55.076 [2024-10-13 01:46:40.521368] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90
00:35:55.076 [2024-10-13 01:46:40.521401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:55.076 qpair failed and we were unable to recover it.
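The errno = 111 in the entries above is ECONNREFUSED: the host's TCP connect() to 10.0.0.2:4420 is rejected at the socket level because the target is not listening yet. Once the nvmf_tcp_listen NOTICE appears, the failure mode changes: the socket connects, but the target rejects the NVMe-oF Fabrics CONNECT for the I/O queue (_nvmf_ctrlr_add_io_qpair: Unknown controller ID 0x1), which the host reports as "Connect command completed with error: sct 1, sc 130". A minimal sketch of the socket-level half, using plain POSIX calls rather than SPDK's sock layer (only the address and port are taken from the log; everything else is illustrative):

/* Minimal sketch: reproduce the "connect() failed, errno = 111" half of the log
 * with plain POSIX sockets. Address/port come from the log; the rest is illustrative. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With nothing listening on 10.0.0.2:4420 this prints errno 111 (ECONNREFUSED),
         * matching the posix_sock_create errors in the log. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    } else {
        printf("connected\n");
    }

    close(fd);
    return 0;
}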
00:35:55.076 01:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:55.076 01:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:35:55.076 01:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:35:55.076 01:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:55.076 01:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:35:55.076 01:46:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1763330
00:35:55.076 [2024-10-13 01:46:40.530993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:55.076 [2024-10-13 01:46:40.531084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:55.076 [2024-10-13 01:46:40.531110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:55.077 [2024-10-13 01:46:40.531124] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:55.077 [2024-10-13 01:46:40.531137] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90
00:35:55.077 [2024-10-13 01:46:40.531166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:55.077 qpair failed and we were unable to recover it.
00:35:55.077 [2024-10-13 01:46:40.541048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:55.077 [2024-10-13 01:46:40.541150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:55.077 [2024-10-13 01:46:40.541183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:55.077 [2024-10-13 01:46:40.541199] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:55.077 [2024-10-13 01:46:40.541211] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90
00:35:55.077 [2024-10-13 01:46:40.541241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:35:55.077 qpair failed and we were unable to recover it.
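The repeated "Unknown controller ID 0x1" / "CQ transport error -6" cycles that follow are the host retrying Fabrics CONNECTs for I/O queues that reference a controller ID the target no longer has state for, which appears to be exactly the disconnect/reconnect churn nvmf_target_disconnect_tc2 exercises while target_disconnect.sh re-adds the listeners via rpc_cmd. For orientation only, the host-side attempt seen in these entries corresponds roughly to the following sketch against SPDK's public NVMe driver API; the transport ID string is taken from the log, while the program name, environment setup, and error handling are assumptions rather than the test's actual code path:

/* Hedged sketch of the host-side connect attempt reflected in the log, using SPDK's
 * public NVMe driver API. The transport-ID string is copied from the failing CONNECT
 * entries; everything else (name, setup, error handling) is illustrative. */
#include "spdk/env.h"
#include "spdk/nvme.h"
#include <stdio.h>

int main(void)
{
    struct spdk_env_opts env_opts;
    spdk_env_opts_init(&env_opts);
    env_opts.name = "target_disconnect_sketch";   /* illustrative name, not from the test */
    if (spdk_env_init(&env_opts) != 0) {
        fprintf(stderr, "spdk_env_init failed\n");
        return 1;
    }

    struct spdk_nvme_transport_id trid = {0};
    /* Same transport ID the failing CONNECTs report in nvme_fabric_qpair_connect_poll. */
    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        fprintf(stderr, "failed to parse transport ID\n");
        return 1;
    }

    /* spdk_nvme_connect() issues the Fabrics CONNECT; if the target rejects it, as in
     * the "Unknown controller ID 0x1" entries above, this returns NULL. */
    struct spdk_nvme_ctrlr *ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        fprintf(stderr, "connect to %s failed\n", trid.traddr);
        return 1;
    }

    spdk_nvme_detach(ctrlr);
    return 0;
}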
00:35:55.077 [2024-10-13 01:46:40.551041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.077 [2024-10-13 01:46:40.551143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.077 [2024-10-13 01:46:40.551170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.077 [2024-10-13 01:46:40.551185] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.077 [2024-10-13 01:46:40.551197] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.077 [2024-10-13 01:46:40.551226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.077 qpair failed and we were unable to recover it. 00:35:55.077 [2024-10-13 01:46:40.560990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.077 [2024-10-13 01:46:40.561084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.077 [2024-10-13 01:46:40.561109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.077 [2024-10-13 01:46:40.561124] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.077 [2024-10-13 01:46:40.561137] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.077 [2024-10-13 01:46:40.561166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.077 qpair failed and we were unable to recover it. 00:35:55.077 [2024-10-13 01:46:40.570983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.077 [2024-10-13 01:46:40.571078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.077 [2024-10-13 01:46:40.571104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.077 [2024-10-13 01:46:40.571117] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.077 [2024-10-13 01:46:40.571130] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.077 [2024-10-13 01:46:40.571159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.077 qpair failed and we were unable to recover it. 
00:35:55.337 [2024-10-13 01:46:40.581017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.337 [2024-10-13 01:46:40.581116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.337 [2024-10-13 01:46:40.581143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.337 [2024-10-13 01:46:40.581158] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.337 [2024-10-13 01:46:40.581180] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.337 [2024-10-13 01:46:40.581211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.337 qpair failed and we were unable to recover it. 00:35:55.337 [2024-10-13 01:46:40.591091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.337 [2024-10-13 01:46:40.591191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.337 [2024-10-13 01:46:40.591217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.337 [2024-10-13 01:46:40.591231] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.337 [2024-10-13 01:46:40.591243] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.337 [2024-10-13 01:46:40.591285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.337 qpair failed and we were unable to recover it. 00:35:55.337 [2024-10-13 01:46:40.601101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.337 [2024-10-13 01:46:40.601195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.337 [2024-10-13 01:46:40.601221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.337 [2024-10-13 01:46:40.601235] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.337 [2024-10-13 01:46:40.601247] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.337 [2024-10-13 01:46:40.601276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.337 qpair failed and we were unable to recover it. 
00:35:55.337 [2024-10-13 01:46:40.611128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.337 [2024-10-13 01:46:40.611222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.337 [2024-10-13 01:46:40.611247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.337 [2024-10-13 01:46:40.611267] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.337 [2024-10-13 01:46:40.611280] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.337 [2024-10-13 01:46:40.611309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.337 qpair failed and we were unable to recover it. 00:35:55.337 [2024-10-13 01:46:40.621188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.337 [2024-10-13 01:46:40.621282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.337 [2024-10-13 01:46:40.621306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.337 [2024-10-13 01:46:40.621320] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.337 [2024-10-13 01:46:40.621332] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.337 [2024-10-13 01:46:40.621361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.337 qpair failed and we were unable to recover it. 00:35:55.337 [2024-10-13 01:46:40.631188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.337 [2024-10-13 01:46:40.631288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.337 [2024-10-13 01:46:40.631315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.337 [2024-10-13 01:46:40.631329] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.337 [2024-10-13 01:46:40.631341] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.337 [2024-10-13 01:46:40.631382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.337 qpair failed and we were unable to recover it. 
00:35:55.337 [2024-10-13 01:46:40.641201] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.337 [2024-10-13 01:46:40.641286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.337 [2024-10-13 01:46:40.641311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.337 [2024-10-13 01:46:40.641326] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.337 [2024-10-13 01:46:40.641338] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.337 [2024-10-13 01:46:40.641368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.337 qpair failed and we were unable to recover it. 00:35:55.337 [2024-10-13 01:46:40.651212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.337 [2024-10-13 01:46:40.651305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.338 [2024-10-13 01:46:40.651330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.338 [2024-10-13 01:46:40.651344] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.338 [2024-10-13 01:46:40.651357] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.338 [2024-10-13 01:46:40.651386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.338 qpair failed and we were unable to recover it. 00:35:55.338 [2024-10-13 01:46:40.661267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.338 [2024-10-13 01:46:40.661361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.338 [2024-10-13 01:46:40.661388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.338 [2024-10-13 01:46:40.661402] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.338 [2024-10-13 01:46:40.661414] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.338 [2024-10-13 01:46:40.661443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.338 qpair failed and we were unable to recover it. 
00:35:55.338 [2024-10-13 01:46:40.671311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.338 [2024-10-13 01:46:40.671420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.338 [2024-10-13 01:46:40.671446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.338 [2024-10-13 01:46:40.671461] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.338 [2024-10-13 01:46:40.671490] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.338 [2024-10-13 01:46:40.671522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.338 qpair failed and we were unable to recover it. 00:35:55.338 [2024-10-13 01:46:40.681396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.338 [2024-10-13 01:46:40.681493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.338 [2024-10-13 01:46:40.681520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.338 [2024-10-13 01:46:40.681534] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.338 [2024-10-13 01:46:40.681546] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.338 [2024-10-13 01:46:40.681575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.338 qpair failed and we were unable to recover it. 00:35:55.338 [2024-10-13 01:46:40.691342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.338 [2024-10-13 01:46:40.691444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.338 [2024-10-13 01:46:40.691477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.338 [2024-10-13 01:46:40.691493] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.338 [2024-10-13 01:46:40.691506] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.338 [2024-10-13 01:46:40.691536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.338 qpair failed and we were unable to recover it. 
00:35:55.338 [2024-10-13 01:46:40.701342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.338 [2024-10-13 01:46:40.701433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.338 [2024-10-13 01:46:40.701459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.338 [2024-10-13 01:46:40.701482] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.338 [2024-10-13 01:46:40.701497] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.338 [2024-10-13 01:46:40.701527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.338 qpair failed and we were unable to recover it. 00:35:55.338 [2024-10-13 01:46:40.711381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.338 [2024-10-13 01:46:40.711481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.338 [2024-10-13 01:46:40.711507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.338 [2024-10-13 01:46:40.711520] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.338 [2024-10-13 01:46:40.711533] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.338 [2024-10-13 01:46:40.711562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.338 qpair failed and we were unable to recover it. 00:35:55.338 [2024-10-13 01:46:40.721432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.338 [2024-10-13 01:46:40.721537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.338 [2024-10-13 01:46:40.721564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.338 [2024-10-13 01:46:40.721579] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.338 [2024-10-13 01:46:40.721591] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.338 [2024-10-13 01:46:40.721620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.338 qpair failed and we were unable to recover it. 
00:35:55.338 [2024-10-13 01:46:40.731461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.338 [2024-10-13 01:46:40.731568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.338 [2024-10-13 01:46:40.731593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.338 [2024-10-13 01:46:40.731607] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.338 [2024-10-13 01:46:40.731619] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.338 [2024-10-13 01:46:40.731648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.338 qpair failed and we were unable to recover it. 00:35:55.338 [2024-10-13 01:46:40.741464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.338 [2024-10-13 01:46:40.741564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.338 [2024-10-13 01:46:40.741590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.338 [2024-10-13 01:46:40.741604] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.338 [2024-10-13 01:46:40.741616] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.338 [2024-10-13 01:46:40.741645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.338 qpair failed and we were unable to recover it. 00:35:55.338 [2024-10-13 01:46:40.751501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.338 [2024-10-13 01:46:40.751597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.338 [2024-10-13 01:46:40.751623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.338 [2024-10-13 01:46:40.751637] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.338 [2024-10-13 01:46:40.751649] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.338 [2024-10-13 01:46:40.751678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.338 qpair failed and we were unable to recover it. 
00:35:55.338 [2024-10-13 01:46:40.761523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.338 [2024-10-13 01:46:40.761613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.338 [2024-10-13 01:46:40.761637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.338 [2024-10-13 01:46:40.761656] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.338 [2024-10-13 01:46:40.761670] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.338 [2024-10-13 01:46:40.761699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.338 qpair failed and we were unable to recover it. 00:35:55.338 [2024-10-13 01:46:40.771652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.338 [2024-10-13 01:46:40.771752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.338 [2024-10-13 01:46:40.771778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.338 [2024-10-13 01:46:40.771792] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.338 [2024-10-13 01:46:40.771804] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.339 [2024-10-13 01:46:40.771834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.339 qpair failed and we were unable to recover it. 00:35:55.339 [2024-10-13 01:46:40.781581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.339 [2024-10-13 01:46:40.781670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.339 [2024-10-13 01:46:40.781694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.339 [2024-10-13 01:46:40.781708] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.339 [2024-10-13 01:46:40.781720] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.339 [2024-10-13 01:46:40.781749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.339 qpair failed and we were unable to recover it. 
00:35:55.339 [2024-10-13 01:46:40.791615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.339 [2024-10-13 01:46:40.791707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.339 [2024-10-13 01:46:40.791732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.339 [2024-10-13 01:46:40.791746] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.339 [2024-10-13 01:46:40.791758] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.339 [2024-10-13 01:46:40.791787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.339 qpair failed and we were unable to recover it. 00:35:55.339 [2024-10-13 01:46:40.801656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.339 [2024-10-13 01:46:40.801744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.339 [2024-10-13 01:46:40.801769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.339 [2024-10-13 01:46:40.801782] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.339 [2024-10-13 01:46:40.801796] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.339 [2024-10-13 01:46:40.801837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.339 qpair failed and we were unable to recover it. 00:35:55.339 [2024-10-13 01:46:40.811815] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.339 [2024-10-13 01:46:40.811909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.339 [2024-10-13 01:46:40.811939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.339 [2024-10-13 01:46:40.811955] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.339 [2024-10-13 01:46:40.811968] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.339 [2024-10-13 01:46:40.811999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.339 qpair failed and we were unable to recover it. 
00:35:55.339 [2024-10-13 01:46:40.821715] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.339 [2024-10-13 01:46:40.821811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.339 [2024-10-13 01:46:40.821838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.339 [2024-10-13 01:46:40.821853] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.339 [2024-10-13 01:46:40.821866] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.339 [2024-10-13 01:46:40.821907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.339 qpair failed and we were unable to recover it. 00:35:55.339 [2024-10-13 01:46:40.831787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.339 [2024-10-13 01:46:40.831881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.339 [2024-10-13 01:46:40.831911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.339 [2024-10-13 01:46:40.831927] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.339 [2024-10-13 01:46:40.831939] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.339 [2024-10-13 01:46:40.831970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.339 qpair failed and we were unable to recover it. 00:35:55.339 [2024-10-13 01:46:40.841754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.339 [2024-10-13 01:46:40.841847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.339 [2024-10-13 01:46:40.841872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.339 [2024-10-13 01:46:40.841886] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.339 [2024-10-13 01:46:40.841898] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.339 [2024-10-13 01:46:40.841928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.339 qpair failed and we were unable to recover it. 
00:35:55.339 [2024-10-13 01:46:40.851784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.339 [2024-10-13 01:46:40.851871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.339 [2024-10-13 01:46:40.851895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.339 [2024-10-13 01:46:40.851915] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.339 [2024-10-13 01:46:40.851929] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.339 [2024-10-13 01:46:40.851959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.339 qpair failed and we were unable to recover it. 00:35:55.339 [2024-10-13 01:46:40.861860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.339 [2024-10-13 01:46:40.861949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.339 [2024-10-13 01:46:40.861975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.339 [2024-10-13 01:46:40.861989] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.339 [2024-10-13 01:46:40.862002] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.339 [2024-10-13 01:46:40.862043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.339 qpair failed and we were unable to recover it. 00:35:55.339 [2024-10-13 01:46:40.871833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.339 [2024-10-13 01:46:40.871927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.339 [2024-10-13 01:46:40.871951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.339 [2024-10-13 01:46:40.871965] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.339 [2024-10-13 01:46:40.871978] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.339 [2024-10-13 01:46:40.872008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.339 qpair failed and we were unable to recover it. 
00:35:55.339 [2024-10-13 01:46:40.881850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.339 [2024-10-13 01:46:40.881944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.339 [2024-10-13 01:46:40.881970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.339 [2024-10-13 01:46:40.881984] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.339 [2024-10-13 01:46:40.881997] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.339 [2024-10-13 01:46:40.882026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.339 qpair failed and we were unable to recover it. 00:35:55.339 [2024-10-13 01:46:40.891933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.339 [2024-10-13 01:46:40.892025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.339 [2024-10-13 01:46:40.892055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.339 [2024-10-13 01:46:40.892070] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.339 [2024-10-13 01:46:40.892083] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.339 [2024-10-13 01:46:40.892113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.339 qpair failed and we were unable to recover it. 00:35:55.339 [2024-10-13 01:46:40.901939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.340 [2024-10-13 01:46:40.902042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.340 [2024-10-13 01:46:40.902072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.340 [2024-10-13 01:46:40.902090] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.340 [2024-10-13 01:46:40.902102] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.340 [2024-10-13 01:46:40.902132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.340 qpair failed and we were unable to recover it. 
00:35:55.340 [2024-10-13 01:46:40.911959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.340 [2024-10-13 01:46:40.912052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.340 [2024-10-13 01:46:40.912082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.340 [2024-10-13 01:46:40.912097] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.340 [2024-10-13 01:46:40.912110] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.340 [2024-10-13 01:46:40.912139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.340 qpair failed and we were unable to recover it. 00:35:55.614 [2024-10-13 01:46:40.922103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.614 [2024-10-13 01:46:40.922203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.614 [2024-10-13 01:46:40.922231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.614 [2024-10-13 01:46:40.922246] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.614 [2024-10-13 01:46:40.922259] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.614 [2024-10-13 01:46:40.922289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.614 qpair failed and we were unable to recover it. 00:35:55.614 [2024-10-13 01:46:40.932022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.614 [2024-10-13 01:46:40.932116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.614 [2024-10-13 01:46:40.932140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.614 [2024-10-13 01:46:40.932154] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.614 [2024-10-13 01:46:40.932166] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.614 [2024-10-13 01:46:40.932196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.614 qpair failed and we were unable to recover it. 
00:35:55.614 [2024-10-13 01:46:40.942062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.614 [2024-10-13 01:46:40.942188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.614 [2024-10-13 01:46:40.942223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.614 [2024-10-13 01:46:40.942239] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.614 [2024-10-13 01:46:40.942252] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.614 [2024-10-13 01:46:40.942282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.614 qpair failed and we were unable to recover it. 00:35:55.614 [2024-10-13 01:46:40.952060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.614 [2024-10-13 01:46:40.952176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.614 [2024-10-13 01:46:40.952202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.614 [2024-10-13 01:46:40.952216] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.614 [2024-10-13 01:46:40.952230] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.614 [2024-10-13 01:46:40.952259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.614 qpair failed and we were unable to recover it. 00:35:55.614 [2024-10-13 01:46:40.962141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.614 [2024-10-13 01:46:40.962252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.614 [2024-10-13 01:46:40.962278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.614 [2024-10-13 01:46:40.962292] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.614 [2024-10-13 01:46:40.962305] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.614 [2024-10-13 01:46:40.962334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.614 qpair failed and we were unable to recover it. 
00:35:55.614 [2024-10-13 01:46:40.972146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.614 [2024-10-13 01:46:40.972232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.614 [2024-10-13 01:46:40.972256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.614 [2024-10-13 01:46:40.972269] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.614 [2024-10-13 01:46:40.972282] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.614 [2024-10-13 01:46:40.972311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.614 qpair failed and we were unable to recover it. 00:35:55.614 [2024-10-13 01:46:40.982150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.614 [2024-10-13 01:46:40.982245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.614 [2024-10-13 01:46:40.982271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.614 [2024-10-13 01:46:40.982285] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.614 [2024-10-13 01:46:40.982298] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.614 [2024-10-13 01:46:40.982333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.614 qpair failed and we were unable to recover it. 00:35:55.614 [2024-10-13 01:46:40.992185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.614 [2024-10-13 01:46:40.992282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.614 [2024-10-13 01:46:40.992306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.614 [2024-10-13 01:46:40.992320] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.614 [2024-10-13 01:46:40.992333] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.614 [2024-10-13 01:46:40.992362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.614 qpair failed and we were unable to recover it. 
00:35:55.614 [2024-10-13 01:46:41.002298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.614 [2024-10-13 01:46:41.002392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.614 [2024-10-13 01:46:41.002418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.614 [2024-10-13 01:46:41.002433] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.614 [2024-10-13 01:46:41.002445] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.614 [2024-10-13 01:46:41.002481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.614 qpair failed and we were unable to recover it. 00:35:55.614 [2024-10-13 01:46:41.012226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.614 [2024-10-13 01:46:41.012316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.614 [2024-10-13 01:46:41.012340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.614 [2024-10-13 01:46:41.012355] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.615 [2024-10-13 01:46:41.012367] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.615 [2024-10-13 01:46:41.012397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.615 qpair failed and we were unable to recover it. 00:35:55.615 [2024-10-13 01:46:41.022253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.615 [2024-10-13 01:46:41.022341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.615 [2024-10-13 01:46:41.022366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.615 [2024-10-13 01:46:41.022380] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.615 [2024-10-13 01:46:41.022393] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.615 [2024-10-13 01:46:41.022423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.615 qpair failed and we were unable to recover it. 
00:35:55.615 [2024-10-13 01:46:41.032286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.615 [2024-10-13 01:46:41.032385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.615 [2024-10-13 01:46:41.032419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.615 [2024-10-13 01:46:41.032435] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.615 [2024-10-13 01:46:41.032447] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.615 [2024-10-13 01:46:41.032489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.615 qpair failed and we were unable to recover it. 00:35:55.615 [2024-10-13 01:46:41.042346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.615 [2024-10-13 01:46:41.042440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.615 [2024-10-13 01:46:41.042479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.615 [2024-10-13 01:46:41.042496] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.615 [2024-10-13 01:46:41.042509] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.615 [2024-10-13 01:46:41.042552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.615 qpair failed and we were unable to recover it. 00:35:55.615 [2024-10-13 01:46:41.052363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.615 [2024-10-13 01:46:41.052454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.615 [2024-10-13 01:46:41.052486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.615 [2024-10-13 01:46:41.052501] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.615 [2024-10-13 01:46:41.052513] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.615 [2024-10-13 01:46:41.052544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.615 qpair failed and we were unable to recover it. 
00:35:55.615 [2024-10-13 01:46:41.062386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.615 [2024-10-13 01:46:41.062485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.615 [2024-10-13 01:46:41.062510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.615 [2024-10-13 01:46:41.062524] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.615 [2024-10-13 01:46:41.062537] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.615 [2024-10-13 01:46:41.062580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.615 qpair failed and we were unable to recover it. 00:35:55.615 [2024-10-13 01:46:41.072440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.615 [2024-10-13 01:46:41.072541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.615 [2024-10-13 01:46:41.072568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.615 [2024-10-13 01:46:41.072582] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.615 [2024-10-13 01:46:41.072595] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.615 [2024-10-13 01:46:41.072642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.615 qpair failed and we were unable to recover it. 00:35:55.615 [2024-10-13 01:46:41.082418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.615 [2024-10-13 01:46:41.082518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.615 [2024-10-13 01:46:41.082545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.615 [2024-10-13 01:46:41.082559] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.615 [2024-10-13 01:46:41.082571] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.615 [2024-10-13 01:46:41.082600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.615 qpair failed and we were unable to recover it. 
00:35:55.615 [2024-10-13 01:46:41.092486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.615 [2024-10-13 01:46:41.092575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.615 [2024-10-13 01:46:41.092610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.615 [2024-10-13 01:46:41.092624] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.615 [2024-10-13 01:46:41.092637] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.615 [2024-10-13 01:46:41.092667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.615 qpair failed and we were unable to recover it. 00:35:55.615 [2024-10-13 01:46:41.102515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.615 [2024-10-13 01:46:41.102607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.615 [2024-10-13 01:46:41.102633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.615 [2024-10-13 01:46:41.102647] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.615 [2024-10-13 01:46:41.102660] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.615 [2024-10-13 01:46:41.102701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.615 qpair failed and we were unable to recover it. 00:35:55.615 [2024-10-13 01:46:41.112553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.615 [2024-10-13 01:46:41.112692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.615 [2024-10-13 01:46:41.112722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.615 [2024-10-13 01:46:41.112737] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.615 [2024-10-13 01:46:41.112749] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.615 [2024-10-13 01:46:41.112779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.615 qpair failed and we were unable to recover it. 
00:35:55.615 [2024-10-13 01:46:41.122540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.615 [2024-10-13 01:46:41.122635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.615 [2024-10-13 01:46:41.122659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.615 [2024-10-13 01:46:41.122673] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.615 [2024-10-13 01:46:41.122686] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.615 [2024-10-13 01:46:41.122714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.615 qpair failed and we were unable to recover it. 00:35:55.615 [2024-10-13 01:46:41.132569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.615 [2024-10-13 01:46:41.132656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.615 [2024-10-13 01:46:41.132680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.615 [2024-10-13 01:46:41.132695] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.615 [2024-10-13 01:46:41.132708] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.615 [2024-10-13 01:46:41.132737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.615 qpair failed and we were unable to recover it. 00:35:55.615 [2024-10-13 01:46:41.142628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.615 [2024-10-13 01:46:41.142716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.615 [2024-10-13 01:46:41.142740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.615 [2024-10-13 01:46:41.142754] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.615 [2024-10-13 01:46:41.142767] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.615 [2024-10-13 01:46:41.142796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.615 qpair failed and we were unable to recover it. 
00:35:55.615 [2024-10-13 01:46:41.152673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.615 [2024-10-13 01:46:41.152764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.615 [2024-10-13 01:46:41.152788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.615 [2024-10-13 01:46:41.152802] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.615 [2024-10-13 01:46:41.152815] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.615 [2024-10-13 01:46:41.152844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.616 qpair failed and we were unable to recover it. 00:35:55.616 [2024-10-13 01:46:41.162659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.616 [2024-10-13 01:46:41.162754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.616 [2024-10-13 01:46:41.162780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.616 [2024-10-13 01:46:41.162794] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.616 [2024-10-13 01:46:41.162812] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.616 [2024-10-13 01:46:41.162842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.616 qpair failed and we were unable to recover it. 00:35:55.616 [2024-10-13 01:46:41.172681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.616 [2024-10-13 01:46:41.172769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.616 [2024-10-13 01:46:41.172793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.616 [2024-10-13 01:46:41.172807] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.616 [2024-10-13 01:46:41.172819] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.616 [2024-10-13 01:46:41.172848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.616 qpair failed and we were unable to recover it. 
00:35:55.616 [2024-10-13 01:46:41.182714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.616 [2024-10-13 01:46:41.182798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.616 [2024-10-13 01:46:41.182823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.616 [2024-10-13 01:46:41.182837] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.616 [2024-10-13 01:46:41.182850] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.616 [2024-10-13 01:46:41.182880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.616 qpair failed and we were unable to recover it. 00:35:55.875 [2024-10-13 01:46:41.192750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.875 [2024-10-13 01:46:41.192848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.875 [2024-10-13 01:46:41.192873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.875 [2024-10-13 01:46:41.192887] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.875 [2024-10-13 01:46:41.192899] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.875 [2024-10-13 01:46:41.192928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.875 qpair failed and we were unable to recover it. 00:35:55.875 [2024-10-13 01:46:41.202762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.875 [2024-10-13 01:46:41.202863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.875 [2024-10-13 01:46:41.202888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.875 [2024-10-13 01:46:41.202902] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.875 [2024-10-13 01:46:41.202914] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.875 [2024-10-13 01:46:41.202944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.875 qpair failed and we were unable to recover it. 
00:35:55.875 [2024-10-13 01:46:41.212820] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.875 [2024-10-13 01:46:41.212917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.875 [2024-10-13 01:46:41.212941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.875 [2024-10-13 01:46:41.212955] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.875 [2024-10-13 01:46:41.212967] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.875 [2024-10-13 01:46:41.212997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.875 qpair failed and we were unable to recover it. 00:35:55.875 [2024-10-13 01:46:41.222911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.875 [2024-10-13 01:46:41.222993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.875 [2024-10-13 01:46:41.223018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.875 [2024-10-13 01:46:41.223032] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.875 [2024-10-13 01:46:41.223043] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.875 [2024-10-13 01:46:41.223072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.875 qpair failed and we were unable to recover it. 00:35:55.875 [2024-10-13 01:46:41.232864] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.875 [2024-10-13 01:46:41.232958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.875 [2024-10-13 01:46:41.232982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.875 [2024-10-13 01:46:41.232996] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.875 [2024-10-13 01:46:41.233008] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.875 [2024-10-13 01:46:41.233037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.875 qpair failed and we were unable to recover it. 
00:35:55.875 [2024-10-13 01:46:41.242875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.875 [2024-10-13 01:46:41.242971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.875 [2024-10-13 01:46:41.242995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.875 [2024-10-13 01:46:41.243009] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.875 [2024-10-13 01:46:41.243021] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.875 [2024-10-13 01:46:41.243050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.875 qpair failed and we were unable to recover it. 00:35:55.875 [2024-10-13 01:46:41.252897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.875 [2024-10-13 01:46:41.252983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.875 [2024-10-13 01:46:41.253007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.875 [2024-10-13 01:46:41.253027] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.875 [2024-10-13 01:46:41.253040] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.875 [2024-10-13 01:46:41.253069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.875 qpair failed and we were unable to recover it. 00:35:55.875 [2024-10-13 01:46:41.262930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.875 [2024-10-13 01:46:41.263021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.875 [2024-10-13 01:46:41.263045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.875 [2024-10-13 01:46:41.263059] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.875 [2024-10-13 01:46:41.263072] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.875 [2024-10-13 01:46:41.263101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.875 qpair failed and we were unable to recover it. 
00:35:55.875 [2024-10-13 01:46:41.273015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.875 [2024-10-13 01:46:41.273107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.875 [2024-10-13 01:46:41.273132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.875 [2024-10-13 01:46:41.273146] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.875 [2024-10-13 01:46:41.273158] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.875 [2024-10-13 01:46:41.273199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.875 qpair failed and we were unable to recover it. 00:35:55.875 [2024-10-13 01:46:41.283040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.875 [2024-10-13 01:46:41.283136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.875 [2024-10-13 01:46:41.283163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.875 [2024-10-13 01:46:41.283178] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.875 [2024-10-13 01:46:41.283191] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.875 [2024-10-13 01:46:41.283220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.875 qpair failed and we were unable to recover it. 00:35:55.875 [2024-10-13 01:46:41.293059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.875 [2024-10-13 01:46:41.293155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.875 [2024-10-13 01:46:41.293179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.875 [2024-10-13 01:46:41.293201] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.875 [2024-10-13 01:46:41.293213] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.875 [2024-10-13 01:46:41.293242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.875 qpair failed and we were unable to recover it. 
00:35:55.876 [2024-10-13 01:46:41.303079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.876 [2024-10-13 01:46:41.303165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.876 [2024-10-13 01:46:41.303191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.876 [2024-10-13 01:46:41.303205] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.876 [2024-10-13 01:46:41.303217] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.876 [2024-10-13 01:46:41.303246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.876 qpair failed and we were unable to recover it. 00:35:55.876 [2024-10-13 01:46:41.313140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.876 [2024-10-13 01:46:41.313232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.876 [2024-10-13 01:46:41.313256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.876 [2024-10-13 01:46:41.313270] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.876 [2024-10-13 01:46:41.313282] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.876 [2024-10-13 01:46:41.313312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.876 qpair failed and we were unable to recover it. 00:35:55.876 [2024-10-13 01:46:41.323273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.876 [2024-10-13 01:46:41.323371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.876 [2024-10-13 01:46:41.323396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.876 [2024-10-13 01:46:41.323410] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.876 [2024-10-13 01:46:41.323422] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.876 [2024-10-13 01:46:41.323451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.876 qpair failed and we were unable to recover it. 
00:35:55.876 [2024-10-13 01:46:41.333197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.876 [2024-10-13 01:46:41.333282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.876 [2024-10-13 01:46:41.333306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.876 [2024-10-13 01:46:41.333321] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.876 [2024-10-13 01:46:41.333333] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.876 [2024-10-13 01:46:41.333362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.876 qpair failed and we were unable to recover it. 00:35:55.876 [2024-10-13 01:46:41.343325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.876 [2024-10-13 01:46:41.343411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.876 [2024-10-13 01:46:41.343436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.876 [2024-10-13 01:46:41.343455] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.876 [2024-10-13 01:46:41.343468] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.876 [2024-10-13 01:46:41.343509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.876 qpair failed and we were unable to recover it. 00:35:55.876 [2024-10-13 01:46:41.353291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.876 [2024-10-13 01:46:41.353396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.876 [2024-10-13 01:46:41.353421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.876 [2024-10-13 01:46:41.353434] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.876 [2024-10-13 01:46:41.353447] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.876 [2024-10-13 01:46:41.353487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.876 qpair failed and we were unable to recover it. 
00:35:55.876 [2024-10-13 01:46:41.363261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.876 [2024-10-13 01:46:41.363346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.876 [2024-10-13 01:46:41.363371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.876 [2024-10-13 01:46:41.363385] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.876 [2024-10-13 01:46:41.363397] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.876 [2024-10-13 01:46:41.363426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.876 qpair failed and we were unable to recover it. 00:35:55.876 [2024-10-13 01:46:41.373288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.876 [2024-10-13 01:46:41.373404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.876 [2024-10-13 01:46:41.373430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.876 [2024-10-13 01:46:41.373444] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.876 [2024-10-13 01:46:41.373456] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.876 [2024-10-13 01:46:41.373493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.876 qpair failed and we were unable to recover it. 00:35:55.876 [2024-10-13 01:46:41.383304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.876 [2024-10-13 01:46:41.383386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.876 [2024-10-13 01:46:41.383411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.876 [2024-10-13 01:46:41.383426] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.876 [2024-10-13 01:46:41.383439] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.876 [2024-10-13 01:46:41.383468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.876 qpair failed and we were unable to recover it. 
00:35:55.876 [2024-10-13 01:46:41.393368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.876 [2024-10-13 01:46:41.393485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.876 [2024-10-13 01:46:41.393523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.876 [2024-10-13 01:46:41.393538] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.876 [2024-10-13 01:46:41.393551] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.876 [2024-10-13 01:46:41.393581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.876 qpair failed and we were unable to recover it. 00:35:55.876 [2024-10-13 01:46:41.403442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.876 [2024-10-13 01:46:41.403556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.876 [2024-10-13 01:46:41.403581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.876 [2024-10-13 01:46:41.403595] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.876 [2024-10-13 01:46:41.403607] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.876 [2024-10-13 01:46:41.403636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.876 qpair failed and we were unable to recover it. 00:35:55.876 [2024-10-13 01:46:41.413393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.876 [2024-10-13 01:46:41.413483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.876 [2024-10-13 01:46:41.413519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.876 [2024-10-13 01:46:41.413534] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.876 [2024-10-13 01:46:41.413546] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.876 [2024-10-13 01:46:41.413576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.876 qpair failed and we were unable to recover it. 
00:35:55.876 [2024-10-13 01:46:41.423414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.876 [2024-10-13 01:46:41.423501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.876 [2024-10-13 01:46:41.423526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.876 [2024-10-13 01:46:41.423540] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.876 [2024-10-13 01:46:41.423553] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.876 [2024-10-13 01:46:41.423582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.876 qpair failed and we were unable to recover it. 00:35:55.876 [2024-10-13 01:46:41.433480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.876 [2024-10-13 01:46:41.433573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.876 [2024-10-13 01:46:41.433603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.876 [2024-10-13 01:46:41.433618] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.876 [2024-10-13 01:46:41.433630] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.876 [2024-10-13 01:46:41.433660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.876 qpair failed and we were unable to recover it. 00:35:55.876 [2024-10-13 01:46:41.443457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:55.877 [2024-10-13 01:46:41.443553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:55.877 [2024-10-13 01:46:41.443579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:55.877 [2024-10-13 01:46:41.443593] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:55.877 [2024-10-13 01:46:41.443605] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:55.877 [2024-10-13 01:46:41.443635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:55.877 qpair failed and we were unable to recover it. 
00:35:56.136 [2024-10-13 01:46:41.453532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.136 [2024-10-13 01:46:41.453621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.136 [2024-10-13 01:46:41.453648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.136 [2024-10-13 01:46:41.453662] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.136 [2024-10-13 01:46:41.453674] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.136 [2024-10-13 01:46:41.453703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.136 qpair failed and we were unable to recover it. 00:35:56.136 [2024-10-13 01:46:41.463550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.136 [2024-10-13 01:46:41.463634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.136 [2024-10-13 01:46:41.463660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.136 [2024-10-13 01:46:41.463674] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.136 [2024-10-13 01:46:41.463686] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.136 [2024-10-13 01:46:41.463727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.136 qpair failed and we were unable to recover it. 00:35:56.136 [2024-10-13 01:46:41.473660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.136 [2024-10-13 01:46:41.473750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.136 [2024-10-13 01:46:41.473774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.136 [2024-10-13 01:46:41.473788] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.136 [2024-10-13 01:46:41.473801] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.136 [2024-10-13 01:46:41.473836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.136 qpair failed and we were unable to recover it. 
00:35:56.136 [2024-10-13 01:46:41.483692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.136 [2024-10-13 01:46:41.483816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.136 [2024-10-13 01:46:41.483842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.136 [2024-10-13 01:46:41.483856] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.136 [2024-10-13 01:46:41.483869] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.136 [2024-10-13 01:46:41.483898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.136 qpair failed and we were unable to recover it. 00:35:56.136 [2024-10-13 01:46:41.493617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.136 [2024-10-13 01:46:41.493699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.136 [2024-10-13 01:46:41.493723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.136 [2024-10-13 01:46:41.493737] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.136 [2024-10-13 01:46:41.493749] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.136 [2024-10-13 01:46:41.493778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.136 qpair failed and we were unable to recover it. 00:35:56.136 [2024-10-13 01:46:41.503658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.136 [2024-10-13 01:46:41.503747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.136 [2024-10-13 01:46:41.503771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.136 [2024-10-13 01:46:41.503785] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.136 [2024-10-13 01:46:41.503798] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.136 [2024-10-13 01:46:41.503827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.136 qpair failed and we were unable to recover it. 
00:35:56.136 [2024-10-13 01:46:41.513706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.136 [2024-10-13 01:46:41.513798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.136 [2024-10-13 01:46:41.513823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.136 [2024-10-13 01:46:41.513837] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.136 [2024-10-13 01:46:41.513850] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.136 [2024-10-13 01:46:41.513879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.136 qpair failed and we were unable to recover it. 00:35:56.136 [2024-10-13 01:46:41.523710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.136 [2024-10-13 01:46:41.523807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.136 [2024-10-13 01:46:41.523836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.136 [2024-10-13 01:46:41.523851] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.136 [2024-10-13 01:46:41.523864] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.136 [2024-10-13 01:46:41.523893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.136 qpair failed and we were unable to recover it. 00:35:56.136 [2024-10-13 01:46:41.533768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.136 [2024-10-13 01:46:41.533868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.136 [2024-10-13 01:46:41.533894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.136 [2024-10-13 01:46:41.533909] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.136 [2024-10-13 01:46:41.533922] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.136 [2024-10-13 01:46:41.533952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.136 qpair failed and we were unable to recover it. 
00:35:56.136 [2024-10-13 01:46:41.543832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.136 [2024-10-13 01:46:41.543947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.136 [2024-10-13 01:46:41.543974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.136 [2024-10-13 01:46:41.543989] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.136 [2024-10-13 01:46:41.544002] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.136 [2024-10-13 01:46:41.544031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.136 qpair failed and we were unable to recover it. 00:35:56.136 [2024-10-13 01:46:41.553924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.136 [2024-10-13 01:46:41.554015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.136 [2024-10-13 01:46:41.554040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.136 [2024-10-13 01:46:41.554054] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.136 [2024-10-13 01:46:41.554067] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.137 [2024-10-13 01:46:41.554096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.137 qpair failed and we were unable to recover it. 00:35:56.137 [2024-10-13 01:46:41.563823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.137 [2024-10-13 01:46:41.563914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.137 [2024-10-13 01:46:41.563941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.137 [2024-10-13 01:46:41.563955] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.137 [2024-10-13 01:46:41.563967] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.137 [2024-10-13 01:46:41.564002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.137 qpair failed and we were unable to recover it. 
00:35:56.137 [2024-10-13 01:46:41.573877] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.137 [2024-10-13 01:46:41.573961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.137 [2024-10-13 01:46:41.573986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.137 [2024-10-13 01:46:41.574000] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.137 [2024-10-13 01:46:41.574012] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.137 [2024-10-13 01:46:41.574041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.137 qpair failed and we were unable to recover it. 00:35:56.137 [2024-10-13 01:46:41.583939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.137 [2024-10-13 01:46:41.584031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.137 [2024-10-13 01:46:41.584060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.137 [2024-10-13 01:46:41.584076] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.137 [2024-10-13 01:46:41.584089] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.137 [2024-10-13 01:46:41.584120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.137 qpair failed and we were unable to recover it. 00:35:56.137 [2024-10-13 01:46:41.593958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.137 [2024-10-13 01:46:41.594078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.137 [2024-10-13 01:46:41.594103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.137 [2024-10-13 01:46:41.594117] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.137 [2024-10-13 01:46:41.594131] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.137 [2024-10-13 01:46:41.594160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.137 qpair failed and we were unable to recover it. 
00:35:56.137 [2024-10-13 01:46:41.604041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.137 [2024-10-13 01:46:41.604178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.137 [2024-10-13 01:46:41.604203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.137 [2024-10-13 01:46:41.604218] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.137 [2024-10-13 01:46:41.604231] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.137 [2024-10-13 01:46:41.604260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.137 qpair failed and we were unable to recover it. 00:35:56.137 [2024-10-13 01:46:41.613970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.137 [2024-10-13 01:46:41.614053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.137 [2024-10-13 01:46:41.614083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.137 [2024-10-13 01:46:41.614098] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.137 [2024-10-13 01:46:41.614111] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.137 [2024-10-13 01:46:41.614141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.137 qpair failed and we were unable to recover it. 00:35:56.137 [2024-10-13 01:46:41.624098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.137 [2024-10-13 01:46:41.624194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.137 [2024-10-13 01:46:41.624218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.137 [2024-10-13 01:46:41.624232] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.137 [2024-10-13 01:46:41.624245] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.137 [2024-10-13 01:46:41.624273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.137 qpair failed and we were unable to recover it. 
00:35:56.137 [2024-10-13 01:46:41.634101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.137 [2024-10-13 01:46:41.634205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.137 [2024-10-13 01:46:41.634230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.137 [2024-10-13 01:46:41.634244] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.137 [2024-10-13 01:46:41.634257] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.137 [2024-10-13 01:46:41.634286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.137 qpair failed and we were unable to recover it. 00:35:56.137 [2024-10-13 01:46:41.644097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.137 [2024-10-13 01:46:41.644182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.137 [2024-10-13 01:46:41.644207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.137 [2024-10-13 01:46:41.644221] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.137 [2024-10-13 01:46:41.644234] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.137 [2024-10-13 01:46:41.644278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.137 qpair failed and we were unable to recover it. 00:35:56.137 [2024-10-13 01:46:41.654180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.137 [2024-10-13 01:46:41.654269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.137 [2024-10-13 01:46:41.654293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.137 [2024-10-13 01:46:41.654308] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.137 [2024-10-13 01:46:41.654326] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.137 [2024-10-13 01:46:41.654356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.137 qpair failed and we were unable to recover it. 
00:35:56.137 [2024-10-13 01:46:41.664097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.137 [2024-10-13 01:46:41.664183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.137 [2024-10-13 01:46:41.664208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.137 [2024-10-13 01:46:41.664222] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.137 [2024-10-13 01:46:41.664235] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.137 [2024-10-13 01:46:41.664264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.137 qpair failed and we were unable to recover it. 00:35:56.137 [2024-10-13 01:46:41.674141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.137 [2024-10-13 01:46:41.674228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.137 [2024-10-13 01:46:41.674253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.137 [2024-10-13 01:46:41.674267] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.137 [2024-10-13 01:46:41.674280] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.137 [2024-10-13 01:46:41.674309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.137 qpair failed and we were unable to recover it. 00:35:56.137 [2024-10-13 01:46:41.684176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.137 [2024-10-13 01:46:41.684261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.137 [2024-10-13 01:46:41.684286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.137 [2024-10-13 01:46:41.684300] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.137 [2024-10-13 01:46:41.684313] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.137 [2024-10-13 01:46:41.684342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.137 qpair failed and we were unable to recover it. 
00:35:56.137 [2024-10-13 01:46:41.694262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.137 [2024-10-13 01:46:41.694346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.137 [2024-10-13 01:46:41.694371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.137 [2024-10-13 01:46:41.694385] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.137 [2024-10-13 01:46:41.694397] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.137 [2024-10-13 01:46:41.694426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.137 qpair failed and we were unable to recover it. 00:35:56.138 [2024-10-13 01:46:41.704212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.138 [2024-10-13 01:46:41.704306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.138 [2024-10-13 01:46:41.704331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.138 [2024-10-13 01:46:41.704345] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.138 [2024-10-13 01:46:41.704357] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.138 [2024-10-13 01:46:41.704386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.138 qpair failed and we were unable to recover it. 00:35:56.398 [2024-10-13 01:46:41.714372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.398 [2024-10-13 01:46:41.714466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.398 [2024-10-13 01:46:41.714498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.398 [2024-10-13 01:46:41.714513] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.398 [2024-10-13 01:46:41.714526] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.398 [2024-10-13 01:46:41.714555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.398 qpair failed and we were unable to recover it. 
00:35:56.398 [2024-10-13 01:46:41.724283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.398 [2024-10-13 01:46:41.724374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.398 [2024-10-13 01:46:41.724399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.398 [2024-10-13 01:46:41.724413] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.398 [2024-10-13 01:46:41.724426] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.398 [2024-10-13 01:46:41.724455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.398 qpair failed and we were unable to recover it. 00:35:56.398 [2024-10-13 01:46:41.734324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.398 [2024-10-13 01:46:41.734408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.398 [2024-10-13 01:46:41.734434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.398 [2024-10-13 01:46:41.734448] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.398 [2024-10-13 01:46:41.734461] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.398 [2024-10-13 01:46:41.734499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.398 qpair failed and we were unable to recover it. 00:35:56.398 [2024-10-13 01:46:41.744366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.398 [2024-10-13 01:46:41.744496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.398 [2024-10-13 01:46:41.744521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.398 [2024-10-13 01:46:41.744535] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.398 [2024-10-13 01:46:41.744553] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.398 [2024-10-13 01:46:41.744584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.398 qpair failed and we were unable to recover it. 
00:35:56.398 [2024-10-13 01:46:41.754395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.398 [2024-10-13 01:46:41.754493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.398 [2024-10-13 01:46:41.754518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.398 [2024-10-13 01:46:41.754532] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.398 [2024-10-13 01:46:41.754545] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.398 [2024-10-13 01:46:41.754574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.398 qpair failed and we were unable to recover it. 00:35:56.398 [2024-10-13 01:46:41.764443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.398 [2024-10-13 01:46:41.764559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.398 [2024-10-13 01:46:41.764583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.398 [2024-10-13 01:46:41.764598] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.398 [2024-10-13 01:46:41.764610] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.398 [2024-10-13 01:46:41.764640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.398 qpair failed and we were unable to recover it. 00:35:56.398 [2024-10-13 01:46:41.774444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.398 [2024-10-13 01:46:41.774543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.398 [2024-10-13 01:46:41.774568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.398 [2024-10-13 01:46:41.774582] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.398 [2024-10-13 01:46:41.774595] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.398 [2024-10-13 01:46:41.774624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.398 qpair failed and we were unable to recover it. 
00:35:56.398 [2024-10-13 01:46:41.784495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.398 [2024-10-13 01:46:41.784610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.398 [2024-10-13 01:46:41.784636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.398 [2024-10-13 01:46:41.784650] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.398 [2024-10-13 01:46:41.784662] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.398 [2024-10-13 01:46:41.784692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.398 qpair failed and we were unable to recover it. 00:35:56.398 [2024-10-13 01:46:41.794566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.398 [2024-10-13 01:46:41.794684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.398 [2024-10-13 01:46:41.794709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.398 [2024-10-13 01:46:41.794724] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.398 [2024-10-13 01:46:41.794737] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.398 [2024-10-13 01:46:41.794766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.398 qpair failed and we were unable to recover it. 00:35:56.398 [2024-10-13 01:46:41.804527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.398 [2024-10-13 01:46:41.804647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.398 [2024-10-13 01:46:41.804671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.398 [2024-10-13 01:46:41.804685] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.398 [2024-10-13 01:46:41.804698] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.398 [2024-10-13 01:46:41.804727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.398 qpair failed and we were unable to recover it. 
00:35:56.398 [2024-10-13 01:46:41.814576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.398 [2024-10-13 01:46:41.814670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.398 [2024-10-13 01:46:41.814699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.398 [2024-10-13 01:46:41.814715] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.398 [2024-10-13 01:46:41.814728] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.398 [2024-10-13 01:46:41.814770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.398 qpair failed and we were unable to recover it. 00:35:56.398 [2024-10-13 01:46:41.824583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.398 [2024-10-13 01:46:41.824664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.399 [2024-10-13 01:46:41.824689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.399 [2024-10-13 01:46:41.824704] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.399 [2024-10-13 01:46:41.824716] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.399 [2024-10-13 01:46:41.824745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.399 qpair failed and we were unable to recover it. 00:35:56.399 [2024-10-13 01:46:41.834764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.399 [2024-10-13 01:46:41.834893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.399 [2024-10-13 01:46:41.834918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.399 [2024-10-13 01:46:41.834937] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.399 [2024-10-13 01:46:41.834951] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.399 [2024-10-13 01:46:41.834980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.399 qpair failed and we were unable to recover it. 
00:35:56.399 [2024-10-13 01:46:41.844675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.399 [2024-10-13 01:46:41.844765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.399 [2024-10-13 01:46:41.844789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.399 [2024-10-13 01:46:41.844804] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.399 [2024-10-13 01:46:41.844817] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.399 [2024-10-13 01:46:41.844846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.399 qpair failed and we were unable to recover it. 00:35:56.399 [2024-10-13 01:46:41.854654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.399 [2024-10-13 01:46:41.854739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.399 [2024-10-13 01:46:41.854764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.399 [2024-10-13 01:46:41.854779] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.399 [2024-10-13 01:46:41.854791] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.399 [2024-10-13 01:46:41.854820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.399 qpair failed and we were unable to recover it. 00:35:56.399 [2024-10-13 01:46:41.864695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.399 [2024-10-13 01:46:41.864783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.399 [2024-10-13 01:46:41.864809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.399 [2024-10-13 01:46:41.864824] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.399 [2024-10-13 01:46:41.864836] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.399 [2024-10-13 01:46:41.864865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.399 qpair failed and we were unable to recover it. 
00:35:56.399 [2024-10-13 01:46:41.874807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.399 [2024-10-13 01:46:41.874947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.399 [2024-10-13 01:46:41.874972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.399 [2024-10-13 01:46:41.874986] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.399 [2024-10-13 01:46:41.874999] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.399 [2024-10-13 01:46:41.875027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.399 qpair failed and we were unable to recover it. 00:35:56.399 [2024-10-13 01:46:41.884780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.399 [2024-10-13 01:46:41.884913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.399 [2024-10-13 01:46:41.884937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.399 [2024-10-13 01:46:41.884951] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.399 [2024-10-13 01:46:41.884965] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.399 [2024-10-13 01:46:41.884994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.399 qpair failed and we were unable to recover it. 00:35:56.399 [2024-10-13 01:46:41.894804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.399 [2024-10-13 01:46:41.894891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.399 [2024-10-13 01:46:41.894916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.399 [2024-10-13 01:46:41.894930] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.399 [2024-10-13 01:46:41.894942] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.399 [2024-10-13 01:46:41.894971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.399 qpair failed and we were unable to recover it. 
00:35:56.399 [2024-10-13 01:46:41.904901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.399 [2024-10-13 01:46:41.904988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.399 [2024-10-13 01:46:41.905014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.399 [2024-10-13 01:46:41.905028] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.399 [2024-10-13 01:46:41.905040] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.399 [2024-10-13 01:46:41.905069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.399 qpair failed and we were unable to recover it. 00:35:56.399 [2024-10-13 01:46:41.914841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.399 [2024-10-13 01:46:41.914931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.399 [2024-10-13 01:46:41.914956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.399 [2024-10-13 01:46:41.914970] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.399 [2024-10-13 01:46:41.914982] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.399 [2024-10-13 01:46:41.915011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.399 qpair failed and we were unable to recover it. 00:35:56.399 [2024-10-13 01:46:41.924925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.399 [2024-10-13 01:46:41.925016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.399 [2024-10-13 01:46:41.925047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.399 [2024-10-13 01:46:41.925065] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.399 [2024-10-13 01:46:41.925078] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.399 [2024-10-13 01:46:41.925119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.399 qpair failed and we were unable to recover it. 
00:35:56.399 [2024-10-13 01:46:41.934935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.399 [2024-10-13 01:46:41.935016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.399 [2024-10-13 01:46:41.935042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.399 [2024-10-13 01:46:41.935057] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.399 [2024-10-13 01:46:41.935069] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.399 [2024-10-13 01:46:41.935098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.399 qpair failed and we were unable to recover it. 00:35:56.399 [2024-10-13 01:46:41.944946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.399 [2024-10-13 01:46:41.945030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.399 [2024-10-13 01:46:41.945055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.399 [2024-10-13 01:46:41.945069] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.399 [2024-10-13 01:46:41.945082] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.399 [2024-10-13 01:46:41.945111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.399 qpair failed and we were unable to recover it. 00:35:56.399 [2024-10-13 01:46:41.954960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.399 [2024-10-13 01:46:41.955051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.399 [2024-10-13 01:46:41.955075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.400 [2024-10-13 01:46:41.955089] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.400 [2024-10-13 01:46:41.955102] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.400 [2024-10-13 01:46:41.955130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.400 qpair failed and we were unable to recover it. 
00:35:56.400 [2024-10-13 01:46:41.964983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.400 [2024-10-13 01:46:41.965072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.400 [2024-10-13 01:46:41.965096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.400 [2024-10-13 01:46:41.965110] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.400 [2024-10-13 01:46:41.965123] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.400 [2024-10-13 01:46:41.965152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.400 qpair failed and we were unable to recover it. 00:35:56.400 [2024-10-13 01:46:41.975047] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.400 [2024-10-13 01:46:41.975130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.400 [2024-10-13 01:46:41.975155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.400 [2024-10-13 01:46:41.975170] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.400 [2024-10-13 01:46:41.975182] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.400 [2024-10-13 01:46:41.975211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.400 qpair failed and we were unable to recover it. 00:35:56.658 [2024-10-13 01:46:41.985056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.658 [2024-10-13 01:46:41.985140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.658 [2024-10-13 01:46:41.985165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.658 [2024-10-13 01:46:41.985180] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.659 [2024-10-13 01:46:41.985193] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.659 [2024-10-13 01:46:41.985222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.659 qpair failed and we were unable to recover it. 
00:35:56.659 [2024-10-13 01:46:41.995074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.659 [2024-10-13 01:46:41.995165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.659 [2024-10-13 01:46:41.995189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.659 [2024-10-13 01:46:41.995203] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.659 [2024-10-13 01:46:41.995216] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.659 [2024-10-13 01:46:41.995245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.659 qpair failed and we were unable to recover it. 00:35:56.659 [2024-10-13 01:46:42.005131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.659 [2024-10-13 01:46:42.005272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.659 [2024-10-13 01:46:42.005299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.659 [2024-10-13 01:46:42.005314] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.659 [2024-10-13 01:46:42.005327] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.659 [2024-10-13 01:46:42.005356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.659 qpair failed and we were unable to recover it. 00:35:56.659 [2024-10-13 01:46:42.015218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.659 [2024-10-13 01:46:42.015300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.659 [2024-10-13 01:46:42.015331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.659 [2024-10-13 01:46:42.015346] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.659 [2024-10-13 01:46:42.015358] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.659 [2024-10-13 01:46:42.015388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.659 qpair failed and we were unable to recover it. 
00:35:56.659 [2024-10-13 01:46:42.025292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.659 [2024-10-13 01:46:42.025421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.659 [2024-10-13 01:46:42.025446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.659 [2024-10-13 01:46:42.025460] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.659 [2024-10-13 01:46:42.025482] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.659 [2024-10-13 01:46:42.025514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.659 qpair failed and we were unable to recover it. 00:35:56.659 [2024-10-13 01:46:42.035211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.659 [2024-10-13 01:46:42.035310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.659 [2024-10-13 01:46:42.035335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.659 [2024-10-13 01:46:42.035349] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.659 [2024-10-13 01:46:42.035362] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.659 [2024-10-13 01:46:42.035391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.659 qpair failed and we were unable to recover it. 00:35:56.659 [2024-10-13 01:46:42.045237] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.659 [2024-10-13 01:46:42.045326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.659 [2024-10-13 01:46:42.045352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.659 [2024-10-13 01:46:42.045367] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.659 [2024-10-13 01:46:42.045379] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.659 [2024-10-13 01:46:42.045420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.659 qpair failed and we were unable to recover it. 
00:35:56.659 [2024-10-13 01:46:42.055327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.659 [2024-10-13 01:46:42.055448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.659 [2024-10-13 01:46:42.055479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.659 [2024-10-13 01:46:42.055496] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.659 [2024-10-13 01:46:42.055508] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.659 [2024-10-13 01:46:42.055544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.659 qpair failed and we were unable to recover it. 00:35:56.659 [2024-10-13 01:46:42.065318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.659 [2024-10-13 01:46:42.065438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.659 [2024-10-13 01:46:42.065463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.659 [2024-10-13 01:46:42.065484] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.659 [2024-10-13 01:46:42.065498] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.659 [2024-10-13 01:46:42.065528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.659 qpair failed and we were unable to recover it. 00:35:56.659 [2024-10-13 01:46:42.075335] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.659 [2024-10-13 01:46:42.075434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.659 [2024-10-13 01:46:42.075459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.659 [2024-10-13 01:46:42.075483] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.659 [2024-10-13 01:46:42.075497] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.659 [2024-10-13 01:46:42.075527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.659 qpair failed and we were unable to recover it. 
00:35:56.659 [2024-10-13 01:46:42.085426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.659 [2024-10-13 01:46:42.085568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.659 [2024-10-13 01:46:42.085593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.659 [2024-10-13 01:46:42.085608] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.659 [2024-10-13 01:46:42.085620] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.659 [2024-10-13 01:46:42.085650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.659 qpair failed and we were unable to recover it. 00:35:56.659 [2024-10-13 01:46:42.095359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.659 [2024-10-13 01:46:42.095443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.659 [2024-10-13 01:46:42.095468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.659 [2024-10-13 01:46:42.095490] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.659 [2024-10-13 01:46:42.095503] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.659 [2024-10-13 01:46:42.095533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.659 qpair failed and we were unable to recover it. 00:35:56.659 [2024-10-13 01:46:42.105386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.659 [2024-10-13 01:46:42.105491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.659 [2024-10-13 01:46:42.105524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.659 [2024-10-13 01:46:42.105540] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.659 [2024-10-13 01:46:42.105553] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.659 [2024-10-13 01:46:42.105582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.659 qpair failed and we were unable to recover it. 
00:35:56.659 [2024-10-13 01:46:42.115420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.659 [2024-10-13 01:46:42.115567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.659 [2024-10-13 01:46:42.115591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.659 [2024-10-13 01:46:42.115605] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.659 [2024-10-13 01:46:42.115618] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.659 [2024-10-13 01:46:42.115647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.659 qpair failed and we were unable to recover it. 00:35:56.659 [2024-10-13 01:46:42.125441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.659 [2024-10-13 01:46:42.125536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.659 [2024-10-13 01:46:42.125561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.659 [2024-10-13 01:46:42.125575] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.659 [2024-10-13 01:46:42.125588] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.660 [2024-10-13 01:46:42.125617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.660 qpair failed and we were unable to recover it. 00:35:56.660 [2024-10-13 01:46:42.135485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.660 [2024-10-13 01:46:42.135623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.660 [2024-10-13 01:46:42.135649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.660 [2024-10-13 01:46:42.135664] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.660 [2024-10-13 01:46:42.135676] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.660 [2024-10-13 01:46:42.135705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.660 qpair failed and we were unable to recover it. 
00:35:56.660 [2024-10-13 01:46:42.145525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.660 [2024-10-13 01:46:42.145616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.660 [2024-10-13 01:46:42.145641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.660 [2024-10-13 01:46:42.145654] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.660 [2024-10-13 01:46:42.145672] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.660 [2024-10-13 01:46:42.145702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.660 qpair failed and we were unable to recover it. 00:35:56.660 [2024-10-13 01:46:42.155564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.660 [2024-10-13 01:46:42.155686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.660 [2024-10-13 01:46:42.155711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.660 [2024-10-13 01:46:42.155725] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.660 [2024-10-13 01:46:42.155737] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.660 [2024-10-13 01:46:42.155766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.660 qpair failed and we were unable to recover it. 00:35:56.660 [2024-10-13 01:46:42.165562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.660 [2024-10-13 01:46:42.165649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.660 [2024-10-13 01:46:42.165673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.660 [2024-10-13 01:46:42.165687] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.660 [2024-10-13 01:46:42.165700] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.660 [2024-10-13 01:46:42.165729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.660 qpair failed and we were unable to recover it. 
00:35:56.660 [2024-10-13 01:46:42.175601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.660 [2024-10-13 01:46:42.175708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.660 [2024-10-13 01:46:42.175732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.660 [2024-10-13 01:46:42.175746] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.660 [2024-10-13 01:46:42.175759] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.660 [2024-10-13 01:46:42.175800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.660 qpair failed and we were unable to recover it. 00:35:56.660 [2024-10-13 01:46:42.185626] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.660 [2024-10-13 01:46:42.185709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.660 [2024-10-13 01:46:42.185734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.660 [2024-10-13 01:46:42.185748] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.660 [2024-10-13 01:46:42.185761] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.660 [2024-10-13 01:46:42.185790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.660 qpair failed and we were unable to recover it. 00:35:56.660 [2024-10-13 01:46:42.195699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.660 [2024-10-13 01:46:42.195846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.660 [2024-10-13 01:46:42.195872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.660 [2024-10-13 01:46:42.195887] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.660 [2024-10-13 01:46:42.195899] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.660 [2024-10-13 01:46:42.195940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.660 qpair failed and we were unable to recover it. 
00:35:56.660 [2024-10-13 01:46:42.205714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.660 [2024-10-13 01:46:42.205800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.660 [2024-10-13 01:46:42.205828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.660 [2024-10-13 01:46:42.205849] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.660 [2024-10-13 01:46:42.205862] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.660 [2024-10-13 01:46:42.205892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.660 qpair failed and we were unable to recover it. 00:35:56.660 [2024-10-13 01:46:42.215688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.660 [2024-10-13 01:46:42.215774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.660 [2024-10-13 01:46:42.215800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.660 [2024-10-13 01:46:42.215814] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.660 [2024-10-13 01:46:42.215827] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.660 [2024-10-13 01:46:42.215856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.660 qpair failed and we were unable to recover it. 00:35:56.660 [2024-10-13 01:46:42.225732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.660 [2024-10-13 01:46:42.225819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.660 [2024-10-13 01:46:42.225844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.660 [2024-10-13 01:46:42.225858] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.660 [2024-10-13 01:46:42.225870] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.660 [2024-10-13 01:46:42.225899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.660 qpair failed and we were unable to recover it. 
00:35:56.660 [2024-10-13 01:46:42.235770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.660 [2024-10-13 01:46:42.235864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.660 [2024-10-13 01:46:42.235889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.660 [2024-10-13 01:46:42.235904] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.660 [2024-10-13 01:46:42.235922] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.660 [2024-10-13 01:46:42.235954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.660 qpair failed and we were unable to recover it. 00:35:56.920 [2024-10-13 01:46:42.245801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.920 [2024-10-13 01:46:42.245932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.920 [2024-10-13 01:46:42.245959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.920 [2024-10-13 01:46:42.245974] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.920 [2024-10-13 01:46:42.245986] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.920 [2024-10-13 01:46:42.246015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.920 qpair failed and we were unable to recover it. 00:35:56.920 [2024-10-13 01:46:42.255853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.920 [2024-10-13 01:46:42.255939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.920 [2024-10-13 01:46:42.255964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.920 [2024-10-13 01:46:42.255978] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.920 [2024-10-13 01:46:42.255990] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.920 [2024-10-13 01:46:42.256019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.920 qpair failed and we were unable to recover it. 
00:35:56.920 [2024-10-13 01:46:42.265823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.920 [2024-10-13 01:46:42.265905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.920 [2024-10-13 01:46:42.265929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.920 [2024-10-13 01:46:42.265943] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.920 [2024-10-13 01:46:42.265956] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.920 [2024-10-13 01:46:42.265985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.920 qpair failed and we were unable to recover it. 00:35:56.920 [2024-10-13 01:46:42.275901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.920 [2024-10-13 01:46:42.275993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.920 [2024-10-13 01:46:42.276018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.920 [2024-10-13 01:46:42.276033] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.920 [2024-10-13 01:46:42.276045] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.920 [2024-10-13 01:46:42.276087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.920 qpair failed and we were unable to recover it. 00:35:56.920 [2024-10-13 01:46:42.285943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.920 [2024-10-13 01:46:42.286034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.920 [2024-10-13 01:46:42.286059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.920 [2024-10-13 01:46:42.286073] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.920 [2024-10-13 01:46:42.286085] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.920 [2024-10-13 01:46:42.286115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.920 qpair failed and we were unable to recover it. 
00:35:56.920 [2024-10-13 01:46:42.295958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.920 [2024-10-13 01:46:42.296046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.920 [2024-10-13 01:46:42.296073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.920 [2024-10-13 01:46:42.296088] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.920 [2024-10-13 01:46:42.296101] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.920 [2024-10-13 01:46:42.296131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.920 qpair failed and we were unable to recover it. 00:35:56.920 [2024-10-13 01:46:42.306083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.920 [2024-10-13 01:46:42.306173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.920 [2024-10-13 01:46:42.306198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.920 [2024-10-13 01:46:42.306211] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.920 [2024-10-13 01:46:42.306224] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.920 [2024-10-13 01:46:42.306253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.920 qpair failed and we were unable to recover it. 00:35:56.920 [2024-10-13 01:46:42.316019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.920 [2024-10-13 01:46:42.316109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.920 [2024-10-13 01:46:42.316133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.920 [2024-10-13 01:46:42.316147] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.920 [2024-10-13 01:46:42.316160] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.920 [2024-10-13 01:46:42.316204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.920 qpair failed and we were unable to recover it. 
00:35:56.920 [2024-10-13 01:46:42.326092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.920 [2024-10-13 01:46:42.326173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.920 [2024-10-13 01:46:42.326197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.920 [2024-10-13 01:46:42.326216] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.920 [2024-10-13 01:46:42.326230] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.920 [2024-10-13 01:46:42.326258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.920 qpair failed and we were unable to recover it. 00:35:56.920 [2024-10-13 01:46:42.336058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.920 [2024-10-13 01:46:42.336153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.920 [2024-10-13 01:46:42.336182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.920 [2024-10-13 01:46:42.336198] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.920 [2024-10-13 01:46:42.336210] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.920 [2024-10-13 01:46:42.336240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.920 qpair failed and we were unable to recover it. 00:35:56.920 [2024-10-13 01:46:42.346087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.920 [2024-10-13 01:46:42.346220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.920 [2024-10-13 01:46:42.346247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.920 [2024-10-13 01:46:42.346262] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.920 [2024-10-13 01:46:42.346274] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.920 [2024-10-13 01:46:42.346303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.920 qpair failed and we were unable to recover it. 
00:35:56.920 [2024-10-13 01:46:42.356104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.920 [2024-10-13 01:46:42.356193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.920 [2024-10-13 01:46:42.356218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.920 [2024-10-13 01:46:42.356231] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.920 [2024-10-13 01:46:42.356244] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.920 [2024-10-13 01:46:42.356273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.920 qpair failed and we were unable to recover it. 00:35:56.920 [2024-10-13 01:46:42.366147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.920 [2024-10-13 01:46:42.366268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.920 [2024-10-13 01:46:42.366293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.920 [2024-10-13 01:46:42.366306] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.920 [2024-10-13 01:46:42.366318] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.920 [2024-10-13 01:46:42.366347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.920 qpair failed and we were unable to recover it. 00:35:56.920 [2024-10-13 01:46:42.376174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.920 [2024-10-13 01:46:42.376257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.920 [2024-10-13 01:46:42.376281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.921 [2024-10-13 01:46:42.376295] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.921 [2024-10-13 01:46:42.376307] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.921 [2024-10-13 01:46:42.376336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.921 qpair failed and we were unable to recover it. 
00:35:56.921 [2024-10-13 01:46:42.386282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.921 [2024-10-13 01:46:42.386397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.921 [2024-10-13 01:46:42.386455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.921 [2024-10-13 01:46:42.386486] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.921 [2024-10-13 01:46:42.386502] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:35:56.921 [2024-10-13 01:46:42.386546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:35:56.921 qpair failed and we were unable to recover it. 00:35:56.921 [2024-10-13 01:46:42.396245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.921 [2024-10-13 01:46:42.396341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.921 [2024-10-13 01:46:42.396371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.921 [2024-10-13 01:46:42.396386] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.921 [2024-10-13 01:46:42.396398] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:56.921 [2024-10-13 01:46:42.396428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.921 qpair failed and we were unable to recover it. 00:35:56.921 [2024-10-13 01:46:42.406292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.921 [2024-10-13 01:46:42.406384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.921 [2024-10-13 01:46:42.406410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.921 [2024-10-13 01:46:42.406425] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.921 [2024-10-13 01:46:42.406437] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:56.921 [2024-10-13 01:46:42.406466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.921 qpair failed and we were unable to recover it. 
00:35:56.921 [2024-10-13 01:46:42.416291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.921 [2024-10-13 01:46:42.416416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.921 [2024-10-13 01:46:42.416444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.921 [2024-10-13 01:46:42.416468] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.921 [2024-10-13 01:46:42.416500] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:56.921 [2024-10-13 01:46:42.416533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.921 qpair failed and we were unable to recover it. 00:35:56.921 [2024-10-13 01:46:42.426301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.921 [2024-10-13 01:46:42.426391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.921 [2024-10-13 01:46:42.426417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.921 [2024-10-13 01:46:42.426431] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.921 [2024-10-13 01:46:42.426443] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:56.921 [2024-10-13 01:46:42.426480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.921 qpair failed and we were unable to recover it. 00:35:56.921 [2024-10-13 01:46:42.436343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.921 [2024-10-13 01:46:42.436436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.921 [2024-10-13 01:46:42.436461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.921 [2024-10-13 01:46:42.436481] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.921 [2024-10-13 01:46:42.436495] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:56.921 [2024-10-13 01:46:42.436524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.921 qpair failed and we were unable to recover it. 
00:35:56.921 [2024-10-13 01:46:42.446358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.921 [2024-10-13 01:46:42.446448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.921 [2024-10-13 01:46:42.446481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.921 [2024-10-13 01:46:42.446497] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.921 [2024-10-13 01:46:42.446510] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:56.921 [2024-10-13 01:46:42.446539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.921 qpair failed and we were unable to recover it. 00:35:56.921 [2024-10-13 01:46:42.456392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.921 [2024-10-13 01:46:42.456501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.921 [2024-10-13 01:46:42.456529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.921 [2024-10-13 01:46:42.456544] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.921 [2024-10-13 01:46:42.456557] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:56.921 [2024-10-13 01:46:42.456587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.921 qpair failed and we were unable to recover it. 00:35:56.921 [2024-10-13 01:46:42.466418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.921 [2024-10-13 01:46:42.466505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.921 [2024-10-13 01:46:42.466531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.921 [2024-10-13 01:46:42.466545] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.921 [2024-10-13 01:46:42.466557] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:56.921 [2024-10-13 01:46:42.466586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.921 qpair failed and we were unable to recover it. 
00:35:56.921 [2024-10-13 01:46:42.476456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.921 [2024-10-13 01:46:42.476599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.921 [2024-10-13 01:46:42.476626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.921 [2024-10-13 01:46:42.476641] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.921 [2024-10-13 01:46:42.476654] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:56.921 [2024-10-13 01:46:42.476683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.921 qpair failed and we were unable to recover it. 00:35:56.921 [2024-10-13 01:46:42.486491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.921 [2024-10-13 01:46:42.486578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.921 [2024-10-13 01:46:42.486607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.921 [2024-10-13 01:46:42.486622] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.921 [2024-10-13 01:46:42.486635] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:56.921 [2024-10-13 01:46:42.486663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.921 qpair failed and we were unable to recover it. 00:35:56.921 [2024-10-13 01:46:42.496532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.921 [2024-10-13 01:46:42.496645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.921 [2024-10-13 01:46:42.496675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.921 [2024-10-13 01:46:42.496690] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.921 [2024-10-13 01:46:42.496702] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:56.921 [2024-10-13 01:46:42.496733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:56.921 qpair failed and we were unable to recover it. 
00:35:57.180 [2024-10-13 01:46:42.506551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.180 [2024-10-13 01:46:42.506639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.180 [2024-10-13 01:46:42.506671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.180 [2024-10-13 01:46:42.506687] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.180 [2024-10-13 01:46:42.506700] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.180 [2024-10-13 01:46:42.506729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.181 qpair failed and we were unable to recover it. 00:35:57.181 [2024-10-13 01:46:42.516567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.181 [2024-10-13 01:46:42.516656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.181 [2024-10-13 01:46:42.516682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.181 [2024-10-13 01:46:42.516696] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.181 [2024-10-13 01:46:42.516708] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.181 [2024-10-13 01:46:42.516736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.181 qpair failed and we were unable to recover it. 00:35:57.181 [2024-10-13 01:46:42.526636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.181 [2024-10-13 01:46:42.526762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.181 [2024-10-13 01:46:42.526789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.181 [2024-10-13 01:46:42.526803] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.181 [2024-10-13 01:46:42.526816] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.181 [2024-10-13 01:46:42.526844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.181 qpair failed and we were unable to recover it. 
00:35:57.181 [2024-10-13 01:46:42.536617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.181 [2024-10-13 01:46:42.536697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.181 [2024-10-13 01:46:42.536721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.181 [2024-10-13 01:46:42.536734] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.181 [2024-10-13 01:46:42.536747] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.181 [2024-10-13 01:46:42.536774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.181 qpair failed and we were unable to recover it. 00:35:57.181 [2024-10-13 01:46:42.546670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.181 [2024-10-13 01:46:42.546788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.181 [2024-10-13 01:46:42.546815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.181 [2024-10-13 01:46:42.546829] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.181 [2024-10-13 01:46:42.546841] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.181 [2024-10-13 01:46:42.546869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.181 qpair failed and we were unable to recover it. 00:35:57.181 [2024-10-13 01:46:42.556682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.181 [2024-10-13 01:46:42.556773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.181 [2024-10-13 01:46:42.556798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.181 [2024-10-13 01:46:42.556812] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.181 [2024-10-13 01:46:42.556824] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.181 [2024-10-13 01:46:42.556852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.181 qpair failed and we were unable to recover it. 
00:35:57.181 [2024-10-13 01:46:42.566696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.181 [2024-10-13 01:46:42.566784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.181 [2024-10-13 01:46:42.566808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.181 [2024-10-13 01:46:42.566823] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.181 [2024-10-13 01:46:42.566835] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.181 [2024-10-13 01:46:42.566864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.181 qpair failed and we were unable to recover it. 00:35:57.181 [2024-10-13 01:46:42.576781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.181 [2024-10-13 01:46:42.576869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.181 [2024-10-13 01:46:42.576893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.181 [2024-10-13 01:46:42.576907] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.181 [2024-10-13 01:46:42.576920] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.181 [2024-10-13 01:46:42.576948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.181 qpair failed and we were unable to recover it. 00:35:57.181 [2024-10-13 01:46:42.586756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.181 [2024-10-13 01:46:42.586876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.181 [2024-10-13 01:46:42.586902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.181 [2024-10-13 01:46:42.586916] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.181 [2024-10-13 01:46:42.586928] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.181 [2024-10-13 01:46:42.586956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.181 qpair failed and we were unable to recover it. 
00:35:57.181 [2024-10-13 01:46:42.596793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.181 [2024-10-13 01:46:42.596887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.181 [2024-10-13 01:46:42.596916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.181 [2024-10-13 01:46:42.596931] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.181 [2024-10-13 01:46:42.596943] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.181 [2024-10-13 01:46:42.596972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.181 qpair failed and we were unable to recover it. 00:35:57.181 [2024-10-13 01:46:42.606929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.181 [2024-10-13 01:46:42.607022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.181 [2024-10-13 01:46:42.607046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.181 [2024-10-13 01:46:42.607060] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.181 [2024-10-13 01:46:42.607072] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.181 [2024-10-13 01:46:42.607101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.181 qpair failed and we were unable to recover it. 00:35:57.181 [2024-10-13 01:46:42.616873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.181 [2024-10-13 01:46:42.616958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.181 [2024-10-13 01:46:42.616982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.181 [2024-10-13 01:46:42.616996] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.181 [2024-10-13 01:46:42.617008] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.181 [2024-10-13 01:46:42.617036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.181 qpair failed and we were unable to recover it. 
00:35:57.181 [2024-10-13 01:46:42.626914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.181 [2024-10-13 01:46:42.626999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.181 [2024-10-13 01:46:42.627025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.181 [2024-10-13 01:46:42.627039] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.181 [2024-10-13 01:46:42.627051] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.181 [2024-10-13 01:46:42.627080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.181 qpair failed and we were unable to recover it. 00:35:57.181 [2024-10-13 01:46:42.636925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.181 [2024-10-13 01:46:42.637019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.181 [2024-10-13 01:46:42.637044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.181 [2024-10-13 01:46:42.637059] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.181 [2024-10-13 01:46:42.637071] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.181 [2024-10-13 01:46:42.637105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.181 qpair failed and we were unable to recover it. 00:35:57.181 [2024-10-13 01:46:42.646974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.181 [2024-10-13 01:46:42.647090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.181 [2024-10-13 01:46:42.647117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.181 [2024-10-13 01:46:42.647132] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.181 [2024-10-13 01:46:42.647145] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.181 [2024-10-13 01:46:42.647173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.181 qpair failed and we were unable to recover it. 
00:35:57.181 [2024-10-13 01:46:42.656979] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.182 [2024-10-13 01:46:42.657072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.182 [2024-10-13 01:46:42.657098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.182 [2024-10-13 01:46:42.657113] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.182 [2024-10-13 01:46:42.657125] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.182 [2024-10-13 01:46:42.657155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.182 qpair failed and we were unable to recover it. 00:35:57.182 [2024-10-13 01:46:42.667003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.182 [2024-10-13 01:46:42.667096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.182 [2024-10-13 01:46:42.667121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.182 [2024-10-13 01:46:42.667135] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.182 [2024-10-13 01:46:42.667148] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.182 [2024-10-13 01:46:42.667176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.182 qpair failed and we were unable to recover it. 00:35:57.182 [2024-10-13 01:46:42.677022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.182 [2024-10-13 01:46:42.677120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.182 [2024-10-13 01:46:42.677146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.182 [2024-10-13 01:46:42.677161] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.182 [2024-10-13 01:46:42.677173] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.182 [2024-10-13 01:46:42.677202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.182 qpair failed and we were unable to recover it. 
00:35:57.182 [2024-10-13 01:46:42.687067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.182 [2024-10-13 01:46:42.687162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.182 [2024-10-13 01:46:42.687194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.182 [2024-10-13 01:46:42.687209] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.182 [2024-10-13 01:46:42.687222] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.182 [2024-10-13 01:46:42.687250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.182 qpair failed and we were unable to recover it. 00:35:57.182 [2024-10-13 01:46:42.697080] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.182 [2024-10-13 01:46:42.697175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.182 [2024-10-13 01:46:42.697205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.182 [2024-10-13 01:46:42.697219] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.182 [2024-10-13 01:46:42.697231] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.182 [2024-10-13 01:46:42.697261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.182 qpair failed and we were unable to recover it. 00:35:57.182 [2024-10-13 01:46:42.707112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.182 [2024-10-13 01:46:42.707205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.182 [2024-10-13 01:46:42.707231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.182 [2024-10-13 01:46:42.707245] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.182 [2024-10-13 01:46:42.707257] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.182 [2024-10-13 01:46:42.707285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.182 qpair failed and we were unable to recover it. 
00:35:57.182 [2024-10-13 01:46:42.717124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.182 [2024-10-13 01:46:42.717217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.182 [2024-10-13 01:46:42.717242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.182 [2024-10-13 01:46:42.717257] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.182 [2024-10-13 01:46:42.717270] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.182 [2024-10-13 01:46:42.717298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.182 qpair failed and we were unable to recover it. 00:35:57.182 [2024-10-13 01:46:42.727137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.182 [2024-10-13 01:46:42.727229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.182 [2024-10-13 01:46:42.727255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.182 [2024-10-13 01:46:42.727269] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.182 [2024-10-13 01:46:42.727282] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.182 [2024-10-13 01:46:42.727316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.182 qpair failed and we were unable to recover it. 00:35:57.182 [2024-10-13 01:46:42.737209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.182 [2024-10-13 01:46:42.737341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.182 [2024-10-13 01:46:42.737368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.182 [2024-10-13 01:46:42.737383] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.182 [2024-10-13 01:46:42.737395] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.182 [2024-10-13 01:46:42.737424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.182 qpair failed and we were unable to recover it. 
00:35:57.182 [2024-10-13 01:46:42.747192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.182 [2024-10-13 01:46:42.747281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.182 [2024-10-13 01:46:42.747306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.182 [2024-10-13 01:46:42.747320] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.182 [2024-10-13 01:46:42.747333] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.182 [2024-10-13 01:46:42.747361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.182 qpair failed and we were unable to recover it. 00:35:57.182 [2024-10-13 01:46:42.757268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.182 [2024-10-13 01:46:42.757382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.182 [2024-10-13 01:46:42.757411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.182 [2024-10-13 01:46:42.757426] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.182 [2024-10-13 01:46:42.757439] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.182 [2024-10-13 01:46:42.757478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.182 qpair failed and we were unable to recover it. 00:35:57.441 [2024-10-13 01:46:42.767279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.441 [2024-10-13 01:46:42.767385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.441 [2024-10-13 01:46:42.767412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.441 [2024-10-13 01:46:42.767427] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.441 [2024-10-13 01:46:42.767439] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.441 [2024-10-13 01:46:42.767468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.441 qpair failed and we were unable to recover it. 
00:35:57.441 [2024-10-13 01:46:42.777287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.441 [2024-10-13 01:46:42.777384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.441 [2024-10-13 01:46:42.777420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.441 [2024-10-13 01:46:42.777436] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.441 [2024-10-13 01:46:42.777449] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.441 [2024-10-13 01:46:42.777497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.441 qpair failed and we were unable to recover it. 00:35:57.441 [2024-10-13 01:46:42.787353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.441 [2024-10-13 01:46:42.787480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.441 [2024-10-13 01:46:42.787512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.441 [2024-10-13 01:46:42.787529] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.441 [2024-10-13 01:46:42.787541] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.441 [2024-10-13 01:46:42.787570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.441 qpair failed and we were unable to recover it. 00:35:57.441 [2024-10-13 01:46:42.797386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.441 [2024-10-13 01:46:42.797497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.441 [2024-10-13 01:46:42.797522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.441 [2024-10-13 01:46:42.797536] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.442 [2024-10-13 01:46:42.797549] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.442 [2024-10-13 01:46:42.797578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.442 qpair failed and we were unable to recover it. 
00:35:57.442 [2024-10-13 01:46:42.807389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.442 [2024-10-13 01:46:42.807527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.442 [2024-10-13 01:46:42.807554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.442 [2024-10-13 01:46:42.807569] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.442 [2024-10-13 01:46:42.807581] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.442 [2024-10-13 01:46:42.807610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.442 qpair failed and we were unable to recover it. 00:35:57.442 [2024-10-13 01:46:42.817449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.442 [2024-10-13 01:46:42.817594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.442 [2024-10-13 01:46:42.817622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.442 [2024-10-13 01:46:42.817636] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.442 [2024-10-13 01:46:42.817648] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.442 [2024-10-13 01:46:42.817682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.442 qpair failed and we were unable to recover it. 00:35:57.442 [2024-10-13 01:46:42.827530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.442 [2024-10-13 01:46:42.827629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.442 [2024-10-13 01:46:42.827657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.442 [2024-10-13 01:46:42.827672] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.442 [2024-10-13 01:46:42.827684] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.442 [2024-10-13 01:46:42.827714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.442 qpair failed and we were unable to recover it. 
00:35:57.442 [2024-10-13 01:46:42.837462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.442 [2024-10-13 01:46:42.837578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.442 [2024-10-13 01:46:42.837604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.442 [2024-10-13 01:46:42.837618] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.442 [2024-10-13 01:46:42.837630] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.442 [2024-10-13 01:46:42.837659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.442 qpair failed and we were unable to recover it. 00:35:57.442 [2024-10-13 01:46:42.847503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.442 [2024-10-13 01:46:42.847620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.442 [2024-10-13 01:46:42.847647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.442 [2024-10-13 01:46:42.847661] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.442 [2024-10-13 01:46:42.847673] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.442 [2024-10-13 01:46:42.847702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.442 qpair failed and we were unable to recover it. 00:35:57.442 [2024-10-13 01:46:42.857537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.442 [2024-10-13 01:46:42.857659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.442 [2024-10-13 01:46:42.857686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.442 [2024-10-13 01:46:42.857699] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.442 [2024-10-13 01:46:42.857712] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.442 [2024-10-13 01:46:42.857741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.442 qpair failed and we were unable to recover it. 
00:35:57.442 [2024-10-13 01:46:42.867570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.442 [2024-10-13 01:46:42.867692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.442 [2024-10-13 01:46:42.867722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.442 [2024-10-13 01:46:42.867736] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.442 [2024-10-13 01:46:42.867748] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.442 [2024-10-13 01:46:42.867777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.442 qpair failed and we were unable to recover it. 00:35:57.442 [2024-10-13 01:46:42.877579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.442 [2024-10-13 01:46:42.877681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.442 [2024-10-13 01:46:42.877708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.442 [2024-10-13 01:46:42.877722] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.442 [2024-10-13 01:46:42.877735] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.442 [2024-10-13 01:46:42.877764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.442 qpair failed and we were unable to recover it. 00:35:57.442 [2024-10-13 01:46:42.887589] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.442 [2024-10-13 01:46:42.887695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.442 [2024-10-13 01:46:42.887722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.442 [2024-10-13 01:46:42.887737] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.442 [2024-10-13 01:46:42.887749] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.442 [2024-10-13 01:46:42.887777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.442 qpair failed and we were unable to recover it. 
00:35:57.442 [2024-10-13 01:46:42.897671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.442 [2024-10-13 01:46:42.897794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.442 [2024-10-13 01:46:42.897821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.442 [2024-10-13 01:46:42.897835] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.442 [2024-10-13 01:46:42.897847] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.442 [2024-10-13 01:46:42.897875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.442 qpair failed and we were unable to recover it. 00:35:57.442 [2024-10-13 01:46:42.907665] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.442 [2024-10-13 01:46:42.907751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.442 [2024-10-13 01:46:42.907775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.442 [2024-10-13 01:46:42.907789] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.442 [2024-10-13 01:46:42.907801] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.442 [2024-10-13 01:46:42.907835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.442 qpair failed and we were unable to recover it. 00:35:57.442 [2024-10-13 01:46:42.917690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.442 [2024-10-13 01:46:42.917819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.442 [2024-10-13 01:46:42.917844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.442 [2024-10-13 01:46:42.917859] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.442 [2024-10-13 01:46:42.917871] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.442 [2024-10-13 01:46:42.917899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.442 qpair failed and we were unable to recover it. 
00:35:57.442 [2024-10-13 01:46:42.927693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.442 [2024-10-13 01:46:42.927788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.442 [2024-10-13 01:46:42.927813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.442 [2024-10-13 01:46:42.927827] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.442 [2024-10-13 01:46:42.927840] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.442 [2024-10-13 01:46:42.927868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.442 qpair failed and we were unable to recover it. 00:35:57.442 [2024-10-13 01:46:42.937750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.442 [2024-10-13 01:46:42.937846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.442 [2024-10-13 01:46:42.937872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.442 [2024-10-13 01:46:42.937886] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.442 [2024-10-13 01:46:42.937898] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.443 [2024-10-13 01:46:42.937926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.443 qpair failed and we were unable to recover it. 00:35:57.443 [2024-10-13 01:46:42.947743] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.443 [2024-10-13 01:46:42.947857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.443 [2024-10-13 01:46:42.947883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.443 [2024-10-13 01:46:42.947897] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.443 [2024-10-13 01:46:42.947909] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.443 [2024-10-13 01:46:42.947937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.443 qpair failed and we were unable to recover it. 
00:35:57.443 [2024-10-13 01:46:42.957919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.443 [2024-10-13 01:46:42.958021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.443 [2024-10-13 01:46:42.958052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.443 [2024-10-13 01:46:42.958067] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.443 [2024-10-13 01:46:42.958079] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.443 [2024-10-13 01:46:42.958108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.443 qpair failed and we were unable to recover it. 00:35:57.443 [2024-10-13 01:46:42.967825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.443 [2024-10-13 01:46:42.967937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.443 [2024-10-13 01:46:42.967963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.443 [2024-10-13 01:46:42.967977] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.443 [2024-10-13 01:46:42.967989] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.443 [2024-10-13 01:46:42.968018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.443 qpair failed and we were unable to recover it. 00:35:57.443 [2024-10-13 01:46:42.977873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.443 [2024-10-13 01:46:42.977970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.443 [2024-10-13 01:46:42.977996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.443 [2024-10-13 01:46:42.978010] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.443 [2024-10-13 01:46:42.978022] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.443 [2024-10-13 01:46:42.978049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.443 qpair failed and we were unable to recover it. 
00:35:57.443 [2024-10-13 01:46:42.987867] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.443 [2024-10-13 01:46:42.988003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.443 [2024-10-13 01:46:42.988029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.443 [2024-10-13 01:46:42.988043] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.443 [2024-10-13 01:46:42.988055] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.443 [2024-10-13 01:46:42.988083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.443 qpair failed and we were unable to recover it. 00:35:57.443 [2024-10-13 01:46:42.997948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.443 [2024-10-13 01:46:42.998045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.443 [2024-10-13 01:46:42.998069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.443 [2024-10-13 01:46:42.998083] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.443 [2024-10-13 01:46:42.998100] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.443 [2024-10-13 01:46:42.998129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.443 qpair failed and we were unable to recover it. 00:35:57.443 [2024-10-13 01:46:43.007964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.443 [2024-10-13 01:46:43.008062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.443 [2024-10-13 01:46:43.008088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.443 [2024-10-13 01:46:43.008102] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.443 [2024-10-13 01:46:43.008114] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.443 [2024-10-13 01:46:43.008143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.443 qpair failed and we were unable to recover it. 
00:35:57.443 [2024-10-13 01:46:43.017947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.443 [2024-10-13 01:46:43.018036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.443 [2024-10-13 01:46:43.018063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.443 [2024-10-13 01:46:43.018078] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.443 [2024-10-13 01:46:43.018091] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.443 [2024-10-13 01:46:43.018121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.443 qpair failed and we were unable to recover it. 00:35:57.702 [2024-10-13 01:46:43.028091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.702 [2024-10-13 01:46:43.028189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.702 [2024-10-13 01:46:43.028221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.702 [2024-10-13 01:46:43.028237] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.702 [2024-10-13 01:46:43.028250] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.702 [2024-10-13 01:46:43.028280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.702 qpair failed and we were unable to recover it. 00:35:57.702 [2024-10-13 01:46:43.038033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.702 [2024-10-13 01:46:43.038134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.702 [2024-10-13 01:46:43.038164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.702 [2024-10-13 01:46:43.038180] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.702 [2024-10-13 01:46:43.038193] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.702 [2024-10-13 01:46:43.038222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.702 qpair failed and we were unable to recover it. 
00:35:57.702 [2024-10-13 01:46:43.048017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.702 [2024-10-13 01:46:43.048116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.702 [2024-10-13 01:46:43.048140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.702 [2024-10-13 01:46:43.048154] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.702 [2024-10-13 01:46:43.048166] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.702 [2024-10-13 01:46:43.048194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.702 qpair failed and we were unable to recover it. 00:35:57.702 [2024-10-13 01:46:43.058080] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.702 [2024-10-13 01:46:43.058208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.702 [2024-10-13 01:46:43.058235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.702 [2024-10-13 01:46:43.058249] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.702 [2024-10-13 01:46:43.058261] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.702 [2024-10-13 01:46:43.058289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.702 qpair failed and we were unable to recover it. 00:35:57.702 [2024-10-13 01:46:43.068102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.702 [2024-10-13 01:46:43.068229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.702 [2024-10-13 01:46:43.068255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.702 [2024-10-13 01:46:43.068269] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.702 [2024-10-13 01:46:43.068282] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.702 [2024-10-13 01:46:43.068309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.702 qpair failed and we were unable to recover it. 
00:35:57.702 [2024-10-13 01:46:43.078179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.702 [2024-10-13 01:46:43.078285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.702 [2024-10-13 01:46:43.078315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.702 [2024-10-13 01:46:43.078331] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.702 [2024-10-13 01:46:43.078343] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.702 [2024-10-13 01:46:43.078373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.702 qpair failed and we were unable to recover it. 00:35:57.702 [2024-10-13 01:46:43.088165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.702 [2024-10-13 01:46:43.088263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.702 [2024-10-13 01:46:43.088290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.702 [2024-10-13 01:46:43.088305] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.702 [2024-10-13 01:46:43.088322] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.702 [2024-10-13 01:46:43.088351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.702 qpair failed and we were unable to recover it. 00:35:57.702 [2024-10-13 01:46:43.098209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.702 [2024-10-13 01:46:43.098307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.702 [2024-10-13 01:46:43.098333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.702 [2024-10-13 01:46:43.098348] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.702 [2024-10-13 01:46:43.098360] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.702 [2024-10-13 01:46:43.098388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.702 qpair failed and we were unable to recover it. 
00:35:57.702 [2024-10-13 01:46:43.108215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.702 [2024-10-13 01:46:43.108349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.702 [2024-10-13 01:46:43.108375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.702 [2024-10-13 01:46:43.108389] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.702 [2024-10-13 01:46:43.108402] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.702 [2024-10-13 01:46:43.108430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.702 qpair failed and we were unable to recover it. 00:35:57.702 [2024-10-13 01:46:43.118228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.702 [2024-10-13 01:46:43.118314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.702 [2024-10-13 01:46:43.118337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.702 [2024-10-13 01:46:43.118351] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.702 [2024-10-13 01:46:43.118363] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.702 [2024-10-13 01:46:43.118392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.702 qpair failed and we were unable to recover it. 00:35:57.702 [2024-10-13 01:46:43.128326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.702 [2024-10-13 01:46:43.128419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.702 [2024-10-13 01:46:43.128443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.703 [2024-10-13 01:46:43.128457] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.703 [2024-10-13 01:46:43.128476] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.703 [2024-10-13 01:46:43.128507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.703 qpair failed and we were unable to recover it. 
00:35:57.703 [2024-10-13 01:46:43.138288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.703 [2024-10-13 01:46:43.138393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.703 [2024-10-13 01:46:43.138419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.703 [2024-10-13 01:46:43.138434] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.703 [2024-10-13 01:46:43.138445] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.703 [2024-10-13 01:46:43.138480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.703 qpair failed and we were unable to recover it. 00:35:57.703 [2024-10-13 01:46:43.148324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.703 [2024-10-13 01:46:43.148413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.703 [2024-10-13 01:46:43.148437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.703 [2024-10-13 01:46:43.148451] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.703 [2024-10-13 01:46:43.148464] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.703 [2024-10-13 01:46:43.148500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.703 qpair failed and we were unable to recover it. 00:35:57.703 [2024-10-13 01:46:43.158381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.703 [2024-10-13 01:46:43.158504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.703 [2024-10-13 01:46:43.158531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.703 [2024-10-13 01:46:43.158545] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.703 [2024-10-13 01:46:43.158558] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.703 [2024-10-13 01:46:43.158586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.703 qpair failed and we were unable to recover it. 
00:35:57.703 [2024-10-13 01:46:43.168397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.703 [2024-10-13 01:46:43.168532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.703 [2024-10-13 01:46:43.168559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.703 [2024-10-13 01:46:43.168573] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.703 [2024-10-13 01:46:43.168586] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.703 [2024-10-13 01:46:43.168614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.703 qpair failed and we were unable to recover it. 00:35:57.703 [2024-10-13 01:46:43.178398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.703 [2024-10-13 01:46:43.178515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.703 [2024-10-13 01:46:43.178540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.703 [2024-10-13 01:46:43.178555] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.703 [2024-10-13 01:46:43.178575] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.703 [2024-10-13 01:46:43.178604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.703 qpair failed and we were unable to recover it. 00:35:57.703 [2024-10-13 01:46:43.188450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.703 [2024-10-13 01:46:43.188583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.703 [2024-10-13 01:46:43.188609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.703 [2024-10-13 01:46:43.188623] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.703 [2024-10-13 01:46:43.188634] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.703 [2024-10-13 01:46:43.188662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.703 qpair failed and we were unable to recover it. 
00:35:57.703 [2024-10-13 01:46:43.198516] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.703 [2024-10-13 01:46:43.198620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.703 [2024-10-13 01:46:43.198644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.703 [2024-10-13 01:46:43.198658] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.703 [2024-10-13 01:46:43.198671] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.703 [2024-10-13 01:46:43.198698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.703 qpair failed and we were unable to recover it. 00:35:57.703 [2024-10-13 01:46:43.208506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.703 [2024-10-13 01:46:43.208590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.703 [2024-10-13 01:46:43.208620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.703 [2024-10-13 01:46:43.208635] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.703 [2024-10-13 01:46:43.208647] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.703 [2024-10-13 01:46:43.208675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.703 qpair failed and we were unable to recover it. 00:35:57.703 [2024-10-13 01:46:43.218530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.703 [2024-10-13 01:46:43.218616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.703 [2024-10-13 01:46:43.218641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.703 [2024-10-13 01:46:43.218655] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.703 [2024-10-13 01:46:43.218667] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.703 [2024-10-13 01:46:43.218695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.703 qpair failed and we were unable to recover it. 
00:35:57.703 [2024-10-13 01:46:43.228554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.703 [2024-10-13 01:46:43.228672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.703 [2024-10-13 01:46:43.228698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.703 [2024-10-13 01:46:43.228712] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.703 [2024-10-13 01:46:43.228724] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.703 [2024-10-13 01:46:43.228752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.703 qpair failed and we were unable to recover it. 00:35:57.703 [2024-10-13 01:46:43.238620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.703 [2024-10-13 01:46:43.238753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.703 [2024-10-13 01:46:43.238779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.703 [2024-10-13 01:46:43.238793] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.703 [2024-10-13 01:46:43.238805] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.703 [2024-10-13 01:46:43.238833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.703 qpair failed and we were unable to recover it. 00:35:57.703 [2024-10-13 01:46:43.248631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.703 [2024-10-13 01:46:43.248726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.703 [2024-10-13 01:46:43.248752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.703 [2024-10-13 01:46:43.248766] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.703 [2024-10-13 01:46:43.248779] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.703 [2024-10-13 01:46:43.248808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.703 qpair failed and we were unable to recover it. 
00:35:57.703 [2024-10-13 01:46:43.258682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.703 [2024-10-13 01:46:43.258787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.703 [2024-10-13 01:46:43.258815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.703 [2024-10-13 01:46:43.258830] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.703 [2024-10-13 01:46:43.258843] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.703 [2024-10-13 01:46:43.258871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.703 qpair failed and we were unable to recover it. 00:35:57.703 [2024-10-13 01:46:43.268641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.703 [2024-10-13 01:46:43.268728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.703 [2024-10-13 01:46:43.268752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.703 [2024-10-13 01:46:43.268766] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.704 [2024-10-13 01:46:43.268783] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.704 [2024-10-13 01:46:43.268813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.704 qpair failed and we were unable to recover it. 00:35:57.704 [2024-10-13 01:46:43.278705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.704 [2024-10-13 01:46:43.278806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.704 [2024-10-13 01:46:43.278835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.704 [2024-10-13 01:46:43.278850] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.704 [2024-10-13 01:46:43.278862] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.704 [2024-10-13 01:46:43.278892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.704 qpair failed and we were unable to recover it. 
00:35:57.962 [2024-10-13 01:46:43.288742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.963 [2024-10-13 01:46:43.288838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.963 [2024-10-13 01:46:43.288866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.963 [2024-10-13 01:46:43.288882] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.963 [2024-10-13 01:46:43.288894] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.963 [2024-10-13 01:46:43.288923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.963 qpair failed and we were unable to recover it. 00:35:57.963 [2024-10-13 01:46:43.298806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.963 [2024-10-13 01:46:43.298912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.963 [2024-10-13 01:46:43.298942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.963 [2024-10-13 01:46:43.298958] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.963 [2024-10-13 01:46:43.298970] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.963 [2024-10-13 01:46:43.298998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.963 qpair failed and we were unable to recover it. 00:35:57.963 [2024-10-13 01:46:43.308755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.963 [2024-10-13 01:46:43.308847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.963 [2024-10-13 01:46:43.308873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.963 [2024-10-13 01:46:43.308887] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.963 [2024-10-13 01:46:43.308899] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.963 [2024-10-13 01:46:43.308928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.963 qpair failed and we were unable to recover it. 
00:35:57.963 [2024-10-13 01:46:43.318824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.963 [2024-10-13 01:46:43.318922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.963 [2024-10-13 01:46:43.318948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.963 [2024-10-13 01:46:43.318963] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.963 [2024-10-13 01:46:43.318975] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.963 [2024-10-13 01:46:43.319002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.963 qpair failed and we were unable to recover it. 00:35:57.963 [2024-10-13 01:46:43.329021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.963 [2024-10-13 01:46:43.329129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.963 [2024-10-13 01:46:43.329156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.963 [2024-10-13 01:46:43.329169] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.963 [2024-10-13 01:46:43.329182] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.963 [2024-10-13 01:46:43.329210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.963 qpair failed and we were unable to recover it. 00:35:57.963 [2024-10-13 01:46:43.338934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.963 [2024-10-13 01:46:43.339023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.963 [2024-10-13 01:46:43.339048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.963 [2024-10-13 01:46:43.339062] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.963 [2024-10-13 01:46:43.339074] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.963 [2024-10-13 01:46:43.339103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.963 qpair failed and we were unable to recover it. 
00:35:57.963 [2024-10-13 01:46:43.348983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.963 [2024-10-13 01:46:43.349087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.963 [2024-10-13 01:46:43.349113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.963 [2024-10-13 01:46:43.349126] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.963 [2024-10-13 01:46:43.349138] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.963 [2024-10-13 01:46:43.349166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.963 qpair failed and we were unable to recover it. 00:35:57.963 [2024-10-13 01:46:43.359082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.963 [2024-10-13 01:46:43.359174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.963 [2024-10-13 01:46:43.359198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.963 [2024-10-13 01:46:43.359217] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.963 [2024-10-13 01:46:43.359231] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.963 [2024-10-13 01:46:43.359259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.963 qpair failed and we were unable to recover it. 00:35:57.963 [2024-10-13 01:46:43.368961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.963 [2024-10-13 01:46:43.369091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.963 [2024-10-13 01:46:43.369116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.963 [2024-10-13 01:46:43.369131] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.963 [2024-10-13 01:46:43.369143] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.963 [2024-10-13 01:46:43.369171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.963 qpair failed and we were unable to recover it. 
00:35:57.963 [2024-10-13 01:46:43.379009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.963 [2024-10-13 01:46:43.379119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.963 [2024-10-13 01:46:43.379144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.963 [2024-10-13 01:46:43.379158] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.963 [2024-10-13 01:46:43.379170] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.963 [2024-10-13 01:46:43.379199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.963 qpair failed and we were unable to recover it. 00:35:57.963 [2024-10-13 01:46:43.389061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.963 [2024-10-13 01:46:43.389149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.963 [2024-10-13 01:46:43.389175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.963 [2024-10-13 01:46:43.389189] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.963 [2024-10-13 01:46:43.389201] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.963 [2024-10-13 01:46:43.389229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.963 qpair failed and we were unable to recover it. 00:35:57.963 [2024-10-13 01:46:43.399045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.963 [2024-10-13 01:46:43.399137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.963 [2024-10-13 01:46:43.399162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.963 [2024-10-13 01:46:43.399176] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.963 [2024-10-13 01:46:43.399188] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.963 [2024-10-13 01:46:43.399215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.963 qpair failed and we were unable to recover it. 
00:35:57.963 [2024-10-13 01:46:43.409103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.963 [2024-10-13 01:46:43.409195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.963 [2024-10-13 01:46:43.409220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.963 [2024-10-13 01:46:43.409234] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.963 [2024-10-13 01:46:43.409246] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.963 [2024-10-13 01:46:43.409273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.963 qpair failed and we were unable to recover it. 00:35:57.963 [2024-10-13 01:46:43.419132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.963 [2024-10-13 01:46:43.419264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.963 [2024-10-13 01:46:43.419293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.963 [2024-10-13 01:46:43.419309] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.963 [2024-10-13 01:46:43.419321] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.963 [2024-10-13 01:46:43.419351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.963 qpair failed and we were unable to recover it. 00:35:57.964 [2024-10-13 01:46:43.429137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.964 [2024-10-13 01:46:43.429227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.964 [2024-10-13 01:46:43.429254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.964 [2024-10-13 01:46:43.429268] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.964 [2024-10-13 01:46:43.429281] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.964 [2024-10-13 01:46:43.429309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.964 qpair failed and we were unable to recover it. 
00:35:57.964 [2024-10-13 01:46:43.439186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.964 [2024-10-13 01:46:43.439278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.964 [2024-10-13 01:46:43.439304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.964 [2024-10-13 01:46:43.439318] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.964 [2024-10-13 01:46:43.439330] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.964 [2024-10-13 01:46:43.439358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.964 qpair failed and we were unable to recover it. 00:35:57.964 [2024-10-13 01:46:43.449199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.964 [2024-10-13 01:46:43.449289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.964 [2024-10-13 01:46:43.449313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.964 [2024-10-13 01:46:43.449334] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.964 [2024-10-13 01:46:43.449347] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.964 [2024-10-13 01:46:43.449375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.964 qpair failed and we were unable to recover it. 00:35:57.964 [2024-10-13 01:46:43.459219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.964 [2024-10-13 01:46:43.459316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.964 [2024-10-13 01:46:43.459341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.964 [2024-10-13 01:46:43.459355] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.964 [2024-10-13 01:46:43.459367] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.964 [2024-10-13 01:46:43.459395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.964 qpair failed and we were unable to recover it. 
00:35:57.964 [2024-10-13 01:46:43.469281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.964 [2024-10-13 01:46:43.469387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.964 [2024-10-13 01:46:43.469412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.964 [2024-10-13 01:46:43.469426] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.964 [2024-10-13 01:46:43.469438] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.964 [2024-10-13 01:46:43.469466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.964 qpair failed and we were unable to recover it. 00:35:57.964 [2024-10-13 01:46:43.479289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.964 [2024-10-13 01:46:43.479394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.964 [2024-10-13 01:46:43.479418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.964 [2024-10-13 01:46:43.479432] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.964 [2024-10-13 01:46:43.479444] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.964 [2024-10-13 01:46:43.479478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.964 qpair failed and we were unable to recover it. 00:35:57.964 [2024-10-13 01:46:43.489314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.964 [2024-10-13 01:46:43.489410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.964 [2024-10-13 01:46:43.489434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.964 [2024-10-13 01:46:43.489448] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.964 [2024-10-13 01:46:43.489460] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.964 [2024-10-13 01:46:43.489495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.964 qpair failed and we were unable to recover it. 
00:35:57.964 [2024-10-13 01:46:43.499368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.964 [2024-10-13 01:46:43.499455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.964 [2024-10-13 01:46:43.499487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.964 [2024-10-13 01:46:43.499502] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.964 [2024-10-13 01:46:43.499515] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.964 [2024-10-13 01:46:43.499543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.964 qpair failed and we were unable to recover it. 00:35:57.964 [2024-10-13 01:46:43.509354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.964 [2024-10-13 01:46:43.509444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.964 [2024-10-13 01:46:43.509469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.964 [2024-10-13 01:46:43.509491] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.964 [2024-10-13 01:46:43.509503] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.964 [2024-10-13 01:46:43.509531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.964 qpair failed and we were unable to recover it. 00:35:57.964 [2024-10-13 01:46:43.519429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.964 [2024-10-13 01:46:43.519543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.964 [2024-10-13 01:46:43.519568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.964 [2024-10-13 01:46:43.519582] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.964 [2024-10-13 01:46:43.519594] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.964 [2024-10-13 01:46:43.519622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.964 qpair failed and we were unable to recover it. 
00:35:57.964 [2024-10-13 01:46:43.529531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.964 [2024-10-13 01:46:43.529619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.964 [2024-10-13 01:46:43.529644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.964 [2024-10-13 01:46:43.529658] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.964 [2024-10-13 01:46:43.529670] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.964 [2024-10-13 01:46:43.529699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.964 qpair failed and we were unable to recover it. 00:35:57.964 [2024-10-13 01:46:43.539466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.964 [2024-10-13 01:46:43.539590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.964 [2024-10-13 01:46:43.539620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.964 [2024-10-13 01:46:43.539640] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.964 [2024-10-13 01:46:43.539654] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:57.964 [2024-10-13 01:46:43.539683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:57.964 qpair failed and we were unable to recover it. 00:35:58.223 [2024-10-13 01:46:43.549580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.223 [2024-10-13 01:46:43.549682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.223 [2024-10-13 01:46:43.549708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.223 [2024-10-13 01:46:43.549723] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.223 [2024-10-13 01:46:43.549736] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.223 [2024-10-13 01:46:43.549765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.223 qpair failed and we were unable to recover it. 
00:35:58.223 [2024-10-13 01:46:43.559546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.223 [2024-10-13 01:46:43.559639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.223 [2024-10-13 01:46:43.559664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.223 [2024-10-13 01:46:43.559678] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.223 [2024-10-13 01:46:43.559690] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.223 [2024-10-13 01:46:43.559719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.223 qpair failed and we were unable to recover it. 00:35:58.223 [2024-10-13 01:46:43.569631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.223 [2024-10-13 01:46:43.569753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.223 [2024-10-13 01:46:43.569777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.223 [2024-10-13 01:46:43.569791] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.223 [2024-10-13 01:46:43.569803] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.223 [2024-10-13 01:46:43.569831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.223 qpair failed and we were unable to recover it. 00:35:58.223 [2024-10-13 01:46:43.579577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.223 [2024-10-13 01:46:43.579672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.223 [2024-10-13 01:46:43.579697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.223 [2024-10-13 01:46:43.579711] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.223 [2024-10-13 01:46:43.579724] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.224 [2024-10-13 01:46:43.579752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.224 qpair failed and we were unable to recover it. 
00:35:58.224 [2024-10-13 01:46:43.589605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.224 [2024-10-13 01:46:43.589689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.224 [2024-10-13 01:46:43.589713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.224 [2024-10-13 01:46:43.589726] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.224 [2024-10-13 01:46:43.589738] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.224 [2024-10-13 01:46:43.589767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.224 qpair failed and we were unable to recover it. 00:35:58.224 [2024-10-13 01:46:43.599660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.224 [2024-10-13 01:46:43.599772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.224 [2024-10-13 01:46:43.599799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.224 [2024-10-13 01:46:43.599813] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.224 [2024-10-13 01:46:43.599826] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.224 [2024-10-13 01:46:43.599855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.224 qpair failed and we were unable to recover it. 00:35:58.224 [2024-10-13 01:46:43.609657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.224 [2024-10-13 01:46:43.609748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.224 [2024-10-13 01:46:43.609772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.224 [2024-10-13 01:46:43.609785] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.224 [2024-10-13 01:46:43.609799] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.224 [2024-10-13 01:46:43.609827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.224 qpair failed and we were unable to recover it. 
00:35:58.224 [2024-10-13 01:46:43.619678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.224 [2024-10-13 01:46:43.619780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.224 [2024-10-13 01:46:43.619804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.224 [2024-10-13 01:46:43.619818] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.224 [2024-10-13 01:46:43.619830] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.224 [2024-10-13 01:46:43.619859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.224 qpair failed and we were unable to recover it. 00:35:58.224 [2024-10-13 01:46:43.629741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.224 [2024-10-13 01:46:43.629830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.224 [2024-10-13 01:46:43.629859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.224 [2024-10-13 01:46:43.629879] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.224 [2024-10-13 01:46:43.629893] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.224 [2024-10-13 01:46:43.629922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.224 qpair failed and we were unable to recover it. 00:35:58.224 [2024-10-13 01:46:43.639794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.224 [2024-10-13 01:46:43.639892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.224 [2024-10-13 01:46:43.639917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.224 [2024-10-13 01:46:43.639931] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.224 [2024-10-13 01:46:43.639944] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.224 [2024-10-13 01:46:43.639972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.224 qpair failed and we were unable to recover it. 
00:35:58.224 [2024-10-13 01:46:43.649874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.224 [2024-10-13 01:46:43.649992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.224 [2024-10-13 01:46:43.650020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.224 [2024-10-13 01:46:43.650036] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.224 [2024-10-13 01:46:43.650049] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.224 [2024-10-13 01:46:43.650078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.224 qpair failed and we were unable to recover it. 00:35:58.224 [2024-10-13 01:46:43.659813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.224 [2024-10-13 01:46:43.659944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.224 [2024-10-13 01:46:43.659971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.224 [2024-10-13 01:46:43.659985] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.224 [2024-10-13 01:46:43.659998] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.224 [2024-10-13 01:46:43.660027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.224 qpair failed and we were unable to recover it. 00:35:58.224 [2024-10-13 01:46:43.669824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.224 [2024-10-13 01:46:43.669931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.224 [2024-10-13 01:46:43.669956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.224 [2024-10-13 01:46:43.669969] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.224 [2024-10-13 01:46:43.669981] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.224 [2024-10-13 01:46:43.670010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.224 qpair failed and we were unable to recover it. 
00:35:58.224 [2024-10-13 01:46:43.679940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.224 [2024-10-13 01:46:43.680028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.224 [2024-10-13 01:46:43.680053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.224 [2024-10-13 01:46:43.680067] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.224 [2024-10-13 01:46:43.680080] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.224 [2024-10-13 01:46:43.680108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.224 qpair failed and we were unable to recover it. 00:35:58.224 [2024-10-13 01:46:43.689894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.224 [2024-10-13 01:46:43.690018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.224 [2024-10-13 01:46:43.690042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.224 [2024-10-13 01:46:43.690056] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.224 [2024-10-13 01:46:43.690069] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.224 [2024-10-13 01:46:43.690097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.224 qpair failed and we were unable to recover it. 00:35:58.224 [2024-10-13 01:46:43.699895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.224 [2024-10-13 01:46:43.699987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.224 [2024-10-13 01:46:43.700013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.224 [2024-10-13 01:46:43.700027] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.224 [2024-10-13 01:46:43.700040] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.224 [2024-10-13 01:46:43.700067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.224 qpair failed and we were unable to recover it. 
00:35:58.224 [2024-10-13 01:46:43.709921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.224 [2024-10-13 01:46:43.710058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.224 [2024-10-13 01:46:43.710082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.224 [2024-10-13 01:46:43.710096] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.224 [2024-10-13 01:46:43.710108] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.224 [2024-10-13 01:46:43.710136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.224 qpair failed and we were unable to recover it. 00:35:58.224 [2024-10-13 01:46:43.720021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.224 [2024-10-13 01:46:43.720116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.224 [2024-10-13 01:46:43.720149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.224 [2024-10-13 01:46:43.720165] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.224 [2024-10-13 01:46:43.720177] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.224 [2024-10-13 01:46:43.720206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.224 qpair failed and we were unable to recover it. 00:35:58.224 [2024-10-13 01:46:43.730116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.225 [2024-10-13 01:46:43.730203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.225 [2024-10-13 01:46:43.730228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.225 [2024-10-13 01:46:43.730242] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.225 [2024-10-13 01:46:43.730253] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.225 [2024-10-13 01:46:43.730282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.225 qpair failed and we were unable to recover it. 
00:35:58.225 [2024-10-13 01:46:43.740070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.225 [2024-10-13 01:46:43.740191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.225 [2024-10-13 01:46:43.740215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.225 [2024-10-13 01:46:43.740229] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.225 [2024-10-13 01:46:43.740241] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.225 [2024-10-13 01:46:43.740269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.225 qpair failed and we were unable to recover it. 00:35:58.225 [2024-10-13 01:46:43.750033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.225 [2024-10-13 01:46:43.750113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.225 [2024-10-13 01:46:43.750138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.225 [2024-10-13 01:46:43.750152] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.225 [2024-10-13 01:46:43.750164] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.225 [2024-10-13 01:46:43.750192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.225 qpair failed and we were unable to recover it. 00:35:58.225 [2024-10-13 01:46:43.760083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.225 [2024-10-13 01:46:43.760213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.225 [2024-10-13 01:46:43.760237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.225 [2024-10-13 01:46:43.760251] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.225 [2024-10-13 01:46:43.760263] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.225 [2024-10-13 01:46:43.760291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.225 qpair failed and we were unable to recover it. 
00:35:58.225 [2024-10-13 01:46:43.770133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.225 [2024-10-13 01:46:43.770267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.225 [2024-10-13 01:46:43.770291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.225 [2024-10-13 01:46:43.770305] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.225 [2024-10-13 01:46:43.770318] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.225 [2024-10-13 01:46:43.770346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.225 qpair failed and we were unable to recover it. 00:35:58.225 [2024-10-13 01:46:43.780181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.225 [2024-10-13 01:46:43.780273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.225 [2024-10-13 01:46:43.780296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.225 [2024-10-13 01:46:43.780310] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.225 [2024-10-13 01:46:43.780322] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.225 [2024-10-13 01:46:43.780351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.225 qpair failed and we were unable to recover it. 00:35:58.225 [2024-10-13 01:46:43.790176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.225 [2024-10-13 01:46:43.790282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.225 [2024-10-13 01:46:43.790306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.225 [2024-10-13 01:46:43.790320] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.225 [2024-10-13 01:46:43.790332] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.225 [2024-10-13 01:46:43.790360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.225 qpair failed and we were unable to recover it. 
00:35:58.225 [2024-10-13 01:46:43.800239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.225 [2024-10-13 01:46:43.800327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.225 [2024-10-13 01:46:43.800354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.225 [2024-10-13 01:46:43.800370] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.225 [2024-10-13 01:46:43.800382] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.225 [2024-10-13 01:46:43.800412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.225 qpair failed and we were unable to recover it. 00:35:58.484 [2024-10-13 01:46:43.810265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.484 [2024-10-13 01:46:43.810383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.484 [2024-10-13 01:46:43.810417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.484 [2024-10-13 01:46:43.810434] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.484 [2024-10-13 01:46:43.810447] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.484 [2024-10-13 01:46:43.810482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.484 qpair failed and we were unable to recover it. 00:35:58.484 [2024-10-13 01:46:43.820271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.484 [2024-10-13 01:46:43.820373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.484 [2024-10-13 01:46:43.820398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.484 [2024-10-13 01:46:43.820412] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.484 [2024-10-13 01:46:43.820425] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.484 [2024-10-13 01:46:43.820454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.484 qpair failed and we were unable to recover it. 
00:35:58.484 [2024-10-13 01:46:43.830329] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.484 [2024-10-13 01:46:43.830421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.484 [2024-10-13 01:46:43.830446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.484 [2024-10-13 01:46:43.830460] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.484 [2024-10-13 01:46:43.830480] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.484 [2024-10-13 01:46:43.830511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.484 qpair failed and we were unable to recover it. 00:35:58.484 [2024-10-13 01:46:43.840353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.484 [2024-10-13 01:46:43.840461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.484 [2024-10-13 01:46:43.840496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.484 [2024-10-13 01:46:43.840511] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.484 [2024-10-13 01:46:43.840524] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.484 [2024-10-13 01:46:43.840552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.484 qpair failed and we were unable to recover it. 00:35:58.484 [2024-10-13 01:46:43.850384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.484 [2024-10-13 01:46:43.850486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.484 [2024-10-13 01:46:43.850521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.484 [2024-10-13 01:46:43.850535] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.484 [2024-10-13 01:46:43.850547] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.484 [2024-10-13 01:46:43.850580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.484 qpair failed and we were unable to recover it. 
00:35:58.484 [2024-10-13 01:46:43.860434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.484 [2024-10-13 01:46:43.860530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.484 [2024-10-13 01:46:43.860555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.484 [2024-10-13 01:46:43.860569] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.484 [2024-10-13 01:46:43.860582] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.484 [2024-10-13 01:46:43.860610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.484 qpair failed and we were unable to recover it. 00:35:58.484 [2024-10-13 01:46:43.870442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.484 [2024-10-13 01:46:43.870549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.484 [2024-10-13 01:46:43.870574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.484 [2024-10-13 01:46:43.870589] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.484 [2024-10-13 01:46:43.870601] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.484 [2024-10-13 01:46:43.870630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.484 qpair failed and we were unable to recover it. 00:35:58.484 [2024-10-13 01:46:43.880529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.484 [2024-10-13 01:46:43.880642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.484 [2024-10-13 01:46:43.880674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.484 [2024-10-13 01:46:43.880688] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.484 [2024-10-13 01:46:43.880700] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.484 [2024-10-13 01:46:43.880728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.484 qpair failed and we were unable to recover it. 
00:35:58.484 [2024-10-13 01:46:43.890494] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.484 [2024-10-13 01:46:43.890595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.484 [2024-10-13 01:46:43.890619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.485 [2024-10-13 01:46:43.890634] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.485 [2024-10-13 01:46:43.890647] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.485 [2024-10-13 01:46:43.890676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.485 qpair failed and we were unable to recover it. 00:35:58.485 [2024-10-13 01:46:43.900533] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.485 [2024-10-13 01:46:43.900651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.485 [2024-10-13 01:46:43.900688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.485 [2024-10-13 01:46:43.900704] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.485 [2024-10-13 01:46:43.900717] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.485 [2024-10-13 01:46:43.900746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.485 qpair failed and we were unable to recover it. 00:35:58.485 [2024-10-13 01:46:43.910544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.485 [2024-10-13 01:46:43.910628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.485 [2024-10-13 01:46:43.910653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.485 [2024-10-13 01:46:43.910666] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.485 [2024-10-13 01:46:43.910679] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.485 [2024-10-13 01:46:43.910707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.485 qpair failed and we were unable to recover it. 
00:35:58.485 [2024-10-13 01:46:43.920633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.485 [2024-10-13 01:46:43.920753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.485 [2024-10-13 01:46:43.920779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.485 [2024-10-13 01:46:43.920793] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.485 [2024-10-13 01:46:43.920806] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.485 [2024-10-13 01:46:43.920834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.485 qpair failed and we were unable to recover it. 00:35:58.485 [2024-10-13 01:46:43.930625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.485 [2024-10-13 01:46:43.930751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.485 [2024-10-13 01:46:43.930776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.485 [2024-10-13 01:46:43.930790] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.485 [2024-10-13 01:46:43.930802] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.485 [2024-10-13 01:46:43.930830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.485 qpair failed and we were unable to recover it. 00:35:58.485 [2024-10-13 01:46:43.940636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.485 [2024-10-13 01:46:43.940723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.485 [2024-10-13 01:46:43.940747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.485 [2024-10-13 01:46:43.940761] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.485 [2024-10-13 01:46:43.940773] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.485 [2024-10-13 01:46:43.940807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.485 qpair failed and we were unable to recover it. 
00:35:58.485 [2024-10-13 01:46:43.950658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.485 [2024-10-13 01:46:43.950751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.485 [2024-10-13 01:46:43.950776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.485 [2024-10-13 01:46:43.950790] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.485 [2024-10-13 01:46:43.950803] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.485 [2024-10-13 01:46:43.950831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.485 qpair failed and we were unable to recover it. 00:35:58.485 [2024-10-13 01:46:43.960765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.485 [2024-10-13 01:46:43.960869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.485 [2024-10-13 01:46:43.960893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.485 [2024-10-13 01:46:43.960907] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.485 [2024-10-13 01:46:43.960920] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.485 [2024-10-13 01:46:43.960948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.485 qpair failed and we were unable to recover it. 00:35:58.485 [2024-10-13 01:46:43.970757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.485 [2024-10-13 01:46:43.970846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.485 [2024-10-13 01:46:43.970870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.485 [2024-10-13 01:46:43.970884] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.485 [2024-10-13 01:46:43.970896] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.485 [2024-10-13 01:46:43.970924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.485 qpair failed and we were unable to recover it. 
00:35:58.485 [2024-10-13 01:46:43.980803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.485 [2024-10-13 01:46:43.980898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.485 [2024-10-13 01:46:43.980922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.485 [2024-10-13 01:46:43.980936] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.485 [2024-10-13 01:46:43.980948] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.485 [2024-10-13 01:46:43.980977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.485 qpair failed and we were unable to recover it. 00:35:58.485 [2024-10-13 01:46:43.990782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.485 [2024-10-13 01:46:43.990865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.485 [2024-10-13 01:46:43.990895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.485 [2024-10-13 01:46:43.990910] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.485 [2024-10-13 01:46:43.990922] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.485 [2024-10-13 01:46:43.990950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.485 qpair failed and we were unable to recover it. 00:35:58.485 [2024-10-13 01:46:44.000957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.485 [2024-10-13 01:46:44.001047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.485 [2024-10-13 01:46:44.001072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.485 [2024-10-13 01:46:44.001086] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.485 [2024-10-13 01:46:44.001099] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.485 [2024-10-13 01:46:44.001127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.485 qpair failed and we were unable to recover it. 
00:35:58.485 [2024-10-13 01:46:44.010839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.485 [2024-10-13 01:46:44.010962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.485 [2024-10-13 01:46:44.010989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.485 [2024-10-13 01:46:44.011004] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.485 [2024-10-13 01:46:44.011017] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.485 [2024-10-13 01:46:44.011045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.485 qpair failed and we were unable to recover it. 00:35:58.485 [2024-10-13 01:46:44.020873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.485 [2024-10-13 01:46:44.020956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.485 [2024-10-13 01:46:44.020981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.485 [2024-10-13 01:46:44.020995] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.485 [2024-10-13 01:46:44.021007] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.485 [2024-10-13 01:46:44.021036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.485 qpair failed and we were unable to recover it. 00:35:58.485 [2024-10-13 01:46:44.030907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.485 [2024-10-13 01:46:44.030995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.485 [2024-10-13 01:46:44.031020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.485 [2024-10-13 01:46:44.031034] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.486 [2024-10-13 01:46:44.031047] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.486 [2024-10-13 01:46:44.031079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.486 qpair failed and we were unable to recover it. 
00:35:58.486 [2024-10-13 01:46:44.040936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.486 [2024-10-13 01:46:44.041056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.486 [2024-10-13 01:46:44.041080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.486 [2024-10-13 01:46:44.041094] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.486 [2024-10-13 01:46:44.041107] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.486 [2024-10-13 01:46:44.041135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.486 qpair failed and we were unable to recover it. 00:35:58.486 [2024-10-13 01:46:44.050970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.486 [2024-10-13 01:46:44.051089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.486 [2024-10-13 01:46:44.051113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.486 [2024-10-13 01:46:44.051127] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.486 [2024-10-13 01:46:44.051140] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.486 [2024-10-13 01:46:44.051168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.486 qpair failed and we were unable to recover it. 00:35:58.486 [2024-10-13 01:46:44.061011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.486 [2024-10-13 01:46:44.061095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.486 [2024-10-13 01:46:44.061122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.486 [2024-10-13 01:46:44.061137] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.486 [2024-10-13 01:46:44.061149] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.486 [2024-10-13 01:46:44.061179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.486 qpair failed and we were unable to recover it. 
00:35:58.748 [2024-10-13 01:46:44.071006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.748 [2024-10-13 01:46:44.071093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.748 [2024-10-13 01:46:44.071120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.748 [2024-10-13 01:46:44.071135] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.748 [2024-10-13 01:46:44.071147] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.748 [2024-10-13 01:46:44.071177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.748 qpair failed and we were unable to recover it. 00:35:58.748 [2024-10-13 01:46:44.081052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.748 [2024-10-13 01:46:44.081143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.748 [2024-10-13 01:46:44.081173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.748 [2024-10-13 01:46:44.081188] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.748 [2024-10-13 01:46:44.081201] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.748 [2024-10-13 01:46:44.081229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.748 qpair failed and we were unable to recover it. 00:35:58.748 [2024-10-13 01:46:44.091062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.748 [2024-10-13 01:46:44.091149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.748 [2024-10-13 01:46:44.091174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.748 [2024-10-13 01:46:44.091189] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.748 [2024-10-13 01:46:44.091201] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.748 [2024-10-13 01:46:44.091230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.748 qpair failed and we were unable to recover it. 
00:35:58.748 [2024-10-13 01:46:44.101075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.748 [2024-10-13 01:46:44.101165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.748 [2024-10-13 01:46:44.101190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.748 [2024-10-13 01:46:44.101204] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.748 [2024-10-13 01:46:44.101216] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.748 [2024-10-13 01:46:44.101244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.748 qpair failed and we were unable to recover it. 00:35:58.748 [2024-10-13 01:46:44.111129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.748 [2024-10-13 01:46:44.111216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.748 [2024-10-13 01:46:44.111245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.748 [2024-10-13 01:46:44.111260] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.748 [2024-10-13 01:46:44.111272] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.748 [2024-10-13 01:46:44.111300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.748 qpair failed and we were unable to recover it. 00:35:58.748 [2024-10-13 01:46:44.121186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.748 [2024-10-13 01:46:44.121275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.748 [2024-10-13 01:46:44.121300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.748 [2024-10-13 01:46:44.121314] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.748 [2024-10-13 01:46:44.121327] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.748 [2024-10-13 01:46:44.121360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.748 qpair failed and we were unable to recover it. 
00:35:58.748 [2024-10-13 01:46:44.131212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.748 [2024-10-13 01:46:44.131339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.748 [2024-10-13 01:46:44.131364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.748 [2024-10-13 01:46:44.131379] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.748 [2024-10-13 01:46:44.131391] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.748 [2024-10-13 01:46:44.131419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.748 qpair failed and we were unable to recover it. 00:35:58.748 [2024-10-13 01:46:44.141211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.748 [2024-10-13 01:46:44.141301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.748 [2024-10-13 01:46:44.141326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.748 [2024-10-13 01:46:44.141340] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.748 [2024-10-13 01:46:44.141352] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.748 [2024-10-13 01:46:44.141380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.748 qpair failed and we were unable to recover it. 00:35:58.748 [2024-10-13 01:46:44.151236] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.748 [2024-10-13 01:46:44.151322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.748 [2024-10-13 01:46:44.151347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.748 [2024-10-13 01:46:44.151362] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.748 [2024-10-13 01:46:44.151374] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.748 [2024-10-13 01:46:44.151402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.748 qpair failed and we were unable to recover it. 
00:35:58.748 [2024-10-13 01:46:44.161282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.748 [2024-10-13 01:46:44.161374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.748 [2024-10-13 01:46:44.161399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.748 [2024-10-13 01:46:44.161413] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.748 [2024-10-13 01:46:44.161425] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.748 [2024-10-13 01:46:44.161454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.748 qpair failed and we were unable to recover it. 00:35:58.748 [2024-10-13 01:46:44.171300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.748 [2024-10-13 01:46:44.171387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.748 [2024-10-13 01:46:44.171416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.748 [2024-10-13 01:46:44.171431] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.748 [2024-10-13 01:46:44.171443] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.749 [2024-10-13 01:46:44.171480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.749 qpair failed and we were unable to recover it. 00:35:58.749 [2024-10-13 01:46:44.181329] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.749 [2024-10-13 01:46:44.181414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.749 [2024-10-13 01:46:44.181439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.749 [2024-10-13 01:46:44.181453] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.749 [2024-10-13 01:46:44.181465] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.749 [2024-10-13 01:46:44.181502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.749 qpair failed and we were unable to recover it. 
00:35:58.749 [2024-10-13 01:46:44.191378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.749 [2024-10-13 01:46:44.191462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.749 [2024-10-13 01:46:44.191493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.749 [2024-10-13 01:46:44.191508] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.749 [2024-10-13 01:46:44.191521] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.749 [2024-10-13 01:46:44.191549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.749 qpair failed and we were unable to recover it. 00:35:58.749 [2024-10-13 01:46:44.201431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.749 [2024-10-13 01:46:44.201550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.749 [2024-10-13 01:46:44.201575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.749 [2024-10-13 01:46:44.201589] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.749 [2024-10-13 01:46:44.201601] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.749 [2024-10-13 01:46:44.201629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.749 qpair failed and we were unable to recover it. 00:35:58.749 [2024-10-13 01:46:44.211406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.749 [2024-10-13 01:46:44.211497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.749 [2024-10-13 01:46:44.211523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.749 [2024-10-13 01:46:44.211537] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.749 [2024-10-13 01:46:44.211554] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.749 [2024-10-13 01:46:44.211583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.749 qpair failed and we were unable to recover it. 
00:35:58.749 [2024-10-13 01:46:44.221451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.749 [2024-10-13 01:46:44.221552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.749 [2024-10-13 01:46:44.221576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.749 [2024-10-13 01:46:44.221591] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.749 [2024-10-13 01:46:44.221603] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.749 [2024-10-13 01:46:44.221631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.749 qpair failed and we were unable to recover it. 00:35:58.749 [2024-10-13 01:46:44.231484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.749 [2024-10-13 01:46:44.231570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.749 [2024-10-13 01:46:44.231596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.749 [2024-10-13 01:46:44.231610] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.749 [2024-10-13 01:46:44.231622] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.749 [2024-10-13 01:46:44.231650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.749 qpair failed and we were unable to recover it. 00:35:58.749 [2024-10-13 01:46:44.241524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.749 [2024-10-13 01:46:44.241617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.749 [2024-10-13 01:46:44.241641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.749 [2024-10-13 01:46:44.241655] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.749 [2024-10-13 01:46:44.241668] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.749 [2024-10-13 01:46:44.241696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.749 qpair failed and we were unable to recover it. 
00:35:58.749 [2024-10-13 01:46:44.251523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.749 [2024-10-13 01:46:44.251614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.749 [2024-10-13 01:46:44.251638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.749 [2024-10-13 01:46:44.251652] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.749 [2024-10-13 01:46:44.251664] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.749 [2024-10-13 01:46:44.251693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.749 qpair failed and we were unable to recover it. 00:35:58.749 [2024-10-13 01:46:44.261552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.749 [2024-10-13 01:46:44.261679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.749 [2024-10-13 01:46:44.261704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.749 [2024-10-13 01:46:44.261718] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.749 [2024-10-13 01:46:44.261730] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.749 [2024-10-13 01:46:44.261759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.749 qpair failed and we were unable to recover it. 00:35:58.749 [2024-10-13 01:46:44.271629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.749 [2024-10-13 01:46:44.271716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.749 [2024-10-13 01:46:44.271740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.749 [2024-10-13 01:46:44.271754] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.749 [2024-10-13 01:46:44.271766] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.749 [2024-10-13 01:46:44.271794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.749 qpair failed and we were unable to recover it. 
00:35:58.749 [2024-10-13 01:46:44.281632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.749 [2024-10-13 01:46:44.281717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.749 [2024-10-13 01:46:44.281742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.749 [2024-10-13 01:46:44.281756] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.749 [2024-10-13 01:46:44.281768] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.749 [2024-10-13 01:46:44.281796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.749 qpair failed and we were unable to recover it. 00:35:58.749 [2024-10-13 01:46:44.291642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.749 [2024-10-13 01:46:44.291733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.749 [2024-10-13 01:46:44.291757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.749 [2024-10-13 01:46:44.291772] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.749 [2024-10-13 01:46:44.291784] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.749 [2024-10-13 01:46:44.291812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.749 qpair failed and we were unable to recover it. 00:35:58.749 [2024-10-13 01:46:44.301679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.749 [2024-10-13 01:46:44.301774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.749 [2024-10-13 01:46:44.301799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.749 [2024-10-13 01:46:44.301812] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.749 [2024-10-13 01:46:44.301830] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.749 [2024-10-13 01:46:44.301859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.749 qpair failed and we were unable to recover it. 
00:35:58.749 [2024-10-13 01:46:44.311717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.749 [2024-10-13 01:46:44.311803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.749 [2024-10-13 01:46:44.311827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.749 [2024-10-13 01:46:44.311842] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.749 [2024-10-13 01:46:44.311854] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.749 [2024-10-13 01:46:44.311882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.750 qpair failed and we were unable to recover it. 00:35:58.750 [2024-10-13 01:46:44.321768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.750 [2024-10-13 01:46:44.321861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.750 [2024-10-13 01:46:44.321886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.750 [2024-10-13 01:46:44.321900] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.750 [2024-10-13 01:46:44.321913] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:58.750 [2024-10-13 01:46:44.321941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:58.750 qpair failed and we were unable to recover it. 00:35:59.009 [2024-10-13 01:46:44.331760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.010 [2024-10-13 01:46:44.331878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.010 [2024-10-13 01:46:44.331903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.010 [2024-10-13 01:46:44.331917] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.010 [2024-10-13 01:46:44.331930] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.010 [2024-10-13 01:46:44.331958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.010 qpair failed and we were unable to recover it. 
00:35:59.010 [2024-10-13 01:46:44.341830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.010 [2024-10-13 01:46:44.341913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.010 [2024-10-13 01:46:44.341937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.010 [2024-10-13 01:46:44.341952] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.010 [2024-10-13 01:46:44.341964] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.010 [2024-10-13 01:46:44.341992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.010 qpair failed and we were unable to recover it. 00:35:59.010 [2024-10-13 01:46:44.351797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.010 [2024-10-13 01:46:44.351890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.010 [2024-10-13 01:46:44.351914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.010 [2024-10-13 01:46:44.351929] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.010 [2024-10-13 01:46:44.351941] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.010 [2024-10-13 01:46:44.351969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.010 qpair failed and we were unable to recover it. 00:35:59.010 [2024-10-13 01:46:44.361870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.010 [2024-10-13 01:46:44.361959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.010 [2024-10-13 01:46:44.361984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.010 [2024-10-13 01:46:44.361998] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.010 [2024-10-13 01:46:44.362010] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.010 [2024-10-13 01:46:44.362038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.010 qpair failed and we were unable to recover it. 
00:35:59.010 [2024-10-13 01:46:44.371917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.010 [2024-10-13 01:46:44.372002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.010 [2024-10-13 01:46:44.372027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.010 [2024-10-13 01:46:44.372041] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.010 [2024-10-13 01:46:44.372053] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.010 [2024-10-13 01:46:44.372082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.010 qpair failed and we were unable to recover it. 00:35:59.010 [2024-10-13 01:46:44.381897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.010 [2024-10-13 01:46:44.381995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.010 [2024-10-13 01:46:44.382020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.010 [2024-10-13 01:46:44.382034] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.010 [2024-10-13 01:46:44.382047] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.010 [2024-10-13 01:46:44.382075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.010 qpair failed and we were unable to recover it. 00:35:59.010 [2024-10-13 01:46:44.391935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.010 [2024-10-13 01:46:44.392021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.010 [2024-10-13 01:46:44.392044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.010 [2024-10-13 01:46:44.392057] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.010 [2024-10-13 01:46:44.392076] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.010 [2024-10-13 01:46:44.392104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.010 qpair failed and we were unable to recover it. 
00:35:59.010 [2024-10-13 01:46:44.401975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.010 [2024-10-13 01:46:44.402087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.010 [2024-10-13 01:46:44.402113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.010 [2024-10-13 01:46:44.402127] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.010 [2024-10-13 01:46:44.402140] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.010 [2024-10-13 01:46:44.402168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.010 qpair failed and we were unable to recover it. 00:35:59.010 [2024-10-13 01:46:44.412065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.010 [2024-10-13 01:46:44.412153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.010 [2024-10-13 01:46:44.412178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.010 [2024-10-13 01:46:44.412191] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.010 [2024-10-13 01:46:44.412204] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.010 [2024-10-13 01:46:44.412233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.010 qpair failed and we were unable to recover it. 00:35:59.010 [2024-10-13 01:46:44.422018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.010 [2024-10-13 01:46:44.422108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.010 [2024-10-13 01:46:44.422135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.010 [2024-10-13 01:46:44.422150] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.010 [2024-10-13 01:46:44.422162] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.010 [2024-10-13 01:46:44.422190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.010 qpair failed and we were unable to recover it. 
00:35:59.010 [2024-10-13 01:46:44.432051] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.010 [2024-10-13 01:46:44.432135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.010 [2024-10-13 01:46:44.432159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.010 [2024-10-13 01:46:44.432172] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.010 [2024-10-13 01:46:44.432184] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.010 [2024-10-13 01:46:44.432212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.010 qpair failed and we were unable to recover it. 00:35:59.010 [2024-10-13 01:46:44.442048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.010 [2024-10-13 01:46:44.442143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.010 [2024-10-13 01:46:44.442167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.010 [2024-10-13 01:46:44.442181] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.010 [2024-10-13 01:46:44.442194] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.010 [2024-10-13 01:46:44.442221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.010 qpair failed and we were unable to recover it. 00:35:59.010 [2024-10-13 01:46:44.452110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.010 [2024-10-13 01:46:44.452247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.010 [2024-10-13 01:46:44.452274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.010 [2024-10-13 01:46:44.452288] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.010 [2024-10-13 01:46:44.452300] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.010 [2024-10-13 01:46:44.452329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.010 qpair failed and we were unable to recover it. 
00:35:59.010 [2024-10-13 01:46:44.462112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.010 [2024-10-13 01:46:44.462212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.010 [2024-10-13 01:46:44.462238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.010 [2024-10-13 01:46:44.462252] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.010 [2024-10-13 01:46:44.462264] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.010 [2024-10-13 01:46:44.462292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.010 qpair failed and we were unable to recover it. 00:35:59.010 [2024-10-13 01:46:44.472129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.011 [2024-10-13 01:46:44.472214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.011 [2024-10-13 01:46:44.472238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.011 [2024-10-13 01:46:44.472251] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.011 [2024-10-13 01:46:44.472263] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.011 [2024-10-13 01:46:44.472292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.011 qpair failed and we were unable to recover it. 00:35:59.011 [2024-10-13 01:46:44.482220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.011 [2024-10-13 01:46:44.482348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.011 [2024-10-13 01:46:44.482374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.011 [2024-10-13 01:46:44.482388] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.011 [2024-10-13 01:46:44.482406] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.011 [2024-10-13 01:46:44.482435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.011 qpair failed and we were unable to recover it. 
00:35:59.011 [2024-10-13 01:46:44.492227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.011 [2024-10-13 01:46:44.492346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.011 [2024-10-13 01:46:44.492372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.011 [2024-10-13 01:46:44.492386] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.011 [2024-10-13 01:46:44.492399] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.011 [2024-10-13 01:46:44.492427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.011 qpair failed and we were unable to recover it. 00:35:59.011 [2024-10-13 01:46:44.502239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.011 [2024-10-13 01:46:44.502335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.011 [2024-10-13 01:46:44.502359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.011 [2024-10-13 01:46:44.502373] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.011 [2024-10-13 01:46:44.502385] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.011 [2024-10-13 01:46:44.502413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.011 qpair failed and we were unable to recover it. 00:35:59.011 [2024-10-13 01:46:44.512259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.011 [2024-10-13 01:46:44.512381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.011 [2024-10-13 01:46:44.512407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.011 [2024-10-13 01:46:44.512421] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.011 [2024-10-13 01:46:44.512433] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.011 [2024-10-13 01:46:44.512461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.011 qpair failed and we were unable to recover it. 
00:35:59.011 [2024-10-13 01:46:44.522337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.011 [2024-10-13 01:46:44.522483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.011 [2024-10-13 01:46:44.522511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.011 [2024-10-13 01:46:44.522525] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.011 [2024-10-13 01:46:44.522538] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.011 [2024-10-13 01:46:44.522566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.011 qpair failed and we were unable to recover it. 00:35:59.011 [2024-10-13 01:46:44.532349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.011 [2024-10-13 01:46:44.532434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.011 [2024-10-13 01:46:44.532459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.011 [2024-10-13 01:46:44.532487] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.011 [2024-10-13 01:46:44.532503] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.011 [2024-10-13 01:46:44.532531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.011 qpair failed and we were unable to recover it. 00:35:59.011 [2024-10-13 01:46:44.542350] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.011 [2024-10-13 01:46:44.542491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.011 [2024-10-13 01:46:44.542518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.011 [2024-10-13 01:46:44.542532] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.011 [2024-10-13 01:46:44.542544] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.011 [2024-10-13 01:46:44.542572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.011 qpair failed and we were unable to recover it. 
00:35:59.011 [2024-10-13 01:46:44.552381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.011 [2024-10-13 01:46:44.552466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.011 [2024-10-13 01:46:44.552497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.011 [2024-10-13 01:46:44.552511] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.011 [2024-10-13 01:46:44.552523] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.011 [2024-10-13 01:46:44.552551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.011 qpair failed and we were unable to recover it. 00:35:59.011 [2024-10-13 01:46:44.562452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.011 [2024-10-13 01:46:44.562557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.011 [2024-10-13 01:46:44.562582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.011 [2024-10-13 01:46:44.562596] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.011 [2024-10-13 01:46:44.562608] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.011 [2024-10-13 01:46:44.562637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.011 qpair failed and we were unable to recover it. 00:35:59.011 [2024-10-13 01:46:44.572467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.011 [2024-10-13 01:46:44.572595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.011 [2024-10-13 01:46:44.572621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.011 [2024-10-13 01:46:44.572640] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.011 [2024-10-13 01:46:44.572654] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.011 [2024-10-13 01:46:44.572682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.011 qpair failed and we were unable to recover it. 
00:35:59.011 [2024-10-13 01:46:44.582457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.011 [2024-10-13 01:46:44.582555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.011 [2024-10-13 01:46:44.582579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.011 [2024-10-13 01:46:44.582593] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.011 [2024-10-13 01:46:44.582605] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.011 [2024-10-13 01:46:44.582633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.011 qpair failed and we were unable to recover it. 00:35:59.270 [2024-10-13 01:46:44.592494] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.270 [2024-10-13 01:46:44.592583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.270 [2024-10-13 01:46:44.592608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.270 [2024-10-13 01:46:44.592621] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.270 [2024-10-13 01:46:44.592634] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.270 [2024-10-13 01:46:44.592663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.270 qpair failed and we were unable to recover it. 00:35:59.270 [2024-10-13 01:46:44.602559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.270 [2024-10-13 01:46:44.602654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.270 [2024-10-13 01:46:44.602678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.270 [2024-10-13 01:46:44.602692] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.270 [2024-10-13 01:46:44.602704] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.270 [2024-10-13 01:46:44.602732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.270 qpair failed and we were unable to recover it. 
00:35:59.270 [2024-10-13 01:46:44.612551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.270 [2024-10-13 01:46:44.612633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.270 [2024-10-13 01:46:44.612656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.270 [2024-10-13 01:46:44.612670] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.270 [2024-10-13 01:46:44.612682] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.270 [2024-10-13 01:46:44.612710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.270 qpair failed and we were unable to recover it. 00:35:59.270 [2024-10-13 01:46:44.622575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.271 [2024-10-13 01:46:44.622661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.271 [2024-10-13 01:46:44.622686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.271 [2024-10-13 01:46:44.622700] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.271 [2024-10-13 01:46:44.622712] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.271 [2024-10-13 01:46:44.622740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.271 qpair failed and we were unable to recover it. 00:35:59.271 [2024-10-13 01:46:44.632600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.271 [2024-10-13 01:46:44.632684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.271 [2024-10-13 01:46:44.632709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.271 [2024-10-13 01:46:44.632723] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.271 [2024-10-13 01:46:44.632736] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.271 [2024-10-13 01:46:44.632764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.271 qpair failed and we were unable to recover it. 
00:35:59.271 [2024-10-13 01:46:44.642655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.271 [2024-10-13 01:46:44.642746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.271 [2024-10-13 01:46:44.642770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.271 [2024-10-13 01:46:44.642784] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.271 [2024-10-13 01:46:44.642796] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.271 [2024-10-13 01:46:44.642825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.271 qpair failed and we were unable to recover it. 00:35:59.271 [2024-10-13 01:46:44.652663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.271 [2024-10-13 01:46:44.652748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.271 [2024-10-13 01:46:44.652772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.271 [2024-10-13 01:46:44.652786] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.271 [2024-10-13 01:46:44.652798] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.271 [2024-10-13 01:46:44.652827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.271 qpair failed and we were unable to recover it. 00:35:59.271 [2024-10-13 01:46:44.662693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.271 [2024-10-13 01:46:44.662777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.271 [2024-10-13 01:46:44.662802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.271 [2024-10-13 01:46:44.662821] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.271 [2024-10-13 01:46:44.662834] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.271 [2024-10-13 01:46:44.662862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.271 qpair failed and we were unable to recover it. 
00:35:59.271 [2024-10-13 01:46:44.672742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.271 [2024-10-13 01:46:44.672865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.271 [2024-10-13 01:46:44.672891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.271 [2024-10-13 01:46:44.672905] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.271 [2024-10-13 01:46:44.672917] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.271 [2024-10-13 01:46:44.672945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.271 qpair failed and we were unable to recover it. 00:35:59.271 [2024-10-13 01:46:44.682769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.271 [2024-10-13 01:46:44.682860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.271 [2024-10-13 01:46:44.682884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.271 [2024-10-13 01:46:44.682897] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.271 [2024-10-13 01:46:44.682910] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.271 [2024-10-13 01:46:44.682938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.271 qpair failed and we were unable to recover it. 00:35:59.271 [2024-10-13 01:46:44.692806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.271 [2024-10-13 01:46:44.692906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.271 [2024-10-13 01:46:44.692935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.271 [2024-10-13 01:46:44.692951] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.271 [2024-10-13 01:46:44.692963] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.271 [2024-10-13 01:46:44.692994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.271 qpair failed and we were unable to recover it. 
00:35:59.271 [2024-10-13 01:46:44.702877] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.271 [2024-10-13 01:46:44.702983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.271 [2024-10-13 01:46:44.703011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.271 [2024-10-13 01:46:44.703025] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.271 [2024-10-13 01:46:44.703037] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.271 [2024-10-13 01:46:44.703065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.271 qpair failed and we were unable to recover it. 00:35:59.271 [2024-10-13 01:46:44.712826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.271 [2024-10-13 01:46:44.712933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.271 [2024-10-13 01:46:44.712959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.271 [2024-10-13 01:46:44.712974] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.271 [2024-10-13 01:46:44.712986] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.271 [2024-10-13 01:46:44.713014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.271 qpair failed and we were unable to recover it. 00:35:59.271 [2024-10-13 01:46:44.722879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.271 [2024-10-13 01:46:44.722980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.271 [2024-10-13 01:46:44.723006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.271 [2024-10-13 01:46:44.723020] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.271 [2024-10-13 01:46:44.723032] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.271 [2024-10-13 01:46:44.723061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.271 qpair failed and we were unable to recover it. 
00:35:59.271 [2024-10-13 01:46:44.732919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.271 [2024-10-13 01:46:44.733001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.271 [2024-10-13 01:46:44.733024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.271 [2024-10-13 01:46:44.733038] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.271 [2024-10-13 01:46:44.733050] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.271 [2024-10-13 01:46:44.733078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.271 qpair failed and we were unable to recover it. 00:35:59.271 [2024-10-13 01:46:44.743002] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.271 [2024-10-13 01:46:44.743089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.271 [2024-10-13 01:46:44.743113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.271 [2024-10-13 01:46:44.743127] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.271 [2024-10-13 01:46:44.743139] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.271 [2024-10-13 01:46:44.743167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.271 qpair failed and we were unable to recover it. 00:35:59.271 [2024-10-13 01:46:44.752942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.271 [2024-10-13 01:46:44.753028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.271 [2024-10-13 01:46:44.753052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.272 [2024-10-13 01:46:44.753071] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.272 [2024-10-13 01:46:44.753084] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.272 [2024-10-13 01:46:44.753113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.272 qpair failed and we were unable to recover it. 
00:35:59.272 [2024-10-13 01:46:44.762973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.272 [2024-10-13 01:46:44.763063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.272 [2024-10-13 01:46:44.763088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.272 [2024-10-13 01:46:44.763101] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.272 [2024-10-13 01:46:44.763113] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.272 [2024-10-13 01:46:44.763141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.272 qpair failed and we were unable to recover it. 00:35:59.272 [2024-10-13 01:46:44.773085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.272 [2024-10-13 01:46:44.773167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.272 [2024-10-13 01:46:44.773191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.272 [2024-10-13 01:46:44.773205] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.272 [2024-10-13 01:46:44.773217] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.272 [2024-10-13 01:46:44.773245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.272 qpair failed and we were unable to recover it. 00:35:59.272 [2024-10-13 01:46:44.783018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.272 [2024-10-13 01:46:44.783099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.272 [2024-10-13 01:46:44.783123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.272 [2024-10-13 01:46:44.783137] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.272 [2024-10-13 01:46:44.783150] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.272 [2024-10-13 01:46:44.783177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.272 qpair failed and we were unable to recover it. 
00:35:59.272 [2024-10-13 01:46:44.793064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.272 [2024-10-13 01:46:44.793148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.272 [2024-10-13 01:46:44.793172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.272 [2024-10-13 01:46:44.793185] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.272 [2024-10-13 01:46:44.793198] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.272 [2024-10-13 01:46:44.793226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.272 qpair failed and we were unable to recover it. 00:35:59.272 [2024-10-13 01:46:44.803098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.272 [2024-10-13 01:46:44.803190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.272 [2024-10-13 01:46:44.803213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.272 [2024-10-13 01:46:44.803227] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.272 [2024-10-13 01:46:44.803239] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.272 [2024-10-13 01:46:44.803268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.272 qpair failed and we were unable to recover it. 00:35:59.272 [2024-10-13 01:46:44.813164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.272 [2024-10-13 01:46:44.813258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.272 [2024-10-13 01:46:44.813282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.272 [2024-10-13 01:46:44.813296] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.272 [2024-10-13 01:46:44.813308] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.272 [2024-10-13 01:46:44.813336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.272 qpair failed and we were unable to recover it. 
00:35:59.272 [2024-10-13 01:46:44.823146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.272 [2024-10-13 01:46:44.823242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.272 [2024-10-13 01:46:44.823266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.272 [2024-10-13 01:46:44.823280] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.272 [2024-10-13 01:46:44.823293] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.272 [2024-10-13 01:46:44.823320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.272 qpair failed and we were unable to recover it. 00:35:59.272 [2024-10-13 01:46:44.833156] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.272 [2024-10-13 01:46:44.833240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.272 [2024-10-13 01:46:44.833263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.272 [2024-10-13 01:46:44.833276] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.272 [2024-10-13 01:46:44.833289] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.272 [2024-10-13 01:46:44.833318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.272 qpair failed and we were unable to recover it. 00:35:59.272 [2024-10-13 01:46:44.843194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.272 [2024-10-13 01:46:44.843289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.272 [2024-10-13 01:46:44.843313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.272 [2024-10-13 01:46:44.843333] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.272 [2024-10-13 01:46:44.843346] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.272 [2024-10-13 01:46:44.843373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.272 qpair failed and we were unable to recover it. 
00:35:59.531 [2024-10-13 01:46:44.853216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.531 [2024-10-13 01:46:44.853304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.531 [2024-10-13 01:46:44.853328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.531 [2024-10-13 01:46:44.853342] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.531 [2024-10-13 01:46:44.853354] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.531 [2024-10-13 01:46:44.853382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.531 qpair failed and we were unable to recover it. 00:35:59.531 [2024-10-13 01:46:44.863328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.531 [2024-10-13 01:46:44.863427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.531 [2024-10-13 01:46:44.863452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.531 [2024-10-13 01:46:44.863465] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.531 [2024-10-13 01:46:44.863486] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.531 [2024-10-13 01:46:44.863515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.531 qpair failed and we were unable to recover it. 00:35:59.531 [2024-10-13 01:46:44.873285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.531 [2024-10-13 01:46:44.873373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.531 [2024-10-13 01:46:44.873398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.531 [2024-10-13 01:46:44.873412] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.531 [2024-10-13 01:46:44.873424] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.531 [2024-10-13 01:46:44.873452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.531 qpair failed and we were unable to recover it. 
00:35:59.531 [2024-10-13 01:46:44.883367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.531 [2024-10-13 01:46:44.883485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.531 [2024-10-13 01:46:44.883510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.531 [2024-10-13 01:46:44.883524] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.531 [2024-10-13 01:46:44.883536] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.531 [2024-10-13 01:46:44.883565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.531 qpair failed and we were unable to recover it. 00:35:59.531 [2024-10-13 01:46:44.893365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.531 [2024-10-13 01:46:44.893458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.531 [2024-10-13 01:46:44.893493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.531 [2024-10-13 01:46:44.893509] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.531 [2024-10-13 01:46:44.893522] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.531 [2024-10-13 01:46:44.893552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.531 qpair failed and we were unable to recover it. 00:35:59.531 [2024-10-13 01:46:44.903409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.531 [2024-10-13 01:46:44.903497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.531 [2024-10-13 01:46:44.903522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.531 [2024-10-13 01:46:44.903536] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.531 [2024-10-13 01:46:44.903548] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.531 [2024-10-13 01:46:44.903577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.531 qpair failed and we were unable to recover it. 
00:35:59.531 [2024-10-13 01:46:44.913426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.531 [2024-10-13 01:46:44.913513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.531 [2024-10-13 01:46:44.913537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.532 [2024-10-13 01:46:44.913551] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.532 [2024-10-13 01:46:44.913563] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.532 [2024-10-13 01:46:44.913592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.532 qpair failed and we were unable to recover it. 00:35:59.532 [2024-10-13 01:46:44.923449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.532 [2024-10-13 01:46:44.923574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.532 [2024-10-13 01:46:44.923602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.532 [2024-10-13 01:46:44.923616] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.532 [2024-10-13 01:46:44.923628] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.532 [2024-10-13 01:46:44.923656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.532 qpair failed and we were unable to recover it. 00:35:59.532 [2024-10-13 01:46:44.933479] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.532 [2024-10-13 01:46:44.933567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.532 [2024-10-13 01:46:44.933596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.532 [2024-10-13 01:46:44.933611] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.532 [2024-10-13 01:46:44.933623] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.532 [2024-10-13 01:46:44.933651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.532 qpair failed and we were unable to recover it. 
00:35:59.532 [2024-10-13 01:46:44.943502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.532 [2024-10-13 01:46:44.943581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.532 [2024-10-13 01:46:44.943605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.532 [2024-10-13 01:46:44.943619] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.532 [2024-10-13 01:46:44.943631] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.532 [2024-10-13 01:46:44.943659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.532 qpair failed and we were unable to recover it. 00:35:59.532 [2024-10-13 01:46:44.953526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.532 [2024-10-13 01:46:44.953613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.532 [2024-10-13 01:46:44.953636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.532 [2024-10-13 01:46:44.953650] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.532 [2024-10-13 01:46:44.953662] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.532 [2024-10-13 01:46:44.953691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.532 qpair failed and we were unable to recover it. 00:35:59.532 [2024-10-13 01:46:44.963549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.532 [2024-10-13 01:46:44.963642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.532 [2024-10-13 01:46:44.963666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.532 [2024-10-13 01:46:44.963681] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.532 [2024-10-13 01:46:44.963694] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.532 [2024-10-13 01:46:44.963722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.532 qpair failed and we were unable to recover it. 
00:35:59.532 [2024-10-13 01:46:44.973572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.532 [2024-10-13 01:46:44.973657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.532 [2024-10-13 01:46:44.973682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.532 [2024-10-13 01:46:44.973696] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.532 [2024-10-13 01:46:44.973708] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.532 [2024-10-13 01:46:44.973736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.532 qpair failed and we were unable to recover it. 00:35:59.532 [2024-10-13 01:46:44.983602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.532 [2024-10-13 01:46:44.983694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.532 [2024-10-13 01:46:44.983719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.532 [2024-10-13 01:46:44.983733] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.532 [2024-10-13 01:46:44.983746] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.532 [2024-10-13 01:46:44.983781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.532 qpair failed and we were unable to recover it. 00:35:59.532 [2024-10-13 01:46:44.993631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.532 [2024-10-13 01:46:44.993743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.532 [2024-10-13 01:46:44.993766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.532 [2024-10-13 01:46:44.993780] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.532 [2024-10-13 01:46:44.993793] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.532 [2024-10-13 01:46:44.993821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.532 qpair failed and we were unable to recover it. 
00:35:59.532 [2024-10-13 01:46:45.003675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.532 [2024-10-13 01:46:45.003768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.532 [2024-10-13 01:46:45.003791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.532 [2024-10-13 01:46:45.003806] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.532 [2024-10-13 01:46:45.003817] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.532 [2024-10-13 01:46:45.003846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.532 qpair failed and we were unable to recover it. 00:35:59.532 [2024-10-13 01:46:45.013701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.532 [2024-10-13 01:46:45.013795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.532 [2024-10-13 01:46:45.013822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.532 [2024-10-13 01:46:45.013836] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.532 [2024-10-13 01:46:45.013848] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.532 [2024-10-13 01:46:45.013876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.532 qpair failed and we were unable to recover it. 00:35:59.532 [2024-10-13 01:46:45.023821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.532 [2024-10-13 01:46:45.023907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.532 [2024-10-13 01:46:45.023937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.532 [2024-10-13 01:46:45.023952] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.532 [2024-10-13 01:46:45.023964] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.532 [2024-10-13 01:46:45.023992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.532 qpair failed and we were unable to recover it. 
00:35:59.532 [2024-10-13 01:46:45.033759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.532 [2024-10-13 01:46:45.033863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.532 [2024-10-13 01:46:45.033887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.532 [2024-10-13 01:46:45.033900] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.532 [2024-10-13 01:46:45.033913] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.532 [2024-10-13 01:46:45.033941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.532 qpair failed and we were unable to recover it. 00:35:59.532 [2024-10-13 01:46:45.043782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.533 [2024-10-13 01:46:45.043874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.533 [2024-10-13 01:46:45.043898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.533 [2024-10-13 01:46:45.043913] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.533 [2024-10-13 01:46:45.043925] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.533 [2024-10-13 01:46:45.043953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.533 qpair failed and we were unable to recover it. 00:35:59.533 [2024-10-13 01:46:45.053853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.533 [2024-10-13 01:46:45.053947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.533 [2024-10-13 01:46:45.053973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.533 [2024-10-13 01:46:45.053988] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.533 [2024-10-13 01:46:45.054000] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.533 [2024-10-13 01:46:45.054028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.533 qpair failed and we were unable to recover it. 
00:35:59.533 [2024-10-13 01:46:45.063833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.533 [2024-10-13 01:46:45.063920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.533 [2024-10-13 01:46:45.063944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.533 [2024-10-13 01:46:45.063957] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.533 [2024-10-13 01:46:45.063969] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.533 [2024-10-13 01:46:45.064003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.533 qpair failed and we were unable to recover it. 00:35:59.533 [2024-10-13 01:46:45.073882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.533 [2024-10-13 01:46:45.073959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.533 [2024-10-13 01:46:45.073985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.533 [2024-10-13 01:46:45.073999] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.533 [2024-10-13 01:46:45.074011] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.533 [2024-10-13 01:46:45.074038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.533 qpair failed and we were unable to recover it. 00:35:59.533 [2024-10-13 01:46:45.083888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.533 [2024-10-13 01:46:45.083983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.533 [2024-10-13 01:46:45.084008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.533 [2024-10-13 01:46:45.084022] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.533 [2024-10-13 01:46:45.084034] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.533 [2024-10-13 01:46:45.084061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.533 qpair failed and we were unable to recover it. 
00:35:59.533 [2024-10-13 01:46:45.093977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.533 [2024-10-13 01:46:45.094074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.533 [2024-10-13 01:46:45.094104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.533 [2024-10-13 01:46:45.094119] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.533 [2024-10-13 01:46:45.094131] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.533 [2024-10-13 01:46:45.094159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.533 qpair failed and we were unable to recover it. 00:35:59.533 [2024-10-13 01:46:45.104071] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.533 [2024-10-13 01:46:45.104166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.533 [2024-10-13 01:46:45.104192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.533 [2024-10-13 01:46:45.104206] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.533 [2024-10-13 01:46:45.104218] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.533 [2024-10-13 01:46:45.104246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.533 qpair failed and we were unable to recover it. 00:35:59.792 [2024-10-13 01:46:45.114062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.792 [2024-10-13 01:46:45.114151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.792 [2024-10-13 01:46:45.114181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.792 [2024-10-13 01:46:45.114203] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.792 [2024-10-13 01:46:45.114215] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.792 [2024-10-13 01:46:45.114244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.792 qpair failed and we were unable to recover it. 
00:35:59.792 [2024-10-13 01:46:45.124057] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.792 [2024-10-13 01:46:45.124197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.792 [2024-10-13 01:46:45.124223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.792 [2024-10-13 01:46:45.124238] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.792 [2024-10-13 01:46:45.124250] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.792 [2024-10-13 01:46:45.124278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.792 qpair failed and we were unable to recover it. 00:35:59.792 [2024-10-13 01:46:45.134077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.792 [2024-10-13 01:46:45.134198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.792 [2024-10-13 01:46:45.134224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.792 [2024-10-13 01:46:45.134238] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.792 [2024-10-13 01:46:45.134251] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.792 [2024-10-13 01:46:45.134279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.792 qpair failed and we were unable to recover it. 00:35:59.792 [2024-10-13 01:46:45.144066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.792 [2024-10-13 01:46:45.144156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.792 [2024-10-13 01:46:45.144180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.792 [2024-10-13 01:46:45.144193] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.792 [2024-10-13 01:46:45.144206] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.792 [2024-10-13 01:46:45.144234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.792 qpair failed and we were unable to recover it. 
00:35:59.792 [2024-10-13 01:46:45.154093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.792 [2024-10-13 01:46:45.154230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.792 [2024-10-13 01:46:45.154255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.792 [2024-10-13 01:46:45.154270] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.792 [2024-10-13 01:46:45.154282] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.792 [2024-10-13 01:46:45.154316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.792 qpair failed and we were unable to recover it. 00:35:59.792 [2024-10-13 01:46:45.164185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.792 [2024-10-13 01:46:45.164288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.792 [2024-10-13 01:46:45.164314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.792 [2024-10-13 01:46:45.164328] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.792 [2024-10-13 01:46:45.164340] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.792 [2024-10-13 01:46:45.164369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.792 qpair failed and we were unable to recover it. 00:35:59.792 [2024-10-13 01:46:45.174184] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.792 [2024-10-13 01:46:45.174284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.792 [2024-10-13 01:46:45.174310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.792 [2024-10-13 01:46:45.174325] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.792 [2024-10-13 01:46:45.174337] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.792 [2024-10-13 01:46:45.174365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.792 qpair failed and we were unable to recover it. 
00:35:59.792 [2024-10-13 01:46:45.184279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.792 [2024-10-13 01:46:45.184369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.792 [2024-10-13 01:46:45.184394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.792 [2024-10-13 01:46:45.184408] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.792 [2024-10-13 01:46:45.184420] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.792 [2024-10-13 01:46:45.184447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.792 qpair failed and we were unable to recover it. 00:35:59.792 [2024-10-13 01:46:45.194326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.792 [2024-10-13 01:46:45.194419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.792 [2024-10-13 01:46:45.194445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.792 [2024-10-13 01:46:45.194459] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.792 [2024-10-13 01:46:45.194478] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.792 [2024-10-13 01:46:45.194508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.792 qpair failed and we were unable to recover it. 00:35:59.792 [2024-10-13 01:46:45.204340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.792 [2024-10-13 01:46:45.204435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.792 [2024-10-13 01:46:45.204464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.792 [2024-10-13 01:46:45.204487] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.792 [2024-10-13 01:46:45.204500] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.793 [2024-10-13 01:46:45.204529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.793 qpair failed and we were unable to recover it. 
00:35:59.793 [2024-10-13 01:46:45.214253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.793 [2024-10-13 01:46:45.214353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.793 [2024-10-13 01:46:45.214378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.793 [2024-10-13 01:46:45.214392] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.793 [2024-10-13 01:46:45.214404] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.793 [2024-10-13 01:46:45.214432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.793 qpair failed and we were unable to recover it. 00:35:59.793 [2024-10-13 01:46:45.224319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.793 [2024-10-13 01:46:45.224402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.793 [2024-10-13 01:46:45.224426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.793 [2024-10-13 01:46:45.224441] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.793 [2024-10-13 01:46:45.224453] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.793 [2024-10-13 01:46:45.224490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.793 qpair failed and we were unable to recover it. 00:35:59.793 [2024-10-13 01:46:45.234351] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.793 [2024-10-13 01:46:45.234438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.793 [2024-10-13 01:46:45.234463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.793 [2024-10-13 01:46:45.234485] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.793 [2024-10-13 01:46:45.234499] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.793 [2024-10-13 01:46:45.234527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.793 qpair failed and we were unable to recover it. 
00:35:59.793 [2024-10-13 01:46:45.244378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.793 [2024-10-13 01:46:45.244479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.793 [2024-10-13 01:46:45.244506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.793 [2024-10-13 01:46:45.244520] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.793 [2024-10-13 01:46:45.244541] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.793 [2024-10-13 01:46:45.244574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.793 qpair failed and we were unable to recover it. 00:35:59.793 [2024-10-13 01:46:45.254378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.793 [2024-10-13 01:46:45.254477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.793 [2024-10-13 01:46:45.254502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.793 [2024-10-13 01:46:45.254516] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.793 [2024-10-13 01:46:45.254528] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.793 [2024-10-13 01:46:45.254557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.793 qpair failed and we were unable to recover it. 00:35:59.793 [2024-10-13 01:46:45.264409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.793 [2024-10-13 01:46:45.264507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.793 [2024-10-13 01:46:45.264530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.793 [2024-10-13 01:46:45.264545] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.793 [2024-10-13 01:46:45.264557] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.793 [2024-10-13 01:46:45.264586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.793 qpair failed and we were unable to recover it. 
00:35:59.793 [2024-10-13 01:46:45.274445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.793 [2024-10-13 01:46:45.274547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.793 [2024-10-13 01:46:45.274573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.793 [2024-10-13 01:46:45.274587] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.793 [2024-10-13 01:46:45.274599] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.793 [2024-10-13 01:46:45.274627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.793 qpair failed and we were unable to recover it. 00:35:59.793 [2024-10-13 01:46:45.284497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.793 [2024-10-13 01:46:45.284610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.793 [2024-10-13 01:46:45.284635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.793 [2024-10-13 01:46:45.284650] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.793 [2024-10-13 01:46:45.284662] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.793 [2024-10-13 01:46:45.284690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.793 qpair failed and we were unable to recover it. 00:35:59.793 [2024-10-13 01:46:45.294563] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.793 [2024-10-13 01:46:45.294661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.793 [2024-10-13 01:46:45.294691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.793 [2024-10-13 01:46:45.294706] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.793 [2024-10-13 01:46:45.294718] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.793 [2024-10-13 01:46:45.294746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.793 qpair failed and we were unable to recover it. 
00:35:59.793 [2024-10-13 01:46:45.304526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.793 [2024-10-13 01:46:45.304620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.793 [2024-10-13 01:46:45.304645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.793 [2024-10-13 01:46:45.304659] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.793 [2024-10-13 01:46:45.304671] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.793 [2024-10-13 01:46:45.304700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.793 qpair failed and we were unable to recover it. 00:35:59.793 [2024-10-13 01:46:45.314555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.793 [2024-10-13 01:46:45.314647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.793 [2024-10-13 01:46:45.314672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.793 [2024-10-13 01:46:45.314687] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.793 [2024-10-13 01:46:45.314699] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.793 [2024-10-13 01:46:45.314726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.793 qpair failed and we were unable to recover it. 00:35:59.793 [2024-10-13 01:46:45.324628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.793 [2024-10-13 01:46:45.324729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.793 [2024-10-13 01:46:45.324755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.793 [2024-10-13 01:46:45.324769] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.793 [2024-10-13 01:46:45.324781] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.793 [2024-10-13 01:46:45.324809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.793 qpair failed and we were unable to recover it. 
00:35:59.793 [2024-10-13 01:46:45.334627] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.793 [2024-10-13 01:46:45.334723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.793 [2024-10-13 01:46:45.334748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.793 [2024-10-13 01:46:45.334763] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.793 [2024-10-13 01:46:45.334775] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.793 [2024-10-13 01:46:45.334808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.793 qpair failed and we were unable to recover it. 00:35:59.793 [2024-10-13 01:46:45.344638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.794 [2024-10-13 01:46:45.344730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.794 [2024-10-13 01:46:45.344765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.794 [2024-10-13 01:46:45.344781] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.794 [2024-10-13 01:46:45.344794] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.794 [2024-10-13 01:46:45.344822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.794 qpair failed and we were unable to recover it. 00:35:59.794 [2024-10-13 01:46:45.354656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.794 [2024-10-13 01:46:45.354749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.794 [2024-10-13 01:46:45.354774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.794 [2024-10-13 01:46:45.354788] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.794 [2024-10-13 01:46:45.354800] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.794 [2024-10-13 01:46:45.354828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.794 qpair failed and we were unable to recover it. 
00:35:59.794 [2024-10-13 01:46:45.364748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.794 [2024-10-13 01:46:45.364842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.794 [2024-10-13 01:46:45.364866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.794 [2024-10-13 01:46:45.364880] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.794 [2024-10-13 01:46:45.364893] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:35:59.794 [2024-10-13 01:46:45.364920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:35:59.794 qpair failed and we were unable to recover it. 00:36:00.055 [2024-10-13 01:46:45.374732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.055 [2024-10-13 01:46:45.374823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.055 [2024-10-13 01:46:45.374853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.055 [2024-10-13 01:46:45.374867] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.055 [2024-10-13 01:46:45.374880] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.055 [2024-10-13 01:46:45.374908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.055 qpair failed and we were unable to recover it. 00:36:00.055 [2024-10-13 01:46:45.384815] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.055 [2024-10-13 01:46:45.384929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.055 [2024-10-13 01:46:45.384964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.055 [2024-10-13 01:46:45.384979] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.055 [2024-10-13 01:46:45.384991] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.055 [2024-10-13 01:46:45.385020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.055 qpair failed and we were unable to recover it. 
00:36:00.055 [2024-10-13 01:46:45.394792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.055 [2024-10-13 01:46:45.394919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.055 [2024-10-13 01:46:45.394945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.055 [2024-10-13 01:46:45.394960] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.055 [2024-10-13 01:46:45.394972] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.055 [2024-10-13 01:46:45.395001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.055 qpair failed and we were unable to recover it. 00:36:00.055 [2024-10-13 01:46:45.404832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.055 [2024-10-13 01:46:45.404953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.055 [2024-10-13 01:46:45.404979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.055 [2024-10-13 01:46:45.404993] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.055 [2024-10-13 01:46:45.405005] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.055 [2024-10-13 01:46:45.405033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.055 qpair failed and we were unable to recover it. 00:36:00.055 [2024-10-13 01:46:45.414846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.055 [2024-10-13 01:46:45.414940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.055 [2024-10-13 01:46:45.414965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.055 [2024-10-13 01:46:45.414979] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.055 [2024-10-13 01:46:45.414991] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.055 [2024-10-13 01:46:45.415019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.055 qpair failed and we were unable to recover it. 
00:36:00.055 [2024-10-13 01:46:45.424854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.055 [2024-10-13 01:46:45.424945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.055 [2024-10-13 01:46:45.424970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.055 [2024-10-13 01:46:45.424985] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.055 [2024-10-13 01:46:45.425002] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.055 [2024-10-13 01:46:45.425031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.055 qpair failed and we were unable to recover it. 00:36:00.055 [2024-10-13 01:46:45.435007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.055 [2024-10-13 01:46:45.435129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.055 [2024-10-13 01:46:45.435155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.055 [2024-10-13 01:46:45.435169] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.055 [2024-10-13 01:46:45.435182] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.055 [2024-10-13 01:46:45.435210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.055 qpair failed and we were unable to recover it. 00:36:00.055 [2024-10-13 01:46:45.444960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.055 [2024-10-13 01:46:45.445054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.055 [2024-10-13 01:46:45.445079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.055 [2024-10-13 01:46:45.445092] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.055 [2024-10-13 01:46:45.445105] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.055 [2024-10-13 01:46:45.445132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.055 qpair failed and we were unable to recover it. 
00:36:00.055 [2024-10-13 01:46:45.454941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.056 [2024-10-13 01:46:45.455031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.056 [2024-10-13 01:46:45.455054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.056 [2024-10-13 01:46:45.455068] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.056 [2024-10-13 01:46:45.455080] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.056 [2024-10-13 01:46:45.455109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.056 qpair failed and we were unable to recover it. 00:36:00.056 [2024-10-13 01:46:45.464965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.056 [2024-10-13 01:46:45.465046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.056 [2024-10-13 01:46:45.465070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.056 [2024-10-13 01:46:45.465084] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.056 [2024-10-13 01:46:45.465096] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.056 [2024-10-13 01:46:45.465124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.056 qpair failed and we were unable to recover it. 00:36:00.056 [2024-10-13 01:46:45.474997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.056 [2024-10-13 01:46:45.475096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.056 [2024-10-13 01:46:45.475122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.056 [2024-10-13 01:46:45.475137] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.056 [2024-10-13 01:46:45.475149] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.056 [2024-10-13 01:46:45.475177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.056 qpair failed and we were unable to recover it. 
00:36:00.056 [2024-10-13 01:46:45.485069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.056 [2024-10-13 01:46:45.485213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.056 [2024-10-13 01:46:45.485238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.056 [2024-10-13 01:46:45.485252] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.056 [2024-10-13 01:46:45.485264] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.056 [2024-10-13 01:46:45.485292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.056 qpair failed and we were unable to recover it. 00:36:00.056 [2024-10-13 01:46:45.495048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.056 [2024-10-13 01:46:45.495142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.056 [2024-10-13 01:46:45.495168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.056 [2024-10-13 01:46:45.495182] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.056 [2024-10-13 01:46:45.495194] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.056 [2024-10-13 01:46:45.495221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.056 qpair failed and we were unable to recover it. 00:36:00.056 [2024-10-13 01:46:45.505121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.056 [2024-10-13 01:46:45.505211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.056 [2024-10-13 01:46:45.505235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.056 [2024-10-13 01:46:45.505249] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.056 [2024-10-13 01:46:45.505261] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.056 [2024-10-13 01:46:45.505290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.056 qpair failed and we were unable to recover it. 
00:36:00.056 [2024-10-13 01:46:45.515108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.056 [2024-10-13 01:46:45.515203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.056 [2024-10-13 01:46:45.515228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.056 [2024-10-13 01:46:45.515243] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.056 [2024-10-13 01:46:45.515261] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.056 [2024-10-13 01:46:45.515290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.056 qpair failed and we were unable to recover it. 00:36:00.056 [2024-10-13 01:46:45.525191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.056 [2024-10-13 01:46:45.525287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.056 [2024-10-13 01:46:45.525313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.056 [2024-10-13 01:46:45.525327] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.056 [2024-10-13 01:46:45.525339] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.056 [2024-10-13 01:46:45.525367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.056 qpair failed and we were unable to recover it. 00:36:00.056 [2024-10-13 01:46:45.535189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.056 [2024-10-13 01:46:45.535301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.056 [2024-10-13 01:46:45.535327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.056 [2024-10-13 01:46:45.535341] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.056 [2024-10-13 01:46:45.535353] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.056 [2024-10-13 01:46:45.535381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.056 qpair failed and we were unable to recover it. 
00:36:00.056 [2024-10-13 01:46:45.545228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.056 [2024-10-13 01:46:45.545318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.056 [2024-10-13 01:46:45.545342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.056 [2024-10-13 01:46:45.545356] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.056 [2024-10-13 01:46:45.545368] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.056 [2024-10-13 01:46:45.545396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.056 qpair failed and we were unable to recover it. 00:36:00.056 [2024-10-13 01:46:45.555235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.056 [2024-10-13 01:46:45.555325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.056 [2024-10-13 01:46:45.555349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.056 [2024-10-13 01:46:45.555362] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.056 [2024-10-13 01:46:45.555374] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.056 [2024-10-13 01:46:45.555402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.056 qpair failed and we were unable to recover it. 00:36:00.056 [2024-10-13 01:46:45.565280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.056 [2024-10-13 01:46:45.565376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.056 [2024-10-13 01:46:45.565401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.056 [2024-10-13 01:46:45.565415] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.056 [2024-10-13 01:46:45.565427] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.056 [2024-10-13 01:46:45.565455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.056 qpair failed and we were unable to recover it. 
00:36:00.056 [2024-10-13 01:46:45.575275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.056 [2024-10-13 01:46:45.575374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.056 [2024-10-13 01:46:45.575398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.056 [2024-10-13 01:46:45.575412] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.056 [2024-10-13 01:46:45.575424] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.056 [2024-10-13 01:46:45.575451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.056 qpair failed and we were unable to recover it. 00:36:00.056 [2024-10-13 01:46:45.585318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.056 [2024-10-13 01:46:45.585401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.056 [2024-10-13 01:46:45.585425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.056 [2024-10-13 01:46:45.585439] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.056 [2024-10-13 01:46:45.585452] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.056 [2024-10-13 01:46:45.585487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.056 qpair failed and we were unable to recover it. 00:36:00.056 [2024-10-13 01:46:45.595339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.057 [2024-10-13 01:46:45.595420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.057 [2024-10-13 01:46:45.595444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.057 [2024-10-13 01:46:45.595458] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.057 [2024-10-13 01:46:45.595475] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.057 [2024-10-13 01:46:45.595506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.057 qpair failed and we were unable to recover it. 
00:36:00.057 [2024-10-13 01:46:45.605404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.057 [2024-10-13 01:46:45.605517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.057 [2024-10-13 01:46:45.605541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.057 [2024-10-13 01:46:45.605555] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.057 [2024-10-13 01:46:45.605573] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.057 [2024-10-13 01:46:45.605602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.057 qpair failed and we were unable to recover it. 00:36:00.057 [2024-10-13 01:46:45.615395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.057 [2024-10-13 01:46:45.615527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.057 [2024-10-13 01:46:45.615552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.057 [2024-10-13 01:46:45.615565] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.057 [2024-10-13 01:46:45.615578] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.057 [2024-10-13 01:46:45.615606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.057 qpair failed and we were unable to recover it. 00:36:00.057 [2024-10-13 01:46:45.625462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.057 [2024-10-13 01:46:45.625565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.057 [2024-10-13 01:46:45.625589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.057 [2024-10-13 01:46:45.625604] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.057 [2024-10-13 01:46:45.625617] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.057 [2024-10-13 01:46:45.625645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.057 qpair failed and we were unable to recover it. 
00:36:00.317 [2024-10-13 01:46:45.635571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.317 [2024-10-13 01:46:45.635659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.317 [2024-10-13 01:46:45.635684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.317 [2024-10-13 01:46:45.635698] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.317 [2024-10-13 01:46:45.635711] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.317 [2024-10-13 01:46:45.635739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.317 qpair failed and we were unable to recover it. 00:36:00.317 [2024-10-13 01:46:45.645518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.317 [2024-10-13 01:46:45.645616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.317 [2024-10-13 01:46:45.645644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.317 [2024-10-13 01:46:45.645661] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.317 [2024-10-13 01:46:45.645673] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.317 [2024-10-13 01:46:45.645703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.317 qpair failed and we were unable to recover it. 00:36:00.317 [2024-10-13 01:46:45.655524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.317 [2024-10-13 01:46:45.655654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.317 [2024-10-13 01:46:45.655679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.317 [2024-10-13 01:46:45.655694] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.317 [2024-10-13 01:46:45.655706] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.317 [2024-10-13 01:46:45.655735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.317 qpair failed and we were unable to recover it. 
00:36:00.317 [2024-10-13 01:46:45.665582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.317 [2024-10-13 01:46:45.665700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.317 [2024-10-13 01:46:45.665725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.317 [2024-10-13 01:46:45.665739] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.317 [2024-10-13 01:46:45.665751] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.317 [2024-10-13 01:46:45.665779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.317 qpair failed and we were unable to recover it. 00:36:00.317 [2024-10-13 01:46:45.675614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.317 [2024-10-13 01:46:45.675742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.317 [2024-10-13 01:46:45.675767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.317 [2024-10-13 01:46:45.675782] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.317 [2024-10-13 01:46:45.675794] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.317 [2024-10-13 01:46:45.675822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.317 qpair failed and we were unable to recover it. 00:36:00.317 [2024-10-13 01:46:45.685635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.317 [2024-10-13 01:46:45.685730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.317 [2024-10-13 01:46:45.685756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.317 [2024-10-13 01:46:45.685770] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.317 [2024-10-13 01:46:45.685782] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.317 [2024-10-13 01:46:45.685809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.317 qpair failed and we were unable to recover it. 
00:36:00.317 [2024-10-13 01:46:45.695681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.317 [2024-10-13 01:46:45.695810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.317 [2024-10-13 01:46:45.695834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.317 [2024-10-13 01:46:45.695848] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.317 [2024-10-13 01:46:45.695866] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.317 [2024-10-13 01:46:45.695895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.317 qpair failed and we were unable to recover it. 00:36:00.317 [2024-10-13 01:46:45.705702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.317 [2024-10-13 01:46:45.705795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.317 [2024-10-13 01:46:45.705820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.317 [2024-10-13 01:46:45.705834] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.317 [2024-10-13 01:46:45.705847] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.317 [2024-10-13 01:46:45.705874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.317 qpair failed and we were unable to recover it. 00:36:00.317 [2024-10-13 01:46:45.715698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.317 [2024-10-13 01:46:45.715831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.317 [2024-10-13 01:46:45.715855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.317 [2024-10-13 01:46:45.715869] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.317 [2024-10-13 01:46:45.715881] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.317 [2024-10-13 01:46:45.715909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.317 qpair failed and we were unable to recover it. 
00:36:00.317 [2024-10-13 01:46:45.725729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.317 [2024-10-13 01:46:45.725820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.317 [2024-10-13 01:46:45.725845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.317 [2024-10-13 01:46:45.725859] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.318 [2024-10-13 01:46:45.725871] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.318 [2024-10-13 01:46:45.725899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.318 qpair failed and we were unable to recover it. 00:36:00.318 [2024-10-13 01:46:45.735851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.318 [2024-10-13 01:46:45.735940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.318 [2024-10-13 01:46:45.735964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.318 [2024-10-13 01:46:45.735978] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.318 [2024-10-13 01:46:45.735991] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.318 [2024-10-13 01:46:45.736018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.318 qpair failed and we were unable to recover it. 00:36:00.318 [2024-10-13 01:46:45.745795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.318 [2024-10-13 01:46:45.745876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.318 [2024-10-13 01:46:45.745900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.318 [2024-10-13 01:46:45.745914] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.318 [2024-10-13 01:46:45.745926] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.318 [2024-10-13 01:46:45.745954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.318 qpair failed and we were unable to recover it. 
00:36:00.318 [2024-10-13 01:46:45.755795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.318 [2024-10-13 01:46:45.755892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.318 [2024-10-13 01:46:45.755917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.318 [2024-10-13 01:46:45.755931] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.318 [2024-10-13 01:46:45.755943] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.318 [2024-10-13 01:46:45.755971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.318 qpair failed and we were unable to recover it. 00:36:00.318 [2024-10-13 01:46:45.765842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.318 [2024-10-13 01:46:45.765938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.318 [2024-10-13 01:46:45.765963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.318 [2024-10-13 01:46:45.765977] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.318 [2024-10-13 01:46:45.765989] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.318 [2024-10-13 01:46:45.766017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.318 qpair failed and we were unable to recover it. 00:36:00.318 [2024-10-13 01:46:45.775843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.318 [2024-10-13 01:46:45.775936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.318 [2024-10-13 01:46:45.775960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.318 [2024-10-13 01:46:45.775974] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.318 [2024-10-13 01:46:45.775986] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.318 [2024-10-13 01:46:45.776014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.318 qpair failed and we were unable to recover it. 
00:36:00.318 [2024-10-13 01:46:45.785907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.318 [2024-10-13 01:46:45.785995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.318 [2024-10-13 01:46:45.786023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.318 [2024-10-13 01:46:45.786044] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.318 [2024-10-13 01:46:45.786058] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.318 [2024-10-13 01:46:45.786086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.318 qpair failed and we were unable to recover it. 00:36:00.318 [2024-10-13 01:46:45.795928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.318 [2024-10-13 01:46:45.796008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.318 [2024-10-13 01:46:45.796033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.318 [2024-10-13 01:46:45.796046] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.318 [2024-10-13 01:46:45.796058] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.318 [2024-10-13 01:46:45.796087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.318 qpair failed and we were unable to recover it. 00:36:00.318 [2024-10-13 01:46:45.805945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.318 [2024-10-13 01:46:45.806065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.318 [2024-10-13 01:46:45.806090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.318 [2024-10-13 01:46:45.806104] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.318 [2024-10-13 01:46:45.806116] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.318 [2024-10-13 01:46:45.806143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.318 qpair failed and we were unable to recover it. 
00:36:00.318 [2024-10-13 01:46:45.815975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.318 [2024-10-13 01:46:45.816070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.318 [2024-10-13 01:46:45.816095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.318 [2024-10-13 01:46:45.816108] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.318 [2024-10-13 01:46:45.816121] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.318 [2024-10-13 01:46:45.816148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.318 qpair failed and we were unable to recover it. 00:36:00.318 [2024-10-13 01:46:45.826000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.318 [2024-10-13 01:46:45.826085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.318 [2024-10-13 01:46:45.826110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.318 [2024-10-13 01:46:45.826124] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.318 [2024-10-13 01:46:45.826137] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.318 [2024-10-13 01:46:45.826166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.318 qpair failed and we were unable to recover it. 00:36:00.318 [2024-10-13 01:46:45.836030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.318 [2024-10-13 01:46:45.836118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.318 [2024-10-13 01:46:45.836143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.318 [2024-10-13 01:46:45.836157] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.318 [2024-10-13 01:46:45.836169] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.318 [2024-10-13 01:46:45.836197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.318 qpair failed and we were unable to recover it. 
00:36:00.318 [2024-10-13 01:46:45.846107] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.318 [2024-10-13 01:46:45.846201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.318 [2024-10-13 01:46:45.846225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.318 [2024-10-13 01:46:45.846239] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.318 [2024-10-13 01:46:45.846252] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.318 [2024-10-13 01:46:45.846279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.318 qpair failed and we were unable to recover it. 00:36:00.318 [2024-10-13 01:46:45.856102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.318 [2024-10-13 01:46:45.856190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.318 [2024-10-13 01:46:45.856213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.318 [2024-10-13 01:46:45.856227] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.318 [2024-10-13 01:46:45.856240] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.319 [2024-10-13 01:46:45.856268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.319 qpair failed and we were unable to recover it. 00:36:00.319 [2024-10-13 01:46:45.866108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.319 [2024-10-13 01:46:45.866230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.319 [2024-10-13 01:46:45.866255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.319 [2024-10-13 01:46:45.866270] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.319 [2024-10-13 01:46:45.866282] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.319 [2024-10-13 01:46:45.866312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.319 qpair failed and we were unable to recover it. 
00:36:00.319 [2024-10-13 01:46:45.876176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.319 [2024-10-13 01:46:45.876284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.319 [2024-10-13 01:46:45.876309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.319 [2024-10-13 01:46:45.876328] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.319 [2024-10-13 01:46:45.876341] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.319 [2024-10-13 01:46:45.876370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.319 qpair failed and we were unable to recover it. 00:36:00.319 [2024-10-13 01:46:45.886215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.319 [2024-10-13 01:46:45.886309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.319 [2024-10-13 01:46:45.886334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.319 [2024-10-13 01:46:45.886348] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.319 [2024-10-13 01:46:45.886361] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.319 [2024-10-13 01:46:45.886389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.319 qpair failed and we were unable to recover it. 00:36:00.580 [2024-10-13 01:46:45.896248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.580 [2024-10-13 01:46:45.896376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.580 [2024-10-13 01:46:45.896403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.580 [2024-10-13 01:46:45.896417] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.580 [2024-10-13 01:46:45.896429] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.580 [2024-10-13 01:46:45.896457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.580 qpair failed and we were unable to recover it. 
00:36:00.580 [2024-10-13 01:46:45.906217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.580 [2024-10-13 01:46:45.906298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.580 [2024-10-13 01:46:45.906322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.580 [2024-10-13 01:46:45.906335] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.580 [2024-10-13 01:46:45.906347] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.580 [2024-10-13 01:46:45.906375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.580 qpair failed and we were unable to recover it. 00:36:00.580 [2024-10-13 01:46:45.916259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.580 [2024-10-13 01:46:45.916348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.580 [2024-10-13 01:46:45.916372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.580 [2024-10-13 01:46:45.916385] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.580 [2024-10-13 01:46:45.916398] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.580 [2024-10-13 01:46:45.916427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.580 qpair failed and we were unable to recover it. 00:36:00.580 [2024-10-13 01:46:45.926300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.580 [2024-10-13 01:46:45.926394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.580 [2024-10-13 01:46:45.926419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.580 [2024-10-13 01:46:45.926433] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.580 [2024-10-13 01:46:45.926445] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.580 [2024-10-13 01:46:45.926484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.580 qpair failed and we were unable to recover it. 
00:36:00.580 [2024-10-13 01:46:45.936336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.580 [2024-10-13 01:46:45.936467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.580 [2024-10-13 01:46:45.936502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.580 [2024-10-13 01:46:45.936517] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.580 [2024-10-13 01:46:45.936529] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.580 [2024-10-13 01:46:45.936558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.580 qpair failed and we were unable to recover it. 00:36:00.580 [2024-10-13 01:46:45.946359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.580 [2024-10-13 01:46:45.946450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.580 [2024-10-13 01:46:45.946483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.580 [2024-10-13 01:46:45.946499] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.580 [2024-10-13 01:46:45.946512] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.580 [2024-10-13 01:46:45.946541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.580 qpair failed and we were unable to recover it. 00:36:00.580 [2024-10-13 01:46:45.956412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.580 [2024-10-13 01:46:45.956508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.580 [2024-10-13 01:46:45.956544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.580 [2024-10-13 01:46:45.956558] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.580 [2024-10-13 01:46:45.956570] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.580 [2024-10-13 01:46:45.956598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.580 qpair failed and we were unable to recover it. 
00:36:00.580 [2024-10-13 01:46:45.966410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.580 [2024-10-13 01:46:45.966505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.580 [2024-10-13 01:46:45.966539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.580 [2024-10-13 01:46:45.966558] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.580 [2024-10-13 01:46:45.966571] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.580 [2024-10-13 01:46:45.966599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.580 qpair failed and we were unable to recover it. 00:36:00.580 [2024-10-13 01:46:45.976424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.580 [2024-10-13 01:46:45.976522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.580 [2024-10-13 01:46:45.976547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.580 [2024-10-13 01:46:45.976561] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.580 [2024-10-13 01:46:45.976574] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.580 [2024-10-13 01:46:45.976601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.580 qpair failed and we were unable to recover it. 00:36:00.580 [2024-10-13 01:46:45.986487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.581 [2024-10-13 01:46:45.986610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.581 [2024-10-13 01:46:45.986634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.581 [2024-10-13 01:46:45.986648] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.581 [2024-10-13 01:46:45.986660] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.581 [2024-10-13 01:46:45.986689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.581 qpair failed and we were unable to recover it. 
00:36:00.581 [2024-10-13 01:46:45.996493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.581 [2024-10-13 01:46:45.996582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.581 [2024-10-13 01:46:45.996607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.581 [2024-10-13 01:46:45.996620] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.581 [2024-10-13 01:46:45.996632] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.581 [2024-10-13 01:46:45.996660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.581 qpair failed and we were unable to recover it. 00:36:00.581 [2024-10-13 01:46:46.006576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.581 [2024-10-13 01:46:46.006714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.581 [2024-10-13 01:46:46.006738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.581 [2024-10-13 01:46:46.006752] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.581 [2024-10-13 01:46:46.006765] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.581 [2024-10-13 01:46:46.006793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.581 qpair failed and we were unable to recover it. 00:36:00.581 [2024-10-13 01:46:46.016550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.581 [2024-10-13 01:46:46.016639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.581 [2024-10-13 01:46:46.016663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.581 [2024-10-13 01:46:46.016677] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.581 [2024-10-13 01:46:46.016690] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.581 [2024-10-13 01:46:46.016718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.581 qpair failed and we were unable to recover it. 
00:36:00.581 [2024-10-13 01:46:46.026564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.581 [2024-10-13 01:46:46.026650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.581 [2024-10-13 01:46:46.026675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.581 [2024-10-13 01:46:46.026689] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.581 [2024-10-13 01:46:46.026701] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.581 [2024-10-13 01:46:46.026730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.581 qpair failed and we were unable to recover it. 00:36:00.581 [2024-10-13 01:46:46.036604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.581 [2024-10-13 01:46:46.036686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.581 [2024-10-13 01:46:46.036711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.581 [2024-10-13 01:46:46.036726] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.581 [2024-10-13 01:46:46.036738] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.581 [2024-10-13 01:46:46.036766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.581 qpair failed and we were unable to recover it. 00:36:00.581 [2024-10-13 01:46:46.046645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.581 [2024-10-13 01:46:46.046737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.581 [2024-10-13 01:46:46.046762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.581 [2024-10-13 01:46:46.046777] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.581 [2024-10-13 01:46:46.046789] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.581 [2024-10-13 01:46:46.046817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.581 qpair failed and we were unable to recover it. 
00:36:00.581 [2024-10-13 01:46:46.056704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.581 [2024-10-13 01:46:46.056795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.581 [2024-10-13 01:46:46.056819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.581 [2024-10-13 01:46:46.056838] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.581 [2024-10-13 01:46:46.056852] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.581 [2024-10-13 01:46:46.056880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.581 qpair failed and we were unable to recover it. 00:36:00.581 [2024-10-13 01:46:46.066705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.581 [2024-10-13 01:46:46.066797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.581 [2024-10-13 01:46:46.066821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.581 [2024-10-13 01:46:46.066835] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.581 [2024-10-13 01:46:46.066847] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.581 [2024-10-13 01:46:46.066875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.581 qpair failed and we were unable to recover it. 00:36:00.581 [2024-10-13 01:46:46.076709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.581 [2024-10-13 01:46:46.076791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.581 [2024-10-13 01:46:46.076816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.581 [2024-10-13 01:46:46.076830] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.581 [2024-10-13 01:46:46.076842] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.581 [2024-10-13 01:46:46.076869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.581 qpair failed and we were unable to recover it. 
00:36:00.581 [2024-10-13 01:46:46.086805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.581 [2024-10-13 01:46:46.086901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.581 [2024-10-13 01:46:46.086926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.581 [2024-10-13 01:46:46.086941] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.581 [2024-10-13 01:46:46.086953] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.581 [2024-10-13 01:46:46.086981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.581 qpair failed and we were unable to recover it. 00:36:00.581 [2024-10-13 01:46:46.096775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.581 [2024-10-13 01:46:46.096864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.581 [2024-10-13 01:46:46.096888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.581 [2024-10-13 01:46:46.096902] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.581 [2024-10-13 01:46:46.096914] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.581 [2024-10-13 01:46:46.096942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.581 qpair failed and we were unable to recover it. 00:36:00.581 [2024-10-13 01:46:46.106799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.581 [2024-10-13 01:46:46.106882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.581 [2024-10-13 01:46:46.106907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.581 [2024-10-13 01:46:46.106920] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.581 [2024-10-13 01:46:46.106933] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.582 [2024-10-13 01:46:46.106961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.582 qpair failed and we were unable to recover it. 
00:36:00.582 [2024-10-13 01:46:46.116846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.582 [2024-10-13 01:46:46.116930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.582 [2024-10-13 01:46:46.116955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.582 [2024-10-13 01:46:46.116969] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.582 [2024-10-13 01:46:46.116982] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.582 [2024-10-13 01:46:46.117010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.582 qpair failed and we were unable to recover it. 00:36:00.582 [2024-10-13 01:46:46.126861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.582 [2024-10-13 01:46:46.126951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.582 [2024-10-13 01:46:46.126975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.582 [2024-10-13 01:46:46.126989] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.582 [2024-10-13 01:46:46.127001] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.582 [2024-10-13 01:46:46.127029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.582 qpair failed and we were unable to recover it. 00:36:00.582 [2024-10-13 01:46:46.136928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.582 [2024-10-13 01:46:46.137021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.582 [2024-10-13 01:46:46.137045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.582 [2024-10-13 01:46:46.137059] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.582 [2024-10-13 01:46:46.137072] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.582 [2024-10-13 01:46:46.137100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.582 qpair failed and we were unable to recover it. 
00:36:00.582 [2024-10-13 01:46:46.146943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.582 [2024-10-13 01:46:46.147027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.582 [2024-10-13 01:46:46.147059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.582 [2024-10-13 01:46:46.147076] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.582 [2024-10-13 01:46:46.147088] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.582 [2024-10-13 01:46:46.147117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.582 qpair failed and we were unable to recover it. 00:36:00.582 [2024-10-13 01:46:46.156932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.582 [2024-10-13 01:46:46.157014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.582 [2024-10-13 01:46:46.157042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.582 [2024-10-13 01:46:46.157057] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.582 [2024-10-13 01:46:46.157069] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.582 [2024-10-13 01:46:46.157106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.582 qpair failed and we were unable to recover it. 00:36:00.843 [2024-10-13 01:46:46.166977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.843 [2024-10-13 01:46:46.167115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.843 [2024-10-13 01:46:46.167142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.843 [2024-10-13 01:46:46.167157] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.843 [2024-10-13 01:46:46.167171] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.843 [2024-10-13 01:46:46.167199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.843 qpair failed and we were unable to recover it. 
00:36:00.843 [2024-10-13 01:46:46.177014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.843 [2024-10-13 01:46:46.177096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.843 [2024-10-13 01:46:46.177121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.843 [2024-10-13 01:46:46.177135] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.843 [2024-10-13 01:46:46.177148] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.843 [2024-10-13 01:46:46.177176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.843 qpair failed and we were unable to recover it. 00:36:00.843 [2024-10-13 01:46:46.187067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.843 [2024-10-13 01:46:46.187151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.843 [2024-10-13 01:46:46.187176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.843 [2024-10-13 01:46:46.187190] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.843 [2024-10-13 01:46:46.187203] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.843 [2024-10-13 01:46:46.187232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.843 qpair failed and we were unable to recover it. 00:36:00.843 [2024-10-13 01:46:46.197155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.843 [2024-10-13 01:46:46.197271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.843 [2024-10-13 01:46:46.197297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.843 [2024-10-13 01:46:46.197312] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.843 [2024-10-13 01:46:46.197324] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.843 [2024-10-13 01:46:46.197352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.843 qpair failed and we were unable to recover it. 
00:36:00.843 [2024-10-13 01:46:46.207085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.843 [2024-10-13 01:46:46.207177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.843 [2024-10-13 01:46:46.207201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.843 [2024-10-13 01:46:46.207215] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.843 [2024-10-13 01:46:46.207227] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.843 [2024-10-13 01:46:46.207255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.843 qpair failed and we were unable to recover it. 00:36:00.843 [2024-10-13 01:46:46.217109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.843 [2024-10-13 01:46:46.217192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.843 [2024-10-13 01:46:46.217216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.843 [2024-10-13 01:46:46.217230] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.843 [2024-10-13 01:46:46.217242] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.843 [2024-10-13 01:46:46.217270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.843 qpair failed and we were unable to recover it. 00:36:00.843 [2024-10-13 01:46:46.227147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.843 [2024-10-13 01:46:46.227238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.843 [2024-10-13 01:46:46.227267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.843 [2024-10-13 01:46:46.227281] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.843 [2024-10-13 01:46:46.227293] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.843 [2024-10-13 01:46:46.227321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.843 qpair failed and we were unable to recover it. 
00:36:00.843 [2024-10-13 01:46:46.237159] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.843 [2024-10-13 01:46:46.237258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.843 [2024-10-13 01:46:46.237287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.843 [2024-10-13 01:46:46.237303] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.843 [2024-10-13 01:46:46.237316] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.843 [2024-10-13 01:46:46.237344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.843 qpair failed and we were unable to recover it. 00:36:00.843 [2024-10-13 01:46:46.247204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.843 [2024-10-13 01:46:46.247317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.843 [2024-10-13 01:46:46.247341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.843 [2024-10-13 01:46:46.247355] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.843 [2024-10-13 01:46:46.247367] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.843 [2024-10-13 01:46:46.247396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.843 qpair failed and we were unable to recover it. 00:36:00.844 [2024-10-13 01:46:46.257232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.844 [2024-10-13 01:46:46.257355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.844 [2024-10-13 01:46:46.257380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.844 [2024-10-13 01:46:46.257393] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.844 [2024-10-13 01:46:46.257406] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.844 [2024-10-13 01:46:46.257434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.844 qpair failed and we were unable to recover it. 
00:36:00.844 [2024-10-13 01:46:46.267255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.844 [2024-10-13 01:46:46.267359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.844 [2024-10-13 01:46:46.267383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.844 [2024-10-13 01:46:46.267396] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.844 [2024-10-13 01:46:46.267409] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.844 [2024-10-13 01:46:46.267437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.844 qpair failed and we were unable to recover it. 00:36:00.844 [2024-10-13 01:46:46.277323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.844 [2024-10-13 01:46:46.277426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.844 [2024-10-13 01:46:46.277450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.844 [2024-10-13 01:46:46.277463] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.844 [2024-10-13 01:46:46.277484] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.844 [2024-10-13 01:46:46.277520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.844 qpair failed and we were unable to recover it. 00:36:00.844 [2024-10-13 01:46:46.287352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.844 [2024-10-13 01:46:46.287447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.844 [2024-10-13 01:46:46.287480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.844 [2024-10-13 01:46:46.287497] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.844 [2024-10-13 01:46:46.287510] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.844 [2024-10-13 01:46:46.287539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.844 qpair failed and we were unable to recover it. 
00:36:00.844 [2024-10-13 01:46:46.297333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.844 [2024-10-13 01:46:46.297424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.844 [2024-10-13 01:46:46.297448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.844 [2024-10-13 01:46:46.297462] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.844 [2024-10-13 01:46:46.297485] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.844 [2024-10-13 01:46:46.297514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.844 qpair failed and we were unable to recover it. 00:36:00.844 [2024-10-13 01:46:46.307381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.844 [2024-10-13 01:46:46.307464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.844 [2024-10-13 01:46:46.307496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.844 [2024-10-13 01:46:46.307510] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.844 [2024-10-13 01:46:46.307523] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.844 [2024-10-13 01:46:46.307550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.844 qpair failed and we were unable to recover it. 00:36:00.844 [2024-10-13 01:46:46.317383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.844 [2024-10-13 01:46:46.317475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.844 [2024-10-13 01:46:46.317501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.844 [2024-10-13 01:46:46.317515] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.844 [2024-10-13 01:46:46.317528] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.844 [2024-10-13 01:46:46.317556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.844 qpair failed and we were unable to recover it. 
00:36:00.844 [2024-10-13 01:46:46.327420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.844 [2024-10-13 01:46:46.327520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.844 [2024-10-13 01:46:46.327550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.844 [2024-10-13 01:46:46.327566] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.844 [2024-10-13 01:46:46.327578] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d44b60 00:36:00.844 [2024-10-13 01:46:46.327607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:36:00.844 qpair failed and we were unable to recover it. 00:36:00.844 [2024-10-13 01:46:46.337463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.844 [2024-10-13 01:46:46.337566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.844 [2024-10-13 01:46:46.337598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.844 [2024-10-13 01:46:46.337613] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.844 [2024-10-13 01:46:46.337626] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4834000b90 00:36:00.844 [2024-10-13 01:46:46.337658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.844 qpair failed and we were unable to recover it. 00:36:00.844 [2024-10-13 01:46:46.347498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.844 [2024-10-13 01:46:46.347585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.844 [2024-10-13 01:46:46.347612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.844 [2024-10-13 01:46:46.347627] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.844 [2024-10-13 01:46:46.347639] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4834000b90 00:36:00.844 [2024-10-13 01:46:46.347669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.844 qpair failed and we were unable to recover it. 
00:36:00.844 [2024-10-13 01:46:46.357541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.844 [2024-10-13 01:46:46.357636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.844 [2024-10-13 01:46:46.357667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.844 [2024-10-13 01:46:46.357682] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.844 [2024-10-13 01:46:46.357694] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4830000b90 00:36:00.844 [2024-10-13 01:46:46.357738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.844 qpair failed and we were unable to recover it. 00:36:00.844 [2024-10-13 01:46:46.367558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.844 [2024-10-13 01:46:46.367655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.844 [2024-10-13 01:46:46.367681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.844 [2024-10-13 01:46:46.367695] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.844 [2024-10-13 01:46:46.367707] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4830000b90 00:36:00.844 [2024-10-13 01:46:46.367743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:36:00.844 qpair failed and we were unable to recover it. 00:36:00.844 [2024-10-13 01:46:46.367889] nvme_ctrlr.c:4505:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:36:00.844 A controller has encountered a failure and is being reset. 00:36:00.844 [2024-10-13 01:46:46.377579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.845 [2024-10-13 01:46:46.377667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.845 [2024-10-13 01:46:46.377697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.845 [2024-10-13 01:46:46.377712] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.845 [2024-10-13 01:46:46.377726] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:36:00.845 [2024-10-13 01:46:46.377757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:00.845 qpair failed and we were unable to recover it. 
00:36:00.845 [2024-10-13 01:46:46.387595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.845 [2024-10-13 01:46:46.387730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.845 [2024-10-13 01:46:46.387756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.845 [2024-10-13 01:46:46.387770] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.845 [2024-10-13 01:46:46.387783] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f483c000b90 00:36:00.845 [2024-10-13 01:46:46.387812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:36:00.845 qpair failed and we were unable to recover it. 00:36:01.104 Controller properly reset. 00:36:01.104 Initializing NVMe Controllers 00:36:01.104 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:01.104 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:01.104 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:36:01.104 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:36:01.104 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:36:01.104 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:36:01.104 Initialization complete. Launching workers. 00:36:01.104 Starting thread on core 1 00:36:01.104 Starting thread on core 2 00:36:01.104 Starting thread on core 3 00:36:01.104 Starting thread on core 0 00:36:01.104 01:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:36:01.104 00:36:01.104 real 0m10.722s 00:36:01.104 user 0m19.073s 00:36:01.104 sys 0m5.104s 00:36:01.104 01:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:01.104 01:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:01.104 ************************************ 00:36:01.104 END TEST nvmf_target_disconnect_tc2 00:36:01.104 ************************************ 00:36:01.104 01:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:36:01.104 01:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:36:01.104 01:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:36:01.104 01:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:36:01.104 01:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:36:01.104 01:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:01.104 01:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:36:01.104 01:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:01.104 01:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:01.104 rmmod nvme_tcp 
00:36:01.104 rmmod nvme_fabrics 00:36:01.104 rmmod nvme_keyring 00:36:01.104 01:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:01.104 01:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:36:01.104 01:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:36:01.104 01:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@515 -- # '[' -n 1763772 ']' 00:36:01.104 01:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # killprocess 1763772 00:36:01.104 01:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1763772 ']' 00:36:01.104 01:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 1763772 00:36:01.104 01:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:36:01.104 01:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:01.104 01:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1763772 00:36:01.104 01:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:36:01.104 01:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:36:01.104 01:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1763772' 00:36:01.104 killing process with pid 1763772 00:36:01.104 01:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 1763772 00:36:01.104 01:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 1763772 00:36:01.362 01:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:36:01.362 01:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:36:01.362 01:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:36:01.362 01:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:36:01.362 01:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:36:01.362 01:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:36:01.362 01:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:36:01.362 01:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:01.362 01:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:01.362 01:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:01.362 01:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:01.362 01:46:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:03.267 01:46:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:03.267 00:36:03.267 real 0m15.544s 00:36:03.267 user 0m45.294s 00:36:03.267 sys 0m7.132s 00:36:03.267 01:46:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:36:03.267 01:46:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:03.267 ************************************ 00:36:03.267 END TEST nvmf_target_disconnect 00:36:03.267 ************************************ 00:36:03.267 01:46:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:36:03.267 00:36:03.267 real 6m42.667s 00:36:03.267 user 17m15.563s 00:36:03.267 sys 1m24.370s 00:36:03.267 01:46:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:03.267 01:46:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:03.526 ************************************ 00:36:03.526 END TEST nvmf_host 00:36:03.526 ************************************ 00:36:03.526 01:46:48 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:36:03.526 01:46:48 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:36:03.526 01:46:48 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:36:03.526 01:46:48 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:36:03.526 01:46:48 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:03.526 01:46:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:03.526 ************************************ 00:36:03.526 START TEST nvmf_target_core_interrupt_mode 00:36:03.526 ************************************ 00:36:03.526 01:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:36:03.526 * Looking for test storage... 
00:36:03.526 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:36:03.526 01:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:03.526 01:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version 00:36:03.526 01:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:03.526 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:03.526 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:03.526 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:03.526 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:03.526 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:36:03.526 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:36:03.526 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:36:03.526 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:36:03.526 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:36:03.526 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:36:03.526 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:36:03.526 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:03.526 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:36:03.526 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:36:03.526 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:03.526 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:03.526 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:36:03.526 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:36:03.526 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:03.526 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:36:03.526 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:36:03.526 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:36:03.526 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:36:03.526 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:03.526 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:36:03.526 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:36:03.526 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:03.526 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:03.526 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:36:03.526 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:03.526 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:03.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:03.526 --rc genhtml_branch_coverage=1 00:36:03.526 --rc genhtml_function_coverage=1 00:36:03.526 --rc genhtml_legend=1 00:36:03.526 --rc geninfo_all_blocks=1 00:36:03.526 --rc geninfo_unexecuted_blocks=1 00:36:03.526 00:36:03.526 ' 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:03.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:03.527 --rc genhtml_branch_coverage=1 00:36:03.527 --rc genhtml_function_coverage=1 00:36:03.527 --rc genhtml_legend=1 00:36:03.527 --rc geninfo_all_blocks=1 00:36:03.527 --rc geninfo_unexecuted_blocks=1 00:36:03.527 00:36:03.527 ' 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:03.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:03.527 --rc genhtml_branch_coverage=1 00:36:03.527 --rc genhtml_function_coverage=1 00:36:03.527 --rc genhtml_legend=1 00:36:03.527 --rc geninfo_all_blocks=1 00:36:03.527 --rc geninfo_unexecuted_blocks=1 00:36:03.527 00:36:03.527 ' 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:03.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:03.527 --rc genhtml_branch_coverage=1 00:36:03.527 --rc genhtml_function_coverage=1 00:36:03.527 --rc genhtml_legend=1 00:36:03.527 --rc geninfo_all_blocks=1 00:36:03.527 --rc geninfo_unexecuted_blocks=1 00:36:03.527 00:36:03.527 ' 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:03.527 ************************************ 00:36:03.527 START TEST nvmf_abort 00:36:03.527 ************************************ 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:36:03.527 * Looking for test storage... 00:36:03.527 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:36:03.527 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:03.786 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:03.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:03.787 --rc genhtml_branch_coverage=1 00:36:03.787 --rc genhtml_function_coverage=1 00:36:03.787 --rc genhtml_legend=1 00:36:03.787 --rc geninfo_all_blocks=1 00:36:03.787 --rc geninfo_unexecuted_blocks=1 00:36:03.787 00:36:03.787 ' 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:03.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:03.787 --rc genhtml_branch_coverage=1 00:36:03.787 --rc genhtml_function_coverage=1 00:36:03.787 --rc genhtml_legend=1 00:36:03.787 --rc geninfo_all_blocks=1 00:36:03.787 --rc geninfo_unexecuted_blocks=1 00:36:03.787 00:36:03.787 ' 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:03.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:03.787 --rc genhtml_branch_coverage=1 00:36:03.787 --rc genhtml_function_coverage=1 00:36:03.787 --rc genhtml_legend=1 00:36:03.787 --rc geninfo_all_blocks=1 00:36:03.787 --rc geninfo_unexecuted_blocks=1 00:36:03.787 00:36:03.787 ' 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:03.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:03.787 --rc genhtml_branch_coverage=1 00:36:03.787 --rc genhtml_function_coverage=1 00:36:03.787 --rc genhtml_legend=1 00:36:03.787 --rc geninfo_all_blocks=1 00:36:03.787 --rc geninfo_unexecuted_blocks=1 00:36:03.787 00:36:03.787 ' 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:03.787 01:46:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:36:03.787 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:36:03.788 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:03.788 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:03.788 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:03.788 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:36:03.788 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:36:03.788 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:36:03.788 01:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:05.691 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:05.691 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:36:05.691 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:05.691 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:05.691 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:05.691 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:05.691 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:05.691 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:36:05.691 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:05.691 01:46:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:36:05.691 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:36:05.691 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:36:05.691 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:36:05.691 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:36:05.691 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:36:05.691 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:05.691 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:05.691 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:05.691 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:05.691 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:05.691 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:05.691 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:05.691 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:05.691 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:05.691 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:05.691 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:05.691 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:05.691 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:05.691 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:05.691 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:05.691 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:05.691 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:05.691 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:05.691 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:05.692 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:05.692 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:05.692 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:05.692 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:05.692 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:05.951 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:05.951 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:05.951 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:05.951 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:05.951 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:05.951 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:05.951 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:05.951 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:05.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:05.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:36:05.951 00:36:05.951 --- 10.0.0.2 ping statistics --- 00:36:05.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:05.951 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:36:05.951 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:05.951 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:05.951 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:36:05.951 00:36:05.951 --- 10.0.0.1 ping statistics --- 00:36:05.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:05.951 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:36:05.951 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:05.951 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:36:05.951 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:36:05.951 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:05.951 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:36:05.951 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:36:05.951 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:05.951 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:36:05.951 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:36:05.951 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:36:05.951 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:36:05.951 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:05.951 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:05.951 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # 
nvmfpid=1766547 00:36:05.951 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:36:05.951 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 1766547 00:36:05.951 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 1766547 ']' 00:36:05.951 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:05.951 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:05.951 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:05.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:05.951 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:05.951 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:05.951 [2024-10-13 01:46:51.521634] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:05.951 [2024-10-13 01:46:51.522730] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:36:05.951 [2024-10-13 01:46:51.522800] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:06.209 [2024-10-13 01:46:51.588924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:06.209 [2024-10-13 01:46:51.635666] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:06.209 [2024-10-13 01:46:51.635719] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:06.209 [2024-10-13 01:46:51.635733] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:06.210 [2024-10-13 01:46:51.635744] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:06.210 [2024-10-13 01:46:51.635753] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:06.210 [2024-10-13 01:46:51.637368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:06.210 [2024-10-13 01:46:51.637434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:06.210 [2024-10-13 01:46:51.637437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:06.210 [2024-10-13 01:46:51.719740] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:06.210 [2024-10-13 01:46:51.719962] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:06.210 [2024-10-13 01:46:51.719969] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:36:06.210 [2024-10-13 01:46:51.720219] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:06.210 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:06.210 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:36:06.210 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:36:06.210 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:06.210 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:06.210 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:06.210 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:36:06.210 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.210 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:06.210 [2024-10-13 01:46:51.782201] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:06.468 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.468 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:36:06.468 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.468 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:06.468 Malloc0 00:36:06.468 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.468 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:06.468 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.468 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:06.468 Delay0 00:36:06.468 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.468 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:36:06.468 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.468 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:06.468 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.468 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:36:06.468 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 
00:36:06.468 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:06.468 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.468 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:06.468 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.468 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:06.468 [2024-10-13 01:46:51.858321] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:06.468 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.468 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:06.468 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:06.468 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:06.468 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:06.468 01:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:36:06.468 [2024-10-13 01:46:51.951496] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:36:09.006 Initializing NVMe Controllers 00:36:09.006 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:36:09.006 controller IO queue size 128 less than required 00:36:09.006 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:36:09.006 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:36:09.006 Initialization complete. Launching workers. 
00:36:09.006 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 27886 00:36:09.006 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 27943, failed to submit 66 00:36:09.006 success 27886, unsuccessful 57, failed 0 00:36:09.006 01:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:09.006 01:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:09.006 01:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:09.006 01:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.006 01:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:36:09.006 01:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:36:09.006 01:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:36:09.006 01:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:36:09.006 01:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:09.006 01:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:36:09.006 01:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:09.006 01:46:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:09.006 rmmod nvme_tcp 00:36:09.006 rmmod nvme_fabrics 00:36:09.006 rmmod nvme_keyring 00:36:09.006 01:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:09.006 01:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:36:09.006 01:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:36:09.006 01:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 1766547 ']' 00:36:09.006 01:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 1766547 00:36:09.006 01:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 1766547 ']' 00:36:09.006 01:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 1766547 00:36:09.006 01:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:36:09.006 01:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:09.006 01:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1766547 00:36:09.006 01:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:09.006 01:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:09.006 01:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1766547' 00:36:09.006 killing process with pid 1766547 
00:36:09.006 01:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@969 -- # kill 1766547 00:36:09.006 01:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@974 -- # wait 1766547 00:36:09.006 01:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:36:09.006 01:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:36:09.006 01:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:36:09.006 01:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:36:09.007 01:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:36:09.007 01:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:36:09.007 01:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:36:09.007 01:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:09.007 01:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:09.007 01:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:09.007 01:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:09.007 01:46:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:10.917 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:10.917 00:36:10.917 real 0m7.275s 00:36:10.917 user 0m8.933s 00:36:10.917 sys 0m2.883s 00:36:10.917 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:10.917 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:10.917 ************************************ 00:36:10.917 END TEST nvmf_abort 00:36:10.917 ************************************ 00:36:10.917 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:36:10.917 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:36:10.917 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:10.917 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:10.917 ************************************ 00:36:10.917 START TEST nvmf_ns_hotplug_stress 00:36:10.917 ************************************ 00:36:10.917 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:36:10.917 * Looking for test storage... 
00:36:10.917 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:10.917 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:10.917 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:36:10.917 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:11.176 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:11.176 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:11.176 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:11.176 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:11.176 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:36:11.176 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:36:11.176 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:36:11.176 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:36:11.176 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:36:11.176 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:36:11.176 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:36:11.176 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:11.176 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:36:11.176 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:36:11.176 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:11.176 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:11.176 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:36:11.176 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:36:11.176 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:11.176 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:36:11.176 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:36:11.176 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:36:11.176 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:36:11.176 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:11.176 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:36:11.176 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:36:11.176 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:11.176 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:11.176 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:36:11.176 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:11.176 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:11.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.176 --rc genhtml_branch_coverage=1 00:36:11.176 --rc genhtml_function_coverage=1 00:36:11.176 --rc genhtml_legend=1 00:36:11.176 --rc geninfo_all_blocks=1 00:36:11.176 --rc geninfo_unexecuted_blocks=1 00:36:11.176 00:36:11.176 ' 00:36:11.176 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:11.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.176 --rc genhtml_branch_coverage=1 00:36:11.176 --rc genhtml_function_coverage=1 00:36:11.176 --rc genhtml_legend=1 00:36:11.176 --rc geninfo_all_blocks=1 00:36:11.176 --rc geninfo_unexecuted_blocks=1 00:36:11.176 00:36:11.176 ' 00:36:11.176 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:11.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.176 --rc genhtml_branch_coverage=1 00:36:11.176 --rc genhtml_function_coverage=1 00:36:11.176 --rc genhtml_legend=1 00:36:11.176 --rc geninfo_all_blocks=1 00:36:11.176 --rc geninfo_unexecuted_blocks=1 00:36:11.176 00:36:11.176 ' 00:36:11.176 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:11.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.176 --rc genhtml_branch_coverage=1 00:36:11.176 --rc genhtml_function_coverage=1 
00:36:11.176 --rc genhtml_legend=1 00:36:11.176 --rc geninfo_all_blocks=1 00:36:11.176 --rc geninfo_unexecuted_blocks=1 00:36:11.176 00:36:11.176 ' 00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:36:11.177 01:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:13.096 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:13.096 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:36:13.096 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:13.096 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:13.096 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:13.096 01:46:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:13.096 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:13.096 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:36:13.096 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:13.096 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:36:13.096 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:36:13.096 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:36:13.096 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:36:13.096 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:36:13.096 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:36:13.096 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:13.096 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:13.096 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:13.096 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:13.096 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:13.096 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:13.096 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:13.096 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:13.096 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:13.096 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:13.096 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:13.096 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:13.096 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:13.096 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:13.096 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:13.096 01:46:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:13.096 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:13.096 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:13.096 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:13.096 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:13.096 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:13.096 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:13.096 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:13.096 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:13.096 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:13.096 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:13.096 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:13.096 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:13.096 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:13.096 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:13.096 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:13.096 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:13.096 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:13.096 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:13.096 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:13.096 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:13.096 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:13.096 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:13.097 
01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:13.097 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:13.097 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:13.097 01:46:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:13.097 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:13.097 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:36:13.097 00:36:13.097 --- 10.0.0.2 ping statistics --- 00:36:13.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:13.097 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:13.097 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:13.097 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:36:13.097 00:36:13.097 --- 10.0.0.1 ping statistics --- 00:36:13.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:13.097 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=1768876 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 1768876 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 1768876 ']' 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:13.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
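The trace above prepares the TCP test network before any NVMe-oF work starts: one port of the E810 pair is moved into a private network namespace, both sides get 10.0.0.x/24 addresses, TCP port 4420 is opened in iptables, and reachability is verified with a ping in each direction. A minimal sketch of that sequence (run as root), using the interface and namespace names from the log; only the NS shorthand variable is added here:

# sketch of the test-network preparation performed by nvmf_tcp_init above
NS=cvl_0_0_ns_spdk                                        # target-side namespace
ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                           # target port lives inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side (host namespace)
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1   # sanity check both directions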
00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:13.097 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:13.358 [2024-10-13 01:46:58.707069] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:13.358 [2024-10-13 01:46:58.708145] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:36:13.358 [2024-10-13 01:46:58.708217] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:13.358 [2024-10-13 01:46:58.772158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:13.358 [2024-10-13 01:46:58.818745] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:13.358 [2024-10-13 01:46:58.818798] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:13.358 [2024-10-13 01:46:58.818813] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:13.358 [2024-10-13 01:46:58.818824] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:13.358 [2024-10-13 01:46:58.818845] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:13.358 [2024-10-13 01:46:58.820406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:13.358 [2024-10-13 01:46:58.820479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:13.358 [2024-10-13 01:46:58.820480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:13.358 [2024-10-13 01:46:58.912787] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:13.358 [2024-10-13 01:46:58.912995] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:13.358 [2024-10-13 01:46:58.912996] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:13.358 [2024-10-13 01:46:58.913266] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
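The EAL and reactor notices above come from the target process itself, which the harness launches inside that namespace with interrupt mode enabled. A condensed sketch of the launch using the arguments visible in the trace; $SPDK is shorthand for the workspace checkout and is not part of the original command:

# sketch: start nvmf_tgt in the namespace, interrupt mode, core mask 0xE (cores 1-3)
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk \
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
nvmfpid=$!
# the harness then blocks until the app answers on /var/tmp/spdk.sock before issuing rpc.py calls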
00:36:13.618 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:13.618 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:36:13.618 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:36:13.618 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:13.618 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:13.618 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:13.618 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:36:13.618 01:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:13.878 [2024-10-13 01:46:59.221255] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:13.878 01:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:14.137 01:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:14.396 [2024-10-13 01:46:59.773483] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:14.396 01:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:14.654 01:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:36:14.912 Malloc0 00:36:14.912 01:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:15.171 Delay0 00:36:15.171 01:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:15.430 01:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:36:15.689 NULL1 00:36:15.689 01:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
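With the target up, the test payload is assembled over rpc.py: a TCP transport, one subsystem listening on 10.0.0.2:4420 plus a discovery listener, a delay bdev stacked on a malloc bdev, and a resizable null bdev, both exposed as namespaces of cnode1. A sketch of that RPC sequence with the same arguments as in the trace; the $rpc shorthand is the only addition:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 512 -b Malloc0             # 32 MB malloc bdev, 512-byte blocks
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # delay bdev layered on Malloc0
$rpc bdev_null_create NULL1 1000 512                  # 1000 MB null bdev, resized later
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1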
00:36:15.947 01:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1769177 00:36:15.947 01:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1769177 00:36:15.947 01:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:15.947 01:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:36:16.207 01:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:16.773 01:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:36:16.773 01:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:36:16.773 true 00:36:17.031 01:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1769177 00:36:17.031 01:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:17.289 01:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:17.548 01:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:36:17.548 01:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:36:17.806 true 00:36:17.806 01:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1769177 00:36:17.806 01:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:18.063 01:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:18.321 01:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:36:18.322 01:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:36:18.580 true 00:36:18.580 01:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@44 -- # kill -0 1769177 00:36:18.580 01:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:19.514 Read completed with error (sct=0, sc=11) 00:36:19.514 01:47:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:19.514 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:19.514 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:19.514 01:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:36:19.514 01:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:36:19.773 true 00:36:20.033 01:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1769177 00:36:20.033 01:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:20.294 01:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:20.553 01:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:36:20.553 01:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:36:20.811 true 00:36:20.811 01:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1769177 00:36:20.811 01:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:21.377 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:21.635 01:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:21.894 01:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:36:21.894 01:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:36:22.152 true 00:36:22.152 01:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1769177 00:36:22.152 01:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:22.410 01:47:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:22.668 01:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:36:22.668 01:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:36:22.926 true 00:36:22.926 01:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1769177 00:36:22.926 01:47:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:23.859 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:23.859 01:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:23.859 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:23.859 01:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:36:23.859 01:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:36:24.117 true 00:36:24.117 01:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1769177 00:36:24.117 01:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:24.376 01:47:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:24.942 01:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:36:24.942 01:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:36:24.942 true 00:36:24.942 01:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1769177 00:36:24.942 01:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:25.200 01:47:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
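The pattern that repeats above and below is the stress loop itself: for as long as the spdk_nvme_perf initiator (PERF_PID) stays alive, namespace 1 is hot-removed and re-added while the null bdev is grown by one unit per pass, which is what drives the suppressed "Read completed with error" bursts on the initiator side. The loop below is a compact reconstruction of that pattern from the trace, not a copy of ns_hotplug_stress.sh; $rpc and PERF_PID are the values shown earlier:

null_size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do                          # run until perf (-t 30) exits
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # hot-remove NSID 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # hot-add it back
    null_size=$((null_size + 1))
    $rpc bdev_null_resize NULL1 "$null_size"                       # grow NULL1 by 1 MB each pass
done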
00:36:25.766 01:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:36:25.766 01:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:36:25.766 true 00:36:25.766 01:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1769177 00:36:25.766 01:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:27.138 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:27.138 01:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:27.138 01:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:36:27.138 01:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:36:27.396 true 00:36:27.396 01:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1769177 00:36:27.396 01:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:27.653 01:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:27.911 01:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:36:27.911 01:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:36:28.168 true 00:36:28.168 01:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1769177 00:36:28.168 01:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:28.426 01:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:28.684 01:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:36:28.684 01:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:36:28.941 true 00:36:28.941 01:47:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1769177 00:36:28.941 01:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:30.313 01:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:30.313 01:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:36:30.313 01:47:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:36:30.570 true 00:36:30.570 01:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1769177 00:36:30.570 01:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:30.827 01:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:31.085 01:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:36:31.085 01:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:36:31.358 true 00:36:31.358 01:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1769177 00:36:31.358 01:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:31.659 01:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:31.936 01:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:36:31.936 01:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:36:32.194 true 00:36:32.194 01:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1769177 00:36:32.194 01:47:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:33.124 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:33.124 01:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:33.124 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:33.381 01:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:36:33.381 01:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:36:33.639 true 00:36:33.639 01:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1769177 00:36:33.639 01:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:33.897 01:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:34.154 01:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:36:34.154 01:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:36:34.412 true 00:36:34.412 01:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1769177 00:36:34.412 01:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:34.670 01:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:35.235 01:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:36:35.235 01:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:36:35.235 true 00:36:35.235 01:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1769177 00:36:35.235 01:47:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:36.168 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:36.168 01:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:36.426 01:47:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:36:36.426 01:47:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:36:36.683 true 00:36:36.941 01:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1769177 00:36:36.941 01:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:37.199 01:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:37.457 01:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:36:37.457 01:47:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:36:37.457 true 00:36:37.715 01:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1769177 00:36:37.715 01:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:37.972 01:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:38.230 01:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:36:38.230 01:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:36:38.488 true 00:36:38.488 01:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1769177 00:36:38.488 01:47:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:39.421 01:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:39.421 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:39.678 01:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:36:39.678 01:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:36:39.936 true 00:36:39.936 01:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1769177 00:36:39.936 01:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:40.193 01:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:40.451 01:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:36:40.451 01:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:36:40.709 true 00:36:40.709 01:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1769177 00:36:40.709 01:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:40.966 01:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:41.223 01:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:36:41.223 01:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:36:41.481 true 00:36:41.481 01:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1769177 00:36:41.481 01:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:42.413 01:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:42.671 01:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:36:42.671 01:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:36:42.928 true 00:36:43.186 01:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1769177 00:36:43.186 01:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:43.443 01:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:43.701 01:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1027 00:36:43.701 01:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:36:43.958 true 00:36:43.958 01:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1769177 00:36:43.958 01:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:44.216 01:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:44.474 01:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:36:44.474 01:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:36:44.731 true 00:36:44.731 01:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1769177 00:36:44.731 01:47:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:45.662 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:45.662 01:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:45.920 01:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:36:45.920 01:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:36:46.177 true 00:36:46.177 01:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1769177 00:36:46.177 01:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:46.434 Initializing NVMe Controllers 00:36:46.434 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:46.434 Controller IO queue size 128, less than required. 00:36:46.434 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:46.434 Controller IO queue size 128, less than required. 00:36:46.434 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:46.434 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:36:46.434 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:36:46.434 Initialization complete. Launching workers. 
00:36:46.434 ======================================================== 00:36:46.435 Latency(us) 00:36:46.435 Device Information : IOPS MiB/s Average min max 00:36:46.435 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 438.08 0.21 108242.48 3525.65 1013088.56 00:36:46.435 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 7853.77 3.83 16249.05 1539.37 455659.33 00:36:46.435 ======================================================== 00:36:46.435 Total : 8291.84 4.05 21109.27 1539.37 1013088.56 00:36:46.435 00:36:46.435 01:47:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:46.692 01:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:36:46.692 01:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:36:46.950 true 00:36:46.950 01:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1769177 00:36:46.950 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1769177) - No such process 00:36:46.950 01:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1769177 00:36:46.950 01:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:47.207 01:47:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:47.465 01:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:36:47.465 01:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:36:47.465 01:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:36:47.465 01:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:47.465 01:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:36:47.723 null0 00:36:47.723 01:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:47.723 01:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:47.723 01:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:36:48.289 null1 00:36:48.289 01:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:48.289 
01:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:48.289 01:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:36:48.289 null2 00:36:48.547 01:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:48.547 01:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:48.547 01:47:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:36:48.805 null3 00:36:48.805 01:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:48.805 01:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:48.805 01:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:36:49.064 null4 00:36:49.064 01:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:49.064 01:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:49.064 01:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:36:49.322 null5 00:36:49.322 01:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:49.322 01:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:49.322 01:47:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:36:49.580 null6 00:36:49.580 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:49.580 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:49.580 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:36:49.837 null7 00:36:49.837 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:49.838 01:47:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
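By this point the single-namespace phase is finished and the script fans out (lines 58-66 as traced): it creates eight null bdevs (null0 through null7, each created via bdev_null_create with the size/block-size arguments 100 4096 shown in the log), then launches one add_remove worker per bdev in the background and records each PID so it can wait on all of them. A sketch of that fan-out, reconstructed from the trace; the exact loop syntax and the $rpc_py shorthand are assumptions:

    # Hedged sketch of the worker fan-out traced at ns_hotplug_stress.sh@58-66.
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; ++i)); do
        "$rpc_py" bdev_null_create "null$i" 100 4096   # line 60: null0 .. null7, as logged
    done
    for ((i = 0; i < nthreads; ++i)); do
        add_remove $((i + 1)) "null$i" &               # line 63: worker for NSID i+1 on bdev null<i>
        pids+=($!)                                     # line 64: remember the worker's PID
    done
    wait "${pids[@]}"                                  # line 66: the "wait 1773192 1773193 ..." seen further down

The eight PIDs waited on below (1773192 through 1773205) are exactly these background workers.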
00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1773192 1773193 1773195 1773197 1773199 1773201 1773203 1773205 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:49.838 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:50.095 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:50.095 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:50.095 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:50.095 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:50.095 01:47:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:50.095 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:50.095 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:50.095 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:50.660 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.660 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.660 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:50.660 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.660 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.660 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:50.660 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.660 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.660 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:50.660 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.660 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.660 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:50.660 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.660 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.660 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.660 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:50.660 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.660 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:50.660 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.660 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.660 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:50.660 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:50.660 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:50.660 01:47:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:50.918 01:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:50.918 01:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:50.918 01:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:50.918 01:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:50.918 01:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:50.918 01:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:50.918 01:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:50.918 01:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 
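Each of those workers runs the add_remove function traced throughout this output (ns_hotplug_stress.sh lines 14-18): ten rounds of attaching its null bdev as a fixed namespace ID and immediately detaching it again, with all eight workers hitting nqn.2016-06.io.spdk:cnode1 concurrently. The function body is visible almost verbatim in the xtrace; only the $rpc_py shorthand below stands in for the full rpc.py path, so treat this as a reconstruction rather than the script itself:

    # Per-worker hotplug loop as traced at ns_hotplug_stress.sh@14-18.
    add_remove() {
        local nsid=$1 bdev=$2                                                               # line 14: e.g. nsid=1 bdev=null0
        for ((i = 0; i < 10; ++i)); do                                                      # line 16: ten add/remove rounds
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # line 17: attach the namespace
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # line 18: detach it again
        done
    }

The interleaved add/remove rounds that fill the rest of this log are these eight loops racing one another; the namespace IDs appear and disappear in whatever order the workers happen to be scheduled.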
00:36:51.176 01:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.176 01:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.176 01:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:51.176 01:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.176 01:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.176 01:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:51.176 01:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.176 01:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.176 01:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:51.176 01:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.176 01:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.176 01:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:51.176 01:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.176 01:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.176 01:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:51.176 01:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.176 01:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.176 01:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:51.176 01:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.176 01:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.176 01:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:51.176 01:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.176 01:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.176 01:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:51.435 01:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:51.435 01:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:51.435 01:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:51.435 01:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:51.435 01:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:51.435 01:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:51.435 01:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:51.435 01:47:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:51.693 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.693 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.693 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:51.693 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.693 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.693 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:51.693 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.693 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.693 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:51.693 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.693 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.693 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:51.693 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.693 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.693 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:51.693 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.693 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.693 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:51.693 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.693 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.693 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:51.693 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:51.693 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:51.693 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:51.951 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:51.951 01:47:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:51.951 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:51.951 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:51.951 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:51.951 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:51.951 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:51.951 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:52.210 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.210 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.210 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:52.210 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.210 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.210 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:52.210 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.210 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.210 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.210 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.210 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:52.210 01:47:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:52.210 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.210 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.210 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:52.210 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.210 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.210 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:52.210 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.210 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.210 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:52.210 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.210 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.210 01:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:52.775 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:52.775 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:52.776 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:52.776 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:52.776 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:52.776 
01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:52.776 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:52.776 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:52.776 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:52.776 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:52.776 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:53.033 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:53.033 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:53.033 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:53.033 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:53.034 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:53.034 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:53.034 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:53.034 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:53.034 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:53.034 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:53.034 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:53.034 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:53.034 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:53.034 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:53.034 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:53.034 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:53.034 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:53.034 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:53.034 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:53.034 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:53.034 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:53.292 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:53.292 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:53.292 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:53.292 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:53.292 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:53.292 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:53.292 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:53.292 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:53.550 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:53.550 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:36:53.550 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:53.550 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:53.550 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:53.550 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:53.550 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:53.550 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:53.550 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:53.550 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:53.550 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:53.550 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:53.550 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:53.550 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:53.550 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:53.550 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:53.550 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:53.550 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:53.550 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:53.550 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:53.550 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:53.550 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:53.550 
01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:53.550 01:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:53.809 01:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:53.809 01:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:53.809 01:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:53.809 01:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:53.809 01:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:53.809 01:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:53.809 01:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:53.809 01:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:54.067 01:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.067 01:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.067 01:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:54.067 01:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.067 01:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.067 01:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:54.067 01:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.067 01:47:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.067 01:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:54.067 01:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.067 01:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.067 01:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:54.067 01:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.067 01:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.067 01:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:54.067 01:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.067 01:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.067 01:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:54.067 01:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.067 01:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.067 01:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:54.067 01:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.067 01:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.067 01:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:54.325 01:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:54.325 01:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:54.325 01:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:54.325 01:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:54.326 01:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:54.326 01:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:54.326 01:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:54.326 01:47:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:54.583 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.583 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.584 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:54.584 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.584 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.584 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:54.841 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.841 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.841 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:54.841 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.841 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.841 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:54.841 01:47:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.841 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.841 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:54.841 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.841 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.842 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:54.842 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.842 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.842 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:54.842 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:54.842 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:54.842 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:55.099 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:55.099 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:55.099 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:55.099 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:55.099 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:55.099 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:55.099 
01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:55.099 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:55.358 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.358 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.358 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:55.358 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.358 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.358 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:55.358 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.358 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.358 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:55.358 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.358 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.358 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:55.358 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.358 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.358 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:55.358 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.358 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.358 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:55.358 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.358 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.358 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:55.358 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.358 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.358 01:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:55.616 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:55.616 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:55.616 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:55.616 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:55.616 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:55.616 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:55.616 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:55.616 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:55.874 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.874 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.874 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.874 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.874 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.874 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.874 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.874 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.874 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.874 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.874 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.874 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.874 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.874 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.874 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:55.874 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:55.874 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:36:55.874 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:36:55.874 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:36:55.874 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:36:55.874 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:55.874 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:36:55.874 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:55.874 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:55.874 rmmod nvme_tcp 00:36:55.874 rmmod nvme_fabrics 00:36:55.874 rmmod nvme_keyring 00:36:55.874 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:55.874 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:36:55.874 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:36:55.874 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 1768876 ']' 00:36:55.874 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 1768876 00:36:55.874 01:47:41 
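The xtrace lines above all come from the same small loop in ns_hotplug_stress.sh (script lines 16-18 per the markers): ten passes that hot-add namespaces 1..8 to nqn.2016-06.io.spdk:cnode1, each backed by a null bdev, and then hot-remove them again through scripts/rpc.py. A minimal sketch of that loop follows; the parallel "&"/wait structure is an assumption made to explain the shuffled ordering in the trace, not a copy of the real script.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  for (( i = 0; i < 10; i++ )); do
      # Hot-add namespaces 1..8, backed by null bdevs null0..null7 (assumed to run in parallel).
      for n in {1..8}; do
          $rpc nvmf_subsystem_add_ns -n "$n" nqn.2016-06.io.spdk:cnode1 "null$((n - 1))" &
      done
      wait
      # Hot-remove the same namespaces again.
      for n in {1..8}; do
          $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$n" &
      done
      wait
  done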
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 1768876 ']' 00:36:55.874 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 1768876 00:36:55.875 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:36:55.875 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:55.875 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1768876 00:36:56.133 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:56.133 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:56.133 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1768876' 00:36:56.133 killing process with pid 1768876 00:36:56.133 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 1768876 00:36:56.133 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 1768876 00:36:56.133 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:36:56.133 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:36:56.133 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:36:56.133 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:36:56.133 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:36:56.133 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:36:56.133 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:36:56.133 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:56.133 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:56.133 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:56.133 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:56.133 01:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:58.689 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:58.689 00:36:58.689 real 0m47.337s 00:36:58.689 user 3m19.029s 00:36:58.689 sys 0m21.768s 00:36:58.689 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:58.689 01:47:43 
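For reference, the teardown traced above (nvmftestfini in nvmf/common.sh) reduces to roughly the following steps for this TCP run; the namespace removal itself is hidden behind xtrace_disable, so that line is an assumption rather than a traced command.
  kill 1768876                                            # stop the nvmf_tgt app (pid from this run)
  modprobe -v -r nvme-tcp                                 # also unloads nvme_fabrics / nvme_keyring
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore    # strip the test's tagged ACCEPT rules
  ip netns delete cvl_0_0_ns_spdk                         # assumed body of _remove_spdk_ns (not traced)
  ip -4 addr flush cvl_0_1                                # clear the initiator-side address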
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:58.689 ************************************ 00:36:58.689 END TEST nvmf_ns_hotplug_stress 00:36:58.689 ************************************ 00:36:58.689 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:36:58.689 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:36:58.689 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:36:58.689 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:58.689 ************************************ 00:36:58.689 START TEST nvmf_delete_subsystem 00:36:58.689 ************************************ 00:36:58.689 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:36:58.689 * Looking for test storage... 00:36:58.689 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:58.689 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:58.689 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:36:58.689 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:58.689 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:58.689 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:58.689 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:58.689 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:58.689 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:36:58.689 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:36:58.689 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:36:58.689 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:36:58.689 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:36:58.689 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:36:58.689 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:36:58.689 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:58.689 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:36:58.689 01:47:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:36:58.689 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:58.689 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:58.689 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:36:58.689 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:36:58.689 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:58.689 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:36:58.689 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:36:58.689 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:36:58.689 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:36:58.689 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:58.689 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:36:58.689 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:36:58.689 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:58.689 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:58.689 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:36:58.689 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:58.689 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:58.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:58.689 --rc genhtml_branch_coverage=1 00:36:58.689 --rc genhtml_function_coverage=1 00:36:58.689 --rc genhtml_legend=1 00:36:58.689 --rc geninfo_all_blocks=1 00:36:58.689 --rc geninfo_unexecuted_blocks=1 00:36:58.689 00:36:58.689 ' 00:36:58.689 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:58.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:58.689 --rc genhtml_branch_coverage=1 00:36:58.689 --rc genhtml_function_coverage=1 00:36:58.689 --rc genhtml_legend=1 00:36:58.689 --rc geninfo_all_blocks=1 00:36:58.689 --rc geninfo_unexecuted_blocks=1 00:36:58.689 00:36:58.689 ' 00:36:58.689 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:58.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:58.689 --rc genhtml_branch_coverage=1 00:36:58.689 --rc genhtml_function_coverage=1 00:36:58.689 --rc genhtml_legend=1 00:36:58.689 --rc geninfo_all_blocks=1 00:36:58.689 --rc 
geninfo_unexecuted_blocks=1 00:36:58.689 00:36:58.689 ' 00:36:58.689 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:58.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:58.689 --rc genhtml_branch_coverage=1 00:36:58.689 --rc genhtml_function_coverage=1 00:36:58.689 --rc genhtml_legend=1 00:36:58.689 --rc geninfo_all_blocks=1 00:36:58.689 --rc geninfo_unexecuted_blocks=1 00:36:58.689 00:36:58.689 ' 00:36:58.690 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:58.690 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:36:58.690 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:58.690 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:58.690 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:58.690 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:58.690 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:58.690 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:58.690 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:58.690 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:58.690 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:58.690 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:58.690 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:58.690 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:58.690 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:58.690 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:58.690 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:58.690 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:58.690 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:58.690 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:36:58.690 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:36:58.690 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:58.690 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:58.690 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:58.690 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:58.690 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:58.690 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:36:58.690 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:58.690 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:36:58.690 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:58.690 01:47:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:58.690 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:58.690 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:58.690 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:58.690 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:58.690 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:58.690 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:58.690 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:58.690 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:58.690 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:36:58.690 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:36:58.690 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:58.690 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:36:58.690 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:36:58.690 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:36:58.690 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:58.690 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:58.690 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:58.690 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:36:58.690 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:36:58.690 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:36:58.690 01:47:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:00.611 01:47:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:00.611 01:47:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:00.611 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:00.611 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:00.611 01:47:45 
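The device probing here walks each candidate NIC's PCI address and resolves its kernel net device through sysfs (the pci_net_devs glob in the trace). A standalone equivalent of that lookup for the two ice ports found on this host:
  for pci in 0000:0a:00.0 0000:0a:00.1; do
      for dev in /sys/bus/pci/devices/$pci/net/*; do
          [[ -e $dev ]] && echo "Found net device under $pci: ${dev##*/}"
      done
  done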
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:00.611 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:00.612 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:00.612 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:00.612 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:00.612 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:00.612 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:00.612 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:00.612 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:00.612 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:00.612 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:00.612 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:00.612 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:00.612 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:00.612 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:00.612 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:37:00.612 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:37:00.612 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:37:00.612 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:37:00.612 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:37:00.612 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:00.612 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:00.612 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:00.612 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:00.612 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:00.612 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:37:00.612 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:00.612 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:00.612 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:00.612 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:00.612 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:00.612 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:00.612 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:00.612 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:00.612 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:00.612 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:00.612 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:00.612 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:00.612 01:47:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:00.612 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:00.612 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:00.612 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:00.612 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:00.612 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:00.612 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:37:00.612 00:37:00.612 --- 10.0.0.2 ping statistics --- 00:37:00.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:00.612 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:37:00.612 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:00.612 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:00.612 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:37:00.612 00:37:00.612 --- 10.0.0.1 ping statistics --- 00:37:00.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:00.612 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:37:00.612 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:00.612 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:37:00.612 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:37:00.612 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:00.612 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:37:00.612 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:37:00.612 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:00.612 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:37:00.612 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:37:00.612 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:37:00.612 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:00.612 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:00.612 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:00.612 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=1776064 00:37:00.612 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:37:00.612 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 1776064 00:37:00.612 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 1776064 ']' 00:37:00.612 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:00.612 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:00.612 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:00.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
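Pulled out of the trace, the test topology set up here is: the target NIC (cvl_0_0) is moved into a private network namespace (cvl_0_0_ns_spdk) and addressed 10.0.0.2/24, the initiator NIC (cvl_0_1) stays in the host namespace as 10.0.0.1/24, an ACCEPT rule is opened for the NVMe/TCP port, and both directions are ping-checked before the target app starts. In plain ip/iptables commands:
  ip netns add cvl_0_0_ns_spdk                         # target lives in its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, host namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # the framework tags this rule with an SPDK_NVMF comment so cleanup can remove it
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator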
00:37:00.612 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:00.612 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:00.612 [2024-10-13 01:47:46.121629] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:00.612 [2024-10-13 01:47:46.122694] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:37:00.612 [2024-10-13 01:47:46.122745] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:00.612 [2024-10-13 01:47:46.187642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:00.871 [2024-10-13 01:47:46.235171] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:00.871 [2024-10-13 01:47:46.235243] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:00.871 [2024-10-13 01:47:46.235270] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:00.871 [2024-10-13 01:47:46.235283] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:00.871 [2024-10-13 01:47:46.235294] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:00.871 [2024-10-13 01:47:46.236690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:00.871 [2024-10-13 01:47:46.236697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:00.871 [2024-10-13 01:47:46.325016] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:00.871 [2024-10-13 01:47:46.325063] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:00.871 [2024-10-13 01:47:46.325324] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
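nvmf_tgt is started inside that namespace with --interrupt-mode and -m 0x3 (two reactors, cores 0 and 1), and the test then blocks in waitforlisten until the RPC socket answers. A minimal readiness poll in the same spirit, assuming the default /var/tmp/spdk.sock socket; waitforlisten's exact logic lives in autotest_common.sh and may differ in detail:
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  until $rpc -t 1 -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
      sleep 0.1
  done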
00:37:00.871 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:00.871 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:37:00.871 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:00.871 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:00.871 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:00.871 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:00.871 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:00.871 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.871 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:00.871 [2024-10-13 01:47:46.377410] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:00.871 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.871 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:00.871 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.871 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:00.871 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.871 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:00.871 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.871 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:00.871 [2024-10-13 01:47:46.397666] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:00.871 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.871 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:37:00.871 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.871 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:00.871 NULL1 00:37:00.871 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.871 01:47:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:00.871 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.871 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:00.871 Delay0 00:37:00.871 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.871 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:00.871 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:00.871 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:00.871 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:00.871 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1776087 00:37:00.871 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:37:00.871 01:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:37:01.130 [2024-10-13 01:47:46.469659] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
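Everything between nvmfappstart and the perf launch above is a short rpc.py sequence; condensed into plain commands (RPC below is just shorthand for the rpc.py in this tree, talking to the default /var/tmp/spdk.sock socket), it is roughly:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10   # allow any host, max 10 namespaces
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512                 # 1000 MiB null bdev, 512 B blocks
$RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s of injected latency per I/O
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# 5-second 70/30 randrw load from the initiator, queue depth 128, 512 B I/Os
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!

The slow Delay0 bdev is what guarantees there is still plenty of I/O in flight when the subsystem is deleted two seconds later.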
00:37:03.032 01:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:03.032 01:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:03.032 01:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 starting I/O failed: -6 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Write completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 starting I/O failed: -6 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 starting I/O failed: -6 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 starting I/O failed: -6 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Write completed with error (sct=0, sc=8) 00:37:03.290 Write completed with error (sct=0, sc=8) 00:37:03.290 starting I/O failed: -6 00:37:03.290 Write completed with error (sct=0, sc=8) 00:37:03.290 Write completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 starting I/O failed: -6 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Write completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 starting I/O failed: -6 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 starting I/O failed: -6 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 starting I/O failed: -6 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Write completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 starting I/O failed: -6 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Write completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 starting I/O failed: -6 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 [2024-10-13 01:47:48.630137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4c4b0 is same with the state(6) to be set 00:37:03.290 Write completed with error (sct=0, sc=8) 00:37:03.290 Write completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Read completed 
with error (sct=0, sc=8) 00:37:03.290 starting I/O failed: -6 00:37:03.290 Write completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 starting I/O failed: -6 00:37:03.290 Write completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 starting I/O failed: -6 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Write completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 starting I/O failed: -6 00:37:03.290 Write completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 starting I/O failed: -6 00:37:03.290 Write completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 starting I/O failed: -6 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 starting I/O failed: -6 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Write completed with error (sct=0, sc=8) 00:37:03.290 Write completed with error (sct=0, sc=8) 00:37:03.290 Write completed with error (sct=0, sc=8) 00:37:03.290 starting I/O failed: -6 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 starting I/O failed: -6 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 [2024-10-13 01:47:48.630907] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2900000c00 is same with the state(6) to be set 00:37:03.290 Write completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Write completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Write completed with error (sct=0, sc=8) 00:37:03.290 Write completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Write completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Write completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Write completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Write completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Write completed with error (sct=0, sc=8) 00:37:03.290 Write completed with error (sct=0, sc=8) 00:37:03.290 Write completed with error (sct=0, sc=8) 00:37:03.290 Write completed with 
error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Read completed with error (sct=0, sc=8) 00:37:03.290 Write completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Write completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Write completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Write completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Write completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Write completed with error (sct=0, sc=8) 00:37:03.291 Write completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Write completed with error (sct=0, sc=8) 00:37:03.291 Write completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Write completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Write completed with error (sct=0, sc=8) 00:37:03.291 Write completed with error (sct=0, sc=8) 00:37:03.291 Write completed with error (sct=0, sc=8) 00:37:03.291 Write completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Write completed with error (sct=0, sc=8) 00:37:03.291 Read 
completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Read completed with error (sct=0, sc=8) 00:37:03.291 Write completed with error (sct=0, sc=8) 00:37:03.291 Write completed with error (sct=0, sc=8) 00:37:03.291 [2024-10-13 01:47:48.631478] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4cbd0 is same with the state(6) to be set 00:37:04.224 [2024-10-13 01:47:49.609592] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5a670 is same with the state(6) to be set 00:37:04.224 Write completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Write completed with error (sct=0, sc=8) 00:37:04.224 Write completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Write completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Write completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Write completed with error (sct=0, sc=8) 00:37:04.224 Write completed with error (sct=0, sc=8) 00:37:04.224 [2024-10-13 01:47:49.631521] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f290000d640 is same with the state(6) to be set 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Write completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Write completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Write completed with error (sct=0, sc=8) 00:37:04.224 Write completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Write completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 [2024-10-13 01:47:49.635388] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4c8a0 is same with the state(6) to be set 00:37:04.224 
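The flood of "completed with error (sct=0, sc=8)" entries is the intended outcome here: status code type 0 with status code 0x08 is the generic NVMe status "Command Aborted due to SQ Deletion", which is what outstanding commands get when nvmf_delete_subsystem tears down the qpairs while spdk_nvme_perf still has up to 128 commands queued per queue. A quick way to tally them from a saved copy of the perf output (perf.log is a hypothetical file name, not one produced by this run):

grep -c 'completed with error (sct=0, sc=8)' perf.log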
Read completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Write completed with error (sct=0, sc=8) 00:37:04.224 Write completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Write completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Write completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 [2024-10-13 01:47:49.635583] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4cf00 is same with the state(6) to be set 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Write completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Write completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Write completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 Read completed with error (sct=0, sc=8) 00:37:04.224 [2024-10-13 01:47:49.635703] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f290000cfe0 is same with the state(6) to be set 00:37:04.224 Initializing NVMe Controllers 00:37:04.224 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:04.224 Controller IO queue size 128, less than required. 00:37:04.224 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:04.224 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:37:04.224 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:37:04.224 Initialization complete. Launching workers. 
00:37:04.224 ======================================================== 00:37:04.224 Latency(us) 00:37:04.224 Device Information : IOPS MiB/s Average min max 00:37:04.224 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 166.84 0.08 901705.29 1335.84 1012429.91 00:37:04.225 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 149.47 0.07 973095.28 558.35 2001473.08 00:37:04.225 ======================================================== 00:37:04.225 Total : 316.31 0.15 935439.02 558.35 2001473.08 00:37:04.225 00:37:04.225 [2024-10-13 01:47:49.636931] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b5a670 (9): Bad file descriptor 00:37:04.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:37:04.225 01:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:04.225 01:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:37:04.225 01:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1776087 00:37:04.225 01:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:37:04.790 01:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:37:04.790 01:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1776087 00:37:04.790 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1776087) - No such process 00:37:04.790 01:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1776087 00:37:04.790 01:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:37:04.790 01:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1776087 00:37:04.790 01:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:37:04.790 01:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:04.790 01:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:37:04.790 01:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:04.790 01:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1776087 00:37:04.790 01:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:37:04.790 01:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:04.790 01:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:04.790 01:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:04.790 01:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:04.790 01:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:04.790 01:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:04.790 01:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:04.790 01:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:04.790 01:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:04.790 01:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:04.790 [2024-10-13 01:47:50.157638] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:04.790 01:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:04.790 01:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:04.790 01:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:04.790 01:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:04.790 01:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:04.790 01:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1776492 00:37:04.790 01:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:37:04.790 01:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:37:04.790 01:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1776492 00:37:04.790 01:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:04.790 [2024-10-13 01:47:50.209686] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
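The second half of the test recreates the subsystem, listener and namespace, starts a 3-second perf job, and then simply polls until that job exits; the repeated kill -0 / sleep 0.5 lines that follow are that loop. Stripped of the rpc_cmd and xtrace wrappers it is essentially:

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!

delay=0
while kill -0 "$perf_pid" 2>/dev/null; do      # kill -0 only probes whether the process still exists
    sleep 0.5
    (( delay++ > 20 )) && exit 1               # give up if perf lingers past ~10 s
done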
00:37:05.357 01:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:05.357 01:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1776492 00:37:05.357 01:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:05.615 01:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:05.615 01:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1776492 00:37:05.615 01:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:06.180 01:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:06.180 01:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1776492 00:37:06.180 01:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:06.745 01:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:06.745 01:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1776492 00:37:06.745 01:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:07.310 01:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:07.310 01:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1776492 00:37:07.310 01:47:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:07.875 01:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:07.875 01:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1776492 00:37:07.875 01:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:08.135 Initializing NVMe Controllers 00:37:08.135 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:08.135 Controller IO queue size 128, less than required. 00:37:08.135 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:08.135 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:37:08.135 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:37:08.135 Initialization complete. Launching workers. 
00:37:08.135 ======================================================== 00:37:08.135 Latency(us) 00:37:08.135 Device Information : IOPS MiB/s Average min max 00:37:08.135 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003565.41 1000199.04 1011708.16 00:37:08.135 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005336.28 1000190.63 1041688.84 00:37:08.135 ======================================================== 00:37:08.135 Total : 256.00 0.12 1004450.84 1000190.63 1041688.84 00:37:08.135 00:37:08.135 01:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:08.135 01:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1776492 00:37:08.135 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1776492) - No such process 00:37:08.135 01:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1776492 00:37:08.135 01:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:37:08.135 01:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:37:08.135 01:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:37:08.135 01:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:37:08.135 01:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:08.135 01:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:37:08.135 01:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:08.135 01:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:08.135 rmmod nvme_tcp 00:37:08.135 rmmod nvme_fabrics 00:37:08.394 rmmod nvme_keyring 00:37:08.394 01:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:08.394 01:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:37:08.394 01:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:37:08.394 01:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 1776064 ']' 00:37:08.394 01:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 1776064 00:37:08.394 01:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 1776064 ']' 00:37:08.394 01:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 1776064 00:37:08.394 01:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:37:08.394 01:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:08.394 01:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1776064 00:37:08.394 01:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:08.394 01:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:08.394 01:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1776064' 00:37:08.394 killing process with pid 1776064 00:37:08.394 01:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 1776064 00:37:08.394 01:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 1776064 00:37:08.394 01:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:37:08.394 01:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:37:08.394 01:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:37:08.394 01:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:37:08.653 01:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:37:08.653 01:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:37:08.653 01:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:37:08.653 01:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:08.653 01:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:08.653 01:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:08.653 01:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:08.653 01:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:10.557 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:10.557 00:37:10.557 real 0m12.254s 00:37:10.557 user 0m24.533s 00:37:10.557 sys 0m3.923s 00:37:10.557 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:10.557 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:10.557 ************************************ 00:37:10.557 END TEST nvmf_delete_subsystem 00:37:10.557 ************************************ 00:37:10.557 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:37:10.557 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:10.557 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:37:10.557 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:10.557 ************************************ 00:37:10.557 START TEST nvmf_host_management 00:37:10.557 ************************************ 00:37:10.557 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:37:10.557 * Looking for test storage... 00:37:10.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:10.557 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:10.557 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:37:10.557 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:10.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:10.816 --rc genhtml_branch_coverage=1 00:37:10.816 --rc genhtml_function_coverage=1 00:37:10.816 --rc genhtml_legend=1 00:37:10.816 --rc geninfo_all_blocks=1 00:37:10.816 --rc geninfo_unexecuted_blocks=1 00:37:10.816 00:37:10.816 ' 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:10.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:10.816 --rc genhtml_branch_coverage=1 00:37:10.816 --rc genhtml_function_coverage=1 00:37:10.816 --rc genhtml_legend=1 00:37:10.816 --rc geninfo_all_blocks=1 00:37:10.816 --rc geninfo_unexecuted_blocks=1 00:37:10.816 00:37:10.816 ' 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:10.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:10.816 --rc genhtml_branch_coverage=1 00:37:10.816 --rc genhtml_function_coverage=1 00:37:10.816 --rc genhtml_legend=1 00:37:10.816 --rc geninfo_all_blocks=1 00:37:10.816 --rc geninfo_unexecuted_blocks=1 00:37:10.816 00:37:10.816 ' 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:10.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:10.816 --rc genhtml_branch_coverage=1 00:37:10.816 --rc genhtml_function_coverage=1 00:37:10.816 --rc genhtml_legend=1 
00:37:10.816 --rc geninfo_all_blocks=1 00:37:10.816 --rc geninfo_unexecuted_blocks=1 00:37:10.816 00:37:10.816 ' 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:10.816 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:10.817 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.817 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.817 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.817 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:37:10.817 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.817 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:37:10.817 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:10.817 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:10.817 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:10.817 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:10.817 01:47:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:10.817 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:10.817 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:10.817 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:10.817 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:10.817 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:10.817 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:10.817 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:10.817 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:37:10.817 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:37:10.817 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:10.817 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:37:10.817 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:37:10.817 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:37:10.817 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:10.817 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:10.817 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:10.817 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:37:10.817 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:37:10.817 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:37:10.817 01:47:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:12.717 01:47:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:12.717 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:12.717 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 
00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:12.717 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:12.717 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:12.717 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:12.718 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:12.718 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:12.718 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:12.718 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:12.718 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:12.718 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:12.718 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:12.718 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:12.718 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:12.718 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:12.718 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:12.718 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:12.718 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:12.718 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:37:12.718 00:37:12.718 --- 10.0.0.2 ping statistics --- 00:37:12.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:12.718 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:37:12.718 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:12.718 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:12.718 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:37:12.718 00:37:12.718 --- 10.0.0.1 ping statistics --- 00:37:12.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:12.718 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:37:12.718 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:12.718 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:37:12.718 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:37:12.718 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:12.718 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:37:12.718 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:37:12.718 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:12.718 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:37:12.718 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:37:12.976 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:37:12.976 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:37:12.976 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:37:12.976 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:12.976 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:12.976 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:12.976 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=1778828 00:37:12.976 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:37:12.976 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 1778828 00:37:12.976 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1778828 ']' 00:37:12.976 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:12.976 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:12.976 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:37:12.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:12.976 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:12.976 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:12.976 [2024-10-13 01:47:58.347510] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:12.976 [2024-10-13 01:47:58.348602] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:37:12.976 [2024-10-13 01:47:58.348657] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:12.976 [2024-10-13 01:47:58.411240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:12.976 [2024-10-13 01:47:58.460158] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:12.976 [2024-10-13 01:47:58.460211] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:12.976 [2024-10-13 01:47:58.460233] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:12.976 [2024-10-13 01:47:58.460244] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:12.976 [2024-10-13 01:47:58.460253] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:12.976 [2024-10-13 01:47:58.461940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:12.976 [2024-10-13 01:47:58.462069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:12.976 [2024-10-13 01:47:58.462320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:12.976 [2024-10-13 01:47:58.462323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:12.976 [2024-10-13 01:47:58.547778] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:12.976 [2024-10-13 01:47:58.548018] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:12.976 [2024-10-13 01:47:58.548300] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:12.976 [2024-10-13 01:47:58.548878] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:12.976 [2024-10-13 01:47:58.549120] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:37:13.235 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:13.235 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:37:13.235 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:13.235 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:13.235 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:13.235 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:13.235 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:13.235 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:13.235 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:13.235 [2024-10-13 01:47:58.603026] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:13.235 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:13.235 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:37:13.235 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:13.235 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:13.235 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:37:13.235 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:37:13.235 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:37:13.235 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:13.235 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:13.235 Malloc0 00:37:13.235 [2024-10-13 01:47:58.675157] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:13.235 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:13.235 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:37:13.235 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:13.235 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:13.235 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1778991 00:37:13.235 01:47:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1778991 /var/tmp/bdevperf.sock 00:37:13.235 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1778991 ']' 00:37:13.235 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:13.235 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:37:13.235 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:37:13.235 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:13.235 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:37:13.235 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:13.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:13.235 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:37:13.235 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:13.235 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:37:13.235 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:13.235 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:37:13.235 { 00:37:13.235 "params": { 00:37:13.235 "name": "Nvme$subsystem", 00:37:13.235 "trtype": "$TEST_TRANSPORT", 00:37:13.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:13.236 "adrfam": "ipv4", 00:37:13.236 "trsvcid": "$NVMF_PORT", 00:37:13.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:13.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:13.236 "hdgst": ${hdgst:-false}, 00:37:13.236 "ddgst": ${ddgst:-false} 00:37:13.236 }, 00:37:13.236 "method": "bdev_nvme_attach_controller" 00:37:13.236 } 00:37:13.236 EOF 00:37:13.236 )") 00:37:13.236 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:37:13.236 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 
00:37:13.236 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:37:13.236 01:47:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:37:13.236 "params": { 00:37:13.236 "name": "Nvme0", 00:37:13.236 "trtype": "tcp", 00:37:13.236 "traddr": "10.0.0.2", 00:37:13.236 "adrfam": "ipv4", 00:37:13.236 "trsvcid": "4420", 00:37:13.236 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:13.236 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:13.236 "hdgst": false, 00:37:13.236 "ddgst": false 00:37:13.236 }, 00:37:13.236 "method": "bdev_nvme_attach_controller" 00:37:13.236 }' 00:37:13.236 [2024-10-13 01:47:58.749215] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:37:13.236 [2024-10-13 01:47:58.749292] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1778991 ] 00:37:13.236 [2024-10-13 01:47:58.810171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:13.494 [2024-10-13 01:47:58.857640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:13.752 Running I/O for 10 seconds... 00:37:13.752 01:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:13.752 01:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:37:13.752 01:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:37:13.752 01:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:13.752 01:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:13.752 01:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:13.752 01:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:13.752 01:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:37:13.752 01:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:37:13.752 01:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:37:13.752 01:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:37:13.752 01:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:37:13.752 01:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:37:13.752 01:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:37:13.752 01:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:37:13.753 01:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:37:13.753 01:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:13.753 01:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:13.753 01:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:13.753 01:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:37:13.753 01:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:37:13.753 01:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:37:14.012 01:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:37:14.012 01:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:37:14.013 01:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:37:14.013 01:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:37:14.013 01:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.013 01:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:14.013 01:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.013 01:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:37:14.013 01:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:37:14.013 01:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:37:14.013 01:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:37:14.013 01:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:37:14.013 01:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:37:14.013 01:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.013 01:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:14.013 [2024-10-13 01:47:59.548142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:14.013 [2024-10-13 01:47:59.548201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.013 [2024-10-13 01:47:59.548220] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:14.013 [2024-10-13 01:47:59.548246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.013 [2024-10-13 01:47:59.548261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:14.013 [2024-10-13 01:47:59.548277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.013 [2024-10-13 01:47:59.548292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:14.013 [2024-10-13 01:47:59.548306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.013 [2024-10-13 01:47:59.548320] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbe30 is same with the state(6) to be set 00:37:14.013 01:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.013 01:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:37:14.013 01:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:14.013 01:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:14.013 [2024-10-13 01:47:59.557165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.013 [2024-10-13 01:47:59.557193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.013 [2024-10-13 01:47:59.557235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.013 [2024-10-13 01:47:59.557250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.013 [2024-10-13 01:47:59.557267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.013 [2024-10-13 01:47:59.557280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.013 [2024-10-13 01:47:59.557295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.013 [2024-10-13 01:47:59.557309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.013 [2024-10-13 01:47:59.557324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.013 [2024-10-13 01:47:59.557338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.013 [2024-10-13 
01:47:59.557353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.013 [2024-10-13 01:47:59.557366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.013 [2024-10-13 01:47:59.557381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.013 [2024-10-13 01:47:59.557395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.013 [2024-10-13 01:47:59.557410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.013 [2024-10-13 01:47:59.557423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.013 [2024-10-13 01:47:59.557444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.013 [2024-10-13 01:47:59.557459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.013 [2024-10-13 01:47:59.557489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.013 [2024-10-13 01:47:59.557506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.013 [2024-10-13 01:47:59.557521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.013 [2024-10-13 01:47:59.557534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.013 [2024-10-13 01:47:59.557548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.013 [2024-10-13 01:47:59.557562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.013 [2024-10-13 01:47:59.557577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.013 [2024-10-13 01:47:59.557590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.013 [2024-10-13 01:47:59.557605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.013 [2024-10-13 01:47:59.557618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.013 [2024-10-13 01:47:59.557634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.013 [2024-10-13 01:47:59.557647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.013 [2024-10-13 01:47:59.557662] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.013 [2024-10-13 01:47:59.557674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.013 [2024-10-13 01:47:59.557689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.013 [2024-10-13 01:47:59.557702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.013 [2024-10-13 01:47:59.557717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.013 [2024-10-13 01:47:59.557731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.013 [2024-10-13 01:47:59.557745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.013 [2024-10-13 01:47:59.557763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.013 [2024-10-13 01:47:59.557777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.013 [2024-10-13 01:47:59.557791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.013 [2024-10-13 01:47:59.557805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.013 [2024-10-13 01:47:59.557833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.013 [2024-10-13 01:47:59.557849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.013 [2024-10-13 01:47:59.557862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.013 [2024-10-13 01:47:59.557877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.013 [2024-10-13 01:47:59.557891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.013 [2024-10-13 01:47:59.557906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.013 [2024-10-13 01:47:59.557919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.013 [2024-10-13 01:47:59.557933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.013 [2024-10-13 01:47:59.557947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.013 [2024-10-13 01:47:59.557962] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.013 [2024-10-13 01:47:59.557975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.013 [2024-10-13 01:47:59.557990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.013 [2024-10-13 01:47:59.558003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.013 [2024-10-13 01:47:59.558017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.013 [2024-10-13 01:47:59.558031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.013 [2024-10-13 01:47:59.558046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.013 [2024-10-13 01:47:59.558059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.013 [2024-10-13 01:47:59.558074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.013 [2024-10-13 01:47:59.558087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.014 [2024-10-13 01:47:59.558104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.014 [2024-10-13 01:47:59.558118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.014 [2024-10-13 01:47:59.558133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.014 [2024-10-13 01:47:59.558146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.014 [2024-10-13 01:47:59.558161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.014 [2024-10-13 01:47:59.558174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.014 [2024-10-13 01:47:59.558193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.014 [2024-10-13 01:47:59.558207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.014 [2024-10-13 01:47:59.558222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.014 [2024-10-13 01:47:59.558235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.014 [2024-10-13 01:47:59.558250] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.014 [2024-10-13 01:47:59.558263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.014 [2024-10-13 01:47:59.558278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.014 [2024-10-13 01:47:59.558292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.014 [2024-10-13 01:47:59.558306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.014 [2024-10-13 01:47:59.558320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.014 [2024-10-13 01:47:59.558335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.014 [2024-10-13 01:47:59.558348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.014 [2024-10-13 01:47:59.558363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.014 [2024-10-13 01:47:59.558376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.014 [2024-10-13 01:47:59.558391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.014 [2024-10-13 01:47:59.558405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.014 [2024-10-13 01:47:59.558420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.014 [2024-10-13 01:47:59.558433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.014 [2024-10-13 01:47:59.558448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.014 [2024-10-13 01:47:59.558461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.014 [2024-10-13 01:47:59.558483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.014 [2024-10-13 01:47:59.558498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.014 [2024-10-13 01:47:59.558513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.014 [2024-10-13 01:47:59.558526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.014 [2024-10-13 01:47:59.558541] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.014 [2024-10-13 01:47:59.558558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.014 [2024-10-13 01:47:59.558575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.014 [2024-10-13 01:47:59.558588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.014 [2024-10-13 01:47:59.558603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.014 [2024-10-13 01:47:59.558617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.014 [2024-10-13 01:47:59.558632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.014 [2024-10-13 01:47:59.558645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.014 [2024-10-13 01:47:59.558660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.014 [2024-10-13 01:47:59.558673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.014 [2024-10-13 01:47:59.558688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.014 [2024-10-13 01:47:59.558702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.014 [2024-10-13 01:47:59.558717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.014 [2024-10-13 01:47:59.558730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.014 [2024-10-13 01:47:59.558745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.014 [2024-10-13 01:47:59.558758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.014 [2024-10-13 01:47:59.558773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.014 [2024-10-13 01:47:59.558787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.014 [2024-10-13 01:47:59.558803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.014 [2024-10-13 01:47:59.558816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.014 [2024-10-13 01:47:59.558838] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.014 [2024-10-13 01:47:59.558851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.014 [2024-10-13 01:47:59.558867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.014 [2024-10-13 01:47:59.558880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.014 [2024-10-13 01:47:59.558895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.014 [2024-10-13 01:47:59.558908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.014 [2024-10-13 01:47:59.558926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.014 [2024-10-13 01:47:59.558940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.014 [2024-10-13 01:47:59.558955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.014 [2024-10-13 01:47:59.558968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.014 [2024-10-13 01:47:59.558983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.014 [2024-10-13 01:47:59.558996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.014 [2024-10-13 01:47:59.559011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.014 [2024-10-13 01:47:59.559025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.014 [2024-10-13 01:47:59.559040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.014 [2024-10-13 01:47:59.559053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.014 [2024-10-13 01:47:59.559068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:14.014 [2024-10-13 01:47:59.559081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:14.014 [2024-10-13 01:47:59.559166] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12b80c0 was disconnected and freed. reset controller. 
00:37:14.014 [2024-10-13 01:47:59.559219] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12bbe30 (9): Bad file descriptor 00:37:14.014 01:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:14.014 01:47:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:37:14.014 [2024-10-13 01:47:59.560329] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:37:14.014 task offset: 81920 on job bdev=Nvme0n1 fails 00:37:14.014 00:37:14.014 Latency(us) 00:37:14.014 [2024-10-12T23:47:59.592Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:14.014 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:14.014 Job: Nvme0n1 ended in about 0.41 seconds with error 00:37:14.014 Verification LBA range: start 0x0 length 0x400 00:37:14.014 Nvme0n1 : 0.41 1575.89 98.49 157.59 0.00 35869.63 2415.12 34369.99 00:37:14.014 [2024-10-12T23:47:59.592Z] =================================================================================================================== 00:37:14.014 [2024-10-12T23:47:59.592Z] Total : 1575.89 98.49 157.59 0.00 35869.63 2415.12 34369.99 00:37:14.014 [2024-10-13 01:47:59.562189] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:37:14.273 [2024-10-13 01:47:59.613846] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:37:15.206 01:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1778991 00:37:15.206 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1778991) - No such process 00:37:15.206 01:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:37:15.207 01:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:37:15.207 01:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:37:15.207 01:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:37:15.207 01:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:37:15.207 01:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:37:15.207 01:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:37:15.207 01:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:37:15.207 { 00:37:15.207 "params": { 00:37:15.207 "name": "Nvme$subsystem", 00:37:15.207 "trtype": "$TEST_TRANSPORT", 00:37:15.207 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:15.207 "adrfam": "ipv4", 00:37:15.207 "trsvcid": "$NVMF_PORT", 00:37:15.207 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:15.207 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:15.207 "hdgst": ${hdgst:-false}, 00:37:15.207 "ddgst": 
${ddgst:-false} 00:37:15.207 }, 00:37:15.207 "method": "bdev_nvme_attach_controller" 00:37:15.207 } 00:37:15.207 EOF 00:37:15.207 )") 00:37:15.207 01:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:37:15.207 01:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:37:15.207 01:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:37:15.207 01:48:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:37:15.207 "params": { 00:37:15.207 "name": "Nvme0", 00:37:15.207 "trtype": "tcp", 00:37:15.207 "traddr": "10.0.0.2", 00:37:15.207 "adrfam": "ipv4", 00:37:15.207 "trsvcid": "4420", 00:37:15.207 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:15.207 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:15.207 "hdgst": false, 00:37:15.207 "ddgst": false 00:37:15.207 }, 00:37:15.207 "method": "bdev_nvme_attach_controller" 00:37:15.207 }' 00:37:15.207 [2024-10-13 01:48:00.607745] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:37:15.207 [2024-10-13 01:48:00.607858] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1779191 ] 00:37:15.207 [2024-10-13 01:48:00.668177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:15.207 [2024-10-13 01:48:00.715280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:15.466 Running I/O for 1 seconds... 00:37:16.401 1635.00 IOPS, 102.19 MiB/s 00:37:16.401 Latency(us) 00:37:16.401 [2024-10-12T23:48:01.979Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:16.401 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:16.401 Verification LBA range: start 0x0 length 0x400 00:37:16.401 Nvme0n1 : 1.02 1662.95 103.93 0.00 0.00 37694.27 2888.44 33593.27 00:37:16.401 [2024-10-12T23:48:01.979Z] =================================================================================================================== 00:37:16.401 [2024-10-12T23:48:01.979Z] Total : 1662.95 103.93 0.00 0.00 37694.27 2888.44 33593.27 00:37:16.659 01:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:37:16.659 01:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:37:16.659 01:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:37:16.659 01:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:37:16.659 01:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:37:16.659 01:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:37:16.659 01:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:37:16.659 01:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:37:16.659 01:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:37:16.659 01:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:16.659 01:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:16.659 rmmod nvme_tcp 00:37:16.659 rmmod nvme_fabrics 00:37:16.659 rmmod nvme_keyring 00:37:16.659 01:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:16.659 01:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:37:16.659 01:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:37:16.659 01:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 1778828 ']' 00:37:16.659 01:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 1778828 00:37:16.659 01:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 1778828 ']' 00:37:16.659 01:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 1778828 00:37:16.659 01:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:37:16.659 01:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:16.659 01:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1778828 00:37:16.659 01:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:16.659 01:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:16.659 01:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1778828' 00:37:16.659 killing process with pid 1778828 00:37:16.659 01:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 1778828 00:37:16.659 01:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 1778828 00:37:16.917 [2024-10-13 01:48:02.407864] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:37:16.917 01:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:37:16.917 01:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:37:16.917 01:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:37:16.917 01:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:37:16.917 01:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:37:16.917 01:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:37:16.917 01:48:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:37:16.917 01:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:16.917 01:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:16.917 01:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:16.917 01:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:16.917 01:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:19.448 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:19.448 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:37:19.448 00:37:19.448 real 0m8.419s 00:37:19.448 user 0m16.624s 00:37:19.448 sys 0m3.675s 00:37:19.448 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:19.448 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:19.448 ************************************ 00:37:19.448 END TEST nvmf_host_management 00:37:19.448 ************************************ 00:37:19.448 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:37:19.448 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:19.448 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:19.448 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:19.448 ************************************ 00:37:19.448 START TEST nvmf_lvol 00:37:19.448 ************************************ 00:37:19.448 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:37:19.448 * Looking for test storage... 
00:37:19.448 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:19.448 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:19.448 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:37:19.448 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:19.448 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:19.448 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:19.448 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:19.448 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:19.448 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:37:19.448 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:37:19.448 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:37:19.448 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:37:19.448 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:37:19.448 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:37:19.448 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:37:19.448 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:19.448 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:37:19.448 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:37:19.448 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:19.448 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:19.448 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:37:19.448 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:37:19.448 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:19.448 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:37:19.448 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:37:19.448 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:37:19.448 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:37:19.448 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:19.448 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:37:19.448 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:37:19.448 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:19.448 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:19.448 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:37:19.448 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:19.448 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:19.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:19.448 --rc genhtml_branch_coverage=1 00:37:19.448 --rc genhtml_function_coverage=1 00:37:19.448 --rc genhtml_legend=1 00:37:19.448 --rc geninfo_all_blocks=1 00:37:19.448 --rc geninfo_unexecuted_blocks=1 00:37:19.448 00:37:19.448 ' 00:37:19.448 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:19.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:19.448 --rc genhtml_branch_coverage=1 00:37:19.449 --rc genhtml_function_coverage=1 00:37:19.449 --rc genhtml_legend=1 00:37:19.449 --rc geninfo_all_blocks=1 00:37:19.449 --rc geninfo_unexecuted_blocks=1 00:37:19.449 00:37:19.449 ' 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:19.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:19.449 --rc genhtml_branch_coverage=1 00:37:19.449 --rc genhtml_function_coverage=1 00:37:19.449 --rc genhtml_legend=1 00:37:19.449 --rc geninfo_all_blocks=1 00:37:19.449 --rc geninfo_unexecuted_blocks=1 00:37:19.449 00:37:19.449 ' 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:19.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:19.449 --rc genhtml_branch_coverage=1 00:37:19.449 --rc genhtml_function_coverage=1 00:37:19.449 --rc genhtml_legend=1 00:37:19.449 --rc geninfo_all_blocks=1 00:37:19.449 --rc geninfo_unexecuted_blocks=1 00:37:19.449 00:37:19.449 ' 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:19.449 01:48:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:37:19.449 01:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:21.351 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:21.351 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:37:21.351 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:21.351 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:21.351 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:21.351 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:37:21.351 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:21.351 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:37:21.351 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:21.351 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:37:21.351 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:37:21.351 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:37:21.351 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:37:21.351 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:37:21.351 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:37:21.351 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:21.351 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:21.351 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:21.351 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:21.351 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:21.351 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:21.351 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:21.351 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:21.351 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:21.351 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:21.351 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:21.351 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:21.351 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:21.351 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:21.351 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:21.351 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:21.351 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:21.351 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:21.351 01:48:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:21.351 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:21.351 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:21.351 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:21.351 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:21.351 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:21.351 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:21.351 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:21.351 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:21.351 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:21.351 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:21.351 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:21.351 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:21.352 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for 
pci in "${pci_devs[@]}" 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:21.352 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:21.352 
01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:21.352 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:21.352 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:37:21.352 00:37:21.352 --- 10.0.0.2 ping statistics --- 00:37:21.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:21.352 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:21.352 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:21.352 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:37:21.352 00:37:21.352 --- 10.0.0.1 ping statistics --- 00:37:21.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:21.352 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=1781455 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 1781455 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 1781455 ']' 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:21.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:21.352 01:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:21.352 [2024-10-13 01:48:06.823878] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
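The nvmftestinit trace above moves one of the two discovered E810 ports into a private network namespace and addresses the pair, so the target (10.0.0.2, inside cvl_0_0_ns_spdk) and the initiator (10.0.0.1, default namespace) can exchange NVMe/TCP on a single host. A minimal sketch of that wiring, assuming the cvl_0_0/cvl_0_1 interface names this run discovered:

# Sketch of the namespace wiring performed by nvmftestinit above (interface names as in this run)
ip netns add cvl_0_0_ns_spdk                       # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address stays in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# the harness tags its iptables rule with an SPDK_NVMF comment so teardown can strip it later
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator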
00:37:21.352 [2024-10-13 01:48:06.824998] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:37:21.352 [2024-10-13 01:48:06.825052] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:21.352 [2024-10-13 01:48:06.899347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:21.611 [2024-10-13 01:48:06.951143] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:21.611 [2024-10-13 01:48:06.951206] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:21.611 [2024-10-13 01:48:06.951242] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:21.611 [2024-10-13 01:48:06.951260] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:21.611 [2024-10-13 01:48:06.951275] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:21.611 [2024-10-13 01:48:06.953135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:21.611 [2024-10-13 01:48:06.953164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:21.611 [2024-10-13 01:48:06.953169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:21.611 [2024-10-13 01:48:07.042577] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:21.611 [2024-10-13 01:48:07.042810] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:21.611 [2024-10-13 01:48:07.042837] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:21.611 [2024-10-13 01:48:07.043117] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
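With the reactors and poll groups now in interrupt mode, the body of nvmf_lvol.sh (traced below) provisions storage and exports it over NVMe/TCP entirely through rpc.py. A consolidated sketch of that sequence; the shell variables stand in for the UUIDs this particular run received:

# Condensed view of the RPC calls traced below; rpc points at scripts/rpc.py in the SPDK tree
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                                 # Malloc0
$rpc bdev_malloc_create 64 512                                 # Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                 # lvstore on the raid0 bdev
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                # 20 MiB logical volume

$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# while spdk_nvme_perf writes to the namespace, the lvol features are exercised under I/O
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"

# teardown, mirroring the end of the trace
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc bdev_lvol_delete "$lvol"
$rpc bdev_lvol_delete_lvstore -u "$lvs"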
00:37:21.611 01:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:21.611 01:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:37:21.611 01:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:21.611 01:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:21.611 01:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:21.611 01:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:21.611 01:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:21.869 [2024-10-13 01:48:07.346012] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:21.869 01:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:22.127 01:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:37:22.127 01:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:22.718 01:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:37:22.718 01:48:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:37:22.718 01:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:37:22.976 01:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=a82da2a7-f511-4f11-8956-1e6134ad0af6 00:37:22.976 01:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a82da2a7-f511-4f11-8956-1e6134ad0af6 lvol 20 00:37:23.541 01:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=94788b03-e8d9-434b-9058-cccfa32b0e18 00:37:23.541 01:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:23.541 01:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 94788b03-e8d9-434b-9058-cccfa32b0e18 00:37:24.107 01:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:24.107 [2024-10-13 01:48:09.638092] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:37:24.107 01:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:24.365 01:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1782382 00:37:24.365 01:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:37:24.365 01:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:37:25.738 01:48:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 94788b03-e8d9-434b-9058-cccfa32b0e18 MY_SNAPSHOT 00:37:25.738 01:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=02223acd-ac4f-49ea-8aa4-7b480b744469 00:37:25.738 01:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 94788b03-e8d9-434b-9058-cccfa32b0e18 30 00:37:25.996 01:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 02223acd-ac4f-49ea-8aa4-7b480b744469 MY_CLONE 00:37:26.562 01:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=0d1b05de-7584-427d-b31c-962b0d5742ce 00:37:26.562 01:48:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 0d1b05de-7584-427d-b31c-962b0d5742ce 00:37:27.127 01:48:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1782382 00:37:35.237 Initializing NVMe Controllers 00:37:35.237 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:37:35.237 Controller IO queue size 128, less than required. 00:37:35.237 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:35.237 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:37:35.237 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:37:35.237 Initialization complete. Launching workers. 
00:37:35.237 ======================================================== 00:37:35.237 Latency(us) 00:37:35.237 Device Information : IOPS MiB/s Average min max 00:37:35.237 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10051.90 39.27 12736.78 1281.20 60455.51 00:37:35.237 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10406.40 40.65 12302.51 3311.99 62227.05 00:37:35.237 ======================================================== 00:37:35.237 Total : 20458.30 79.92 12515.88 1281.20 62227.05 00:37:35.237 00:37:35.237 01:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:35.237 01:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 94788b03-e8d9-434b-9058-cccfa32b0e18 00:37:35.495 01:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a82da2a7-f511-4f11-8956-1e6134ad0af6 00:37:35.753 01:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:37:35.753 01:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:37:35.753 01:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:37:35.753 01:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:37:35.753 01:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:37:35.753 01:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:35.753 01:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:37:35.753 01:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:35.753 01:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:35.753 rmmod nvme_tcp 00:37:35.753 rmmod nvme_fabrics 00:37:35.753 rmmod nvme_keyring 00:37:35.753 01:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:35.753 01:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:37:35.753 01:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:37:35.753 01:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 1781455 ']' 00:37:35.753 01:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 1781455 00:37:35.753 01:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 1781455 ']' 00:37:35.753 01:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 1781455 00:37:35.753 01:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:37:35.753 01:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:35.753 01:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1781455 00:37:35.753 01:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:35.753 01:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:35.753 01:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1781455' 00:37:35.753 killing process with pid 1781455 00:37:35.753 01:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 1781455 00:37:35.753 01:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 1781455 00:37:36.013 01:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:37:36.013 01:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:37:36.013 01:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:37:36.013 01:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:37:36.013 01:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:37:36.013 01:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:37:36.013 01:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:37:36.013 01:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:36.013 01:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:36.013 01:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:36.013 01:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:36.013 01:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:38.625 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:38.625 00:37:38.625 real 0m19.049s 00:37:38.625 user 0m55.721s 00:37:38.625 sys 0m7.957s 00:37:38.625 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:38.625 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:38.625 ************************************ 00:37:38.625 END TEST nvmf_lvol 00:37:38.625 ************************************ 00:37:38.625 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:37:38.625 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:37:38.625 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:38.625 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:38.625 ************************************ 00:37:38.625 START TEST nvmf_lvs_grow 00:37:38.625 
************************************ 00:37:38.625 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:37:38.625 * Looking for test storage... 00:37:38.625 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:38.625 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:38.625 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:37:38.625 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:38.625 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:38.625 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:38.625 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:38.625 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:38.625 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:37:38.625 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:37:38.625 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:37:38.625 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:37:38.625 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:37:38.625 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:37:38.625 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:37:38.625 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:38.625 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:37:38.625 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:37:38.625 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:38.625 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:38.625 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:37:38.625 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:37:38.625 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:38.625 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:37:38.625 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:37:38.625 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:37:38.625 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:37:38.625 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:38.625 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:37:38.625 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:37:38.625 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:38.625 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:38.625 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:37:38.625 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:38.625 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:38.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:38.625 --rc genhtml_branch_coverage=1 00:37:38.625 --rc genhtml_function_coverage=1 00:37:38.625 --rc genhtml_legend=1 00:37:38.625 --rc geninfo_all_blocks=1 00:37:38.625 --rc geninfo_unexecuted_blocks=1 00:37:38.625 00:37:38.625 ' 00:37:38.625 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:38.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:38.625 --rc genhtml_branch_coverage=1 00:37:38.625 --rc genhtml_function_coverage=1 00:37:38.625 --rc genhtml_legend=1 00:37:38.625 --rc geninfo_all_blocks=1 00:37:38.625 --rc geninfo_unexecuted_blocks=1 00:37:38.625 00:37:38.625 ' 00:37:38.625 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:38.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:38.625 --rc genhtml_branch_coverage=1 00:37:38.625 --rc genhtml_function_coverage=1 00:37:38.625 --rc genhtml_legend=1 00:37:38.625 --rc geninfo_all_blocks=1 00:37:38.625 --rc geninfo_unexecuted_blocks=1 00:37:38.625 00:37:38.625 ' 00:37:38.625 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:38.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:38.625 --rc genhtml_branch_coverage=1 00:37:38.625 --rc genhtml_function_coverage=1 00:37:38.625 --rc genhtml_legend=1 00:37:38.625 --rc geninfo_all_blocks=1 00:37:38.625 --rc geninfo_unexecuted_blocks=1 00:37:38.625 00:37:38.625 ' 00:37:38.625 01:48:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:38.625 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:37:38.625 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:38.625 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:38.625 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:38.625 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:38.626 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:38.626 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:38.626 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:38.626 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:38.626 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:38.626 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:38.626 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:38.626 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:38.626 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:38.626 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:38.626 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:38.626 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:38.626 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:38.626 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:37:38.626 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:38.626 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:38.626 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:38.626 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:38.626 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:38.626 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:38.626 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:37:38.626 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:38.626 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:37:38.626 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:38.626 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:38.626 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:38.626 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:38.626 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:37:38.626 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:38.626 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:38.626 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:38.626 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:38.626 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:38.626 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:38.626 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:38.626 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:37:38.626 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:37:38.626 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:38.626 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:37:38.626 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:37:38.626 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:37:38.626 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:38.626 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:38.626 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:38.626 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:37:38.626 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:37:38.626 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:37:38.626 01:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:40.551 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:40.551 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:37:40.551 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:40.551 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:40.551 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:40.551 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:40.551 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:40.551 01:48:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:37:40.551 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:40.551 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:37:40.551 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:37:40.551 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:37:40.551 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:37:40.551 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:37:40.551 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:37:40.551 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:40.551 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:40.551 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:40.552 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:40.552 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:40.552 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for 
pci in "${pci_devs[@]}" 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:40.552 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:40.552 01:48:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:40.552 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:40.552 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.315 ms 00:37:40.552 00:37:40.552 --- 10.0.0.2 ping statistics --- 00:37:40.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:40.552 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:37:40.552 01:48:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:40.552 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:40.552 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:37:40.552 00:37:40.552 --- 10.0.0.1 ping statistics --- 00:37:40.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:40.552 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:37:40.552 01:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:40.552 01:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:37:40.552 01:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:37:40.552 01:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:40.552 01:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:37:40.552 01:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:37:40.552 01:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:40.552 01:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:37:40.552 01:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:37:40.552 01:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:37:40.552 01:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:37:40.552 01:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:40.552 01:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:40.552 01:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=1785633 00:37:40.552 01:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:37:40.553 01:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 1785633 00:37:40.553 01:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 1785633 ']' 00:37:40.553 01:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:40.553 01:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:40.553 01:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:40.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:40.553 01:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:40.553 01:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:40.553 [2024-10-13 01:48:26.081171] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:37:40.553 [2024-10-13 01:48:26.082306] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:37:40.553 [2024-10-13 01:48:26.082364] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:40.811 [2024-10-13 01:48:26.150119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:40.811 [2024-10-13 01:48:26.196901] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:40.811 [2024-10-13 01:48:26.196971] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:40.811 [2024-10-13 01:48:26.196988] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:40.811 [2024-10-13 01:48:26.197001] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:40.811 [2024-10-13 01:48:26.197013] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:40.811 [2024-10-13 01:48:26.197647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:40.811 [2024-10-13 01:48:26.285542] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:40.811 [2024-10-13 01:48:26.285880] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:40.811 01:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:40.811 01:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:37:40.811 01:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:37:40.811 01:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:40.811 01:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:40.811 01:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:40.811 01:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:41.070 [2024-10-13 01:48:26.598304] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:41.070 01:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:37:41.070 01:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:37:41.070 01:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:41.070 01:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:41.070 ************************************ 00:37:41.070 START TEST lvs_grow_clean 00:37:41.070 ************************************ 00:37:41.070 01:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # 
lvs_grow 00:37:41.070 01:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:37:41.070 01:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:37:41.070 01:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:37:41.070 01:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:37:41.070 01:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:37:41.070 01:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:37:41.070 01:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:41.070 01:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:41.328 01:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:41.586 01:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:37:41.586 01:48:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:37:41.844 01:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=fa86097f-9f73-4559-8f76-8a1612fea47f 00:37:41.844 01:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa86097f-9f73-4559-8f76-8a1612fea47f 00:37:41.845 01:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:37:42.102 01:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:37:42.102 01:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:37:42.102 01:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fa86097f-9f73-4559-8f76-8a1612fea47f lvol 150 00:37:42.360 01:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=25340b54-416d-4ed0-93de-de5d25f4a3b2 00:37:42.360 01:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:42.360 01:48:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:37:42.617 [2024-10-13 01:48:28.030126] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:37:42.617 [2024-10-13 01:48:28.030242] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:37:42.617 true 00:37:42.617 01:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa86097f-9f73-4559-8f76-8a1612fea47f 00:37:42.617 01:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:37:42.875 01:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:37:42.875 01:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:43.132 01:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 25340b54-416d-4ed0-93de-de5d25f4a3b2 00:37:43.391 01:48:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:43.649 [2024-10-13 01:48:29.134411] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:43.649 01:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:43.907 01:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1786066 00:37:43.908 01:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:37:43.908 01:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:43.908 01:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1786066 /var/tmp/bdevperf.sock 00:37:43.908 01:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 1786066 ']' 00:37:43.908 01:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:37:43.908 01:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:43.908 01:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:43.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:43.908 01:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:43.908 01:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:37:43.908 [2024-10-13 01:48:29.480855] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:37:43.908 [2024-10-13 01:48:29.480936] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1786066 ] 00:37:44.166 [2024-10-13 01:48:29.542066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:44.166 [2024-10-13 01:48:29.590368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:44.166 01:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:44.166 01:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:37:44.166 01:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:37:44.733 Nvme0n1 00:37:44.733 01:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:37:44.991 [ 00:37:44.991 { 00:37:44.991 "name": "Nvme0n1", 00:37:44.991 "aliases": [ 00:37:44.991 "25340b54-416d-4ed0-93de-de5d25f4a3b2" 00:37:44.991 ], 00:37:44.991 "product_name": "NVMe disk", 00:37:44.991 "block_size": 4096, 00:37:44.991 "num_blocks": 38912, 00:37:44.991 "uuid": "25340b54-416d-4ed0-93de-de5d25f4a3b2", 00:37:44.991 "numa_id": 0, 00:37:44.991 "assigned_rate_limits": { 00:37:44.991 "rw_ios_per_sec": 0, 00:37:44.991 "rw_mbytes_per_sec": 0, 00:37:44.991 "r_mbytes_per_sec": 0, 00:37:44.991 "w_mbytes_per_sec": 0 00:37:44.991 }, 00:37:44.991 "claimed": false, 00:37:44.991 "zoned": false, 00:37:44.991 "supported_io_types": { 00:37:44.991 "read": true, 00:37:44.991 "write": true, 00:37:44.991 "unmap": true, 00:37:44.991 "flush": true, 00:37:44.991 "reset": true, 00:37:44.991 "nvme_admin": true, 00:37:44.991 "nvme_io": true, 00:37:44.991 "nvme_io_md": false, 00:37:44.991 "write_zeroes": true, 00:37:44.991 "zcopy": false, 00:37:44.991 "get_zone_info": false, 00:37:44.991 "zone_management": false, 00:37:44.991 "zone_append": false, 00:37:44.991 "compare": true, 00:37:44.991 "compare_and_write": true, 00:37:44.991 "abort": true, 00:37:44.991 "seek_hole": false, 00:37:44.991 "seek_data": false, 00:37:44.991 "copy": true, 
00:37:44.991 "nvme_iov_md": false 00:37:44.991 }, 00:37:44.991 "memory_domains": [ 00:37:44.991 { 00:37:44.991 "dma_device_id": "system", 00:37:44.991 "dma_device_type": 1 00:37:44.991 } 00:37:44.991 ], 00:37:44.991 "driver_specific": { 00:37:44.991 "nvme": [ 00:37:44.991 { 00:37:44.991 "trid": { 00:37:44.991 "trtype": "TCP", 00:37:44.991 "adrfam": "IPv4", 00:37:44.991 "traddr": "10.0.0.2", 00:37:44.991 "trsvcid": "4420", 00:37:44.991 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:37:44.991 }, 00:37:44.991 "ctrlr_data": { 00:37:44.991 "cntlid": 1, 00:37:44.991 "vendor_id": "0x8086", 00:37:44.991 "model_number": "SPDK bdev Controller", 00:37:44.991 "serial_number": "SPDK0", 00:37:44.991 "firmware_revision": "25.01", 00:37:44.991 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:44.991 "oacs": { 00:37:44.991 "security": 0, 00:37:44.991 "format": 0, 00:37:44.991 "firmware": 0, 00:37:44.991 "ns_manage": 0 00:37:44.991 }, 00:37:44.992 "multi_ctrlr": true, 00:37:44.992 "ana_reporting": false 00:37:44.992 }, 00:37:44.992 "vs": { 00:37:44.992 "nvme_version": "1.3" 00:37:44.992 }, 00:37:44.992 "ns_data": { 00:37:44.992 "id": 1, 00:37:44.992 "can_share": true 00:37:44.992 } 00:37:44.992 } 00:37:44.992 ], 00:37:44.992 "mp_policy": "active_passive" 00:37:44.992 } 00:37:44.992 } 00:37:44.992 ] 00:37:44.992 01:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1786200 00:37:44.992 01:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:37:44.992 01:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:37:45.250 Running I/O for 10 seconds... 
00:37:46.184 Latency(us) 00:37:46.184 [2024-10-12T23:48:31.762Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:46.184 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:46.184 Nvme0n1 : 1.00 14141.00 55.24 0.00 0.00 0.00 0.00 0.00 00:37:46.184 [2024-10-12T23:48:31.762Z] =================================================================================================================== 00:37:46.184 [2024-10-12T23:48:31.762Z] Total : 14141.00 55.24 0.00 0.00 0.00 0.00 0.00 00:37:46.184 00:37:47.118 01:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u fa86097f-9f73-4559-8f76-8a1612fea47f 00:37:47.118 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:47.118 Nvme0n1 : 2.00 14439.00 56.40 0.00 0.00 0.00 0.00 0.00 00:37:47.118 [2024-10-12T23:48:32.696Z] =================================================================================================================== 00:37:47.118 [2024-10-12T23:48:32.696Z] Total : 14439.00 56.40 0.00 0.00 0.00 0.00 0.00 00:37:47.118 00:37:47.376 true 00:37:47.376 01:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa86097f-9f73-4559-8f76-8a1612fea47f 00:37:47.376 01:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:37:47.634 01:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:37:47.634 01:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:37:47.634 01:48:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1786200 00:37:48.200 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:48.200 Nvme0n1 : 3.00 14387.00 56.20 0.00 0.00 0.00 0.00 0.00 00:37:48.200 [2024-10-12T23:48:33.778Z] =================================================================================================================== 00:37:48.200 [2024-10-12T23:48:33.778Z] Total : 14387.00 56.20 0.00 0.00 0.00 0.00 0.00 00:37:48.200 00:37:49.134 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:49.134 Nvme0n1 : 4.00 14583.75 56.97 0.00 0.00 0.00 0.00 0.00 00:37:49.134 [2024-10-12T23:48:34.712Z] =================================================================================================================== 00:37:49.134 [2024-10-12T23:48:34.712Z] Total : 14583.75 56.97 0.00 0.00 0.00 0.00 0.00 00:37:49.134 00:37:50.067 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:50.067 Nvme0n1 : 5.00 14576.00 56.94 0.00 0.00 0.00 0.00 0.00 00:37:50.067 [2024-10-12T23:48:35.645Z] =================================================================================================================== 00:37:50.067 [2024-10-12T23:48:35.645Z] Total : 14576.00 56.94 0.00 0.00 0.00 0.00 0.00 00:37:50.067 00:37:51.442 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:51.442 Nvme0n1 : 6.00 14565.67 56.90 0.00 0.00 0.00 0.00 0.00 00:37:51.442 [2024-10-12T23:48:37.020Z] 
=================================================================================================================== 00:37:51.442 [2024-10-12T23:48:37.020Z] Total : 14565.67 56.90 0.00 0.00 0.00 0.00 0.00 00:37:51.442 00:37:52.377 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:52.377 Nvme0n1 : 7.00 14568.00 56.91 0.00 0.00 0.00 0.00 0.00 00:37:52.377 [2024-10-12T23:48:37.955Z] =================================================================================================================== 00:37:52.377 [2024-10-12T23:48:37.955Z] Total : 14568.00 56.91 0.00 0.00 0.00 0.00 0.00 00:37:52.377 00:37:53.309 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:53.309 Nvme0n1 : 8.00 14553.12 56.85 0.00 0.00 0.00 0.00 0.00 00:37:53.309 [2024-10-12T23:48:38.887Z] =================================================================================================================== 00:37:53.309 [2024-10-12T23:48:38.887Z] Total : 14553.12 56.85 0.00 0.00 0.00 0.00 0.00 00:37:53.309 00:37:54.244 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:54.244 Nvme0n1 : 9.00 14549.00 56.83 0.00 0.00 0.00 0.00 0.00 00:37:54.244 [2024-10-12T23:48:39.822Z] =================================================================================================================== 00:37:54.244 [2024-10-12T23:48:39.822Z] Total : 14549.00 56.83 0.00 0.00 0.00 0.00 0.00 00:37:54.244 00:37:55.177 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:55.177 Nvme0n1 : 10.00 14558.00 56.87 0.00 0.00 0.00 0.00 0.00 00:37:55.177 [2024-10-12T23:48:40.755Z] =================================================================================================================== 00:37:55.177 [2024-10-12T23:48:40.755Z] Total : 14558.00 56.87 0.00 0.00 0.00 0.00 0.00 00:37:55.177 00:37:55.177 00:37:55.177 Latency(us) 00:37:55.177 [2024-10-12T23:48:40.755Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:55.177 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:55.177 Nvme0n1 : 10.01 14561.70 56.88 0.00 0.00 8784.47 4369.07 19612.25 00:37:55.177 [2024-10-12T23:48:40.755Z] =================================================================================================================== 00:37:55.177 [2024-10-12T23:48:40.755Z] Total : 14561.70 56.88 0.00 0.00 8784.47 4369.07 19612.25 00:37:55.177 { 00:37:55.177 "results": [ 00:37:55.177 { 00:37:55.177 "job": "Nvme0n1", 00:37:55.177 "core_mask": "0x2", 00:37:55.177 "workload": "randwrite", 00:37:55.177 "status": "finished", 00:37:55.177 "queue_depth": 128, 00:37:55.177 "io_size": 4096, 00:37:55.177 "runtime": 10.006248, 00:37:55.177 "iops": 14561.701848684941, 00:37:55.177 "mibps": 56.88164784642555, 00:37:55.177 "io_failed": 0, 00:37:55.177 "io_timeout": 0, 00:37:55.177 "avg_latency_us": 8784.469367430955, 00:37:55.177 "min_latency_us": 4369.066666666667, 00:37:55.177 "max_latency_us": 19612.254814814816 00:37:55.177 } 00:37:55.177 ], 00:37:55.177 "core_count": 1 00:37:55.177 } 00:37:55.177 01:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1786066 00:37:55.177 01:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 1786066 ']' 00:37:55.177 01:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 1786066 
00:37:55.177 01:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:37:55.177 01:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:55.177 01:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1786066 00:37:55.177 01:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:55.177 01:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:55.177 01:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1786066' 00:37:55.177 killing process with pid 1786066 00:37:55.177 01:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 1786066 00:37:55.177 Received shutdown signal, test time was about 10.000000 seconds 00:37:55.177 00:37:55.177 Latency(us) 00:37:55.177 [2024-10-12T23:48:40.755Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:55.177 [2024-10-12T23:48:40.755Z] =================================================================================================================== 00:37:55.177 [2024-10-12T23:48:40.755Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:55.177 01:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 1786066 00:37:55.435 01:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:55.693 01:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:55.952 01:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa86097f-9f73-4559-8f76-8a1612fea47f 00:37:55.952 01:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:37:56.210 01:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:37:56.210 01:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:37:56.210 01:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:37:56.469 [2024-10-13 01:48:42.002193] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:37:56.469 01:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa86097f-9f73-4559-8f76-8a1612fea47f 
00:37:56.469 01:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:37:56.469 01:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa86097f-9f73-4559-8f76-8a1612fea47f 00:37:56.469 01:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:56.469 01:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:56.469 01:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:56.469 01:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:56.469 01:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:56.469 01:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:56.469 01:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:56.469 01:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:37:56.469 01:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa86097f-9f73-4559-8f76-8a1612fea47f 00:37:56.727 request: 00:37:56.727 { 00:37:56.727 "uuid": "fa86097f-9f73-4559-8f76-8a1612fea47f", 00:37:56.727 "method": "bdev_lvol_get_lvstores", 00:37:56.727 "req_id": 1 00:37:56.727 } 00:37:56.727 Got JSON-RPC error response 00:37:56.727 response: 00:37:56.727 { 00:37:56.727 "code": -19, 00:37:56.727 "message": "No such device" 00:37:56.727 } 00:37:56.985 01:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:37:56.985 01:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:56.985 01:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:56.985 01:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:56.985 01:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:57.244 aio_bdev 00:37:57.244 01:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
25340b54-416d-4ed0-93de-de5d25f4a3b2 00:37:57.244 01:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=25340b54-416d-4ed0-93de-de5d25f4a3b2 00:37:57.244 01:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:37:57.244 01:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:37:57.244 01:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:37:57.244 01:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:37:57.244 01:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:57.502 01:48:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 25340b54-416d-4ed0-93de-de5d25f4a3b2 -t 2000 00:37:57.760 [ 00:37:57.760 { 00:37:57.760 "name": "25340b54-416d-4ed0-93de-de5d25f4a3b2", 00:37:57.760 "aliases": [ 00:37:57.760 "lvs/lvol" 00:37:57.760 ], 00:37:57.760 "product_name": "Logical Volume", 00:37:57.760 "block_size": 4096, 00:37:57.760 "num_blocks": 38912, 00:37:57.760 "uuid": "25340b54-416d-4ed0-93de-de5d25f4a3b2", 00:37:57.760 "assigned_rate_limits": { 00:37:57.760 "rw_ios_per_sec": 0, 00:37:57.760 "rw_mbytes_per_sec": 0, 00:37:57.760 "r_mbytes_per_sec": 0, 00:37:57.760 "w_mbytes_per_sec": 0 00:37:57.760 }, 00:37:57.760 "claimed": false, 00:37:57.760 "zoned": false, 00:37:57.760 "supported_io_types": { 00:37:57.760 "read": true, 00:37:57.760 "write": true, 00:37:57.760 "unmap": true, 00:37:57.760 "flush": false, 00:37:57.760 "reset": true, 00:37:57.760 "nvme_admin": false, 00:37:57.760 "nvme_io": false, 00:37:57.760 "nvme_io_md": false, 00:37:57.760 "write_zeroes": true, 00:37:57.760 "zcopy": false, 00:37:57.760 "get_zone_info": false, 00:37:57.760 "zone_management": false, 00:37:57.760 "zone_append": false, 00:37:57.760 "compare": false, 00:37:57.760 "compare_and_write": false, 00:37:57.760 "abort": false, 00:37:57.760 "seek_hole": true, 00:37:57.760 "seek_data": true, 00:37:57.760 "copy": false, 00:37:57.760 "nvme_iov_md": false 00:37:57.760 }, 00:37:57.760 "driver_specific": { 00:37:57.760 "lvol": { 00:37:57.760 "lvol_store_uuid": "fa86097f-9f73-4559-8f76-8a1612fea47f", 00:37:57.760 "base_bdev": "aio_bdev", 00:37:57.760 "thin_provision": false, 00:37:57.760 "num_allocated_clusters": 38, 00:37:57.760 "snapshot": false, 00:37:57.760 "clone": false, 00:37:57.760 "esnap_clone": false 00:37:57.760 } 00:37:57.760 } 00:37:57.760 } 00:37:57.760 ] 00:37:57.760 01:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:37:57.760 01:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa86097f-9f73-4559-8f76-8a1612fea47f 00:37:57.760 01:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:37:58.018 01:48:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:37:58.018 01:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa86097f-9f73-4559-8f76-8a1612fea47f 00:37:58.018 01:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:37:58.277 01:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:37:58.277 01:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 25340b54-416d-4ed0-93de-de5d25f4a3b2 00:37:58.535 01:48:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fa86097f-9f73-4559-8f76-8a1612fea47f 00:37:58.793 01:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:37:59.051 01:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:59.051 00:37:59.051 real 0m17.940s 00:37:59.051 user 0m17.512s 00:37:59.051 sys 0m1.844s 00:37:59.051 01:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:59.051 01:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:37:59.051 ************************************ 00:37:59.051 END TEST lvs_grow_clean 00:37:59.051 ************************************ 00:37:59.051 01:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:37:59.051 01:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:37:59.051 01:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:59.051 01:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:59.310 ************************************ 00:37:59.310 START TEST lvs_grow_dirty 00:37:59.310 ************************************ 00:37:59.310 01:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:37:59.310 01:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:37:59.310 01:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:37:59.310 01:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:37:59.310 01:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:37:59.310 01:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:37:59.310 01:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:37:59.310 01:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:59.310 01:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:59.310 01:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:59.568 01:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:37:59.568 01:48:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:37:59.827 01:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=0982b5ce-6812-4c70-99e8-26cfc49d0870 00:37:59.827 01:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0982b5ce-6812-4c70-99e8-26cfc49d0870 00:37:59.827 01:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:38:00.086 01:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:38:00.086 01:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:38:00.086 01:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0982b5ce-6812-4c70-99e8-26cfc49d0870 lvol 150 00:38:00.343 01:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=1d59eee1-9ca1-4b7d-8115-5e685d02e9ee 00:38:00.343 01:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:00.343 01:48:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:38:00.602 [2024-10-13 01:48:46.058131] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:38:00.602 [2024-10-13 01:48:46.058238] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:38:00.602 true 00:38:00.602 01:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0982b5ce-6812-4c70-99e8-26cfc49d0870 00:38:00.602 01:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:38:00.860 01:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:38:00.860 01:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:01.118 01:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1d59eee1-9ca1-4b7d-8115-5e685d02e9ee 00:38:01.376 01:48:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:01.634 [2024-10-13 01:48:47.158407] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:01.634 01:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:01.892 01:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1788108 00:38:01.892 01:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:38:01.892 01:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:01.892 01:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1788108 /var/tmp/bdevperf.sock 00:38:01.892 01:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1788108 ']' 00:38:01.892 01:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:01.892 01:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:01.892 01:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:01.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:38:01.892 01:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:01.892 01:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:02.150 [2024-10-13 01:48:47.492406] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:38:02.150 [2024-10-13 01:48:47.492503] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1788108 ] 00:38:02.150 [2024-10-13 01:48:47.550269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:02.150 [2024-10-13 01:48:47.597937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:02.409 01:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:02.409 01:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:38:02.409 01:48:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:38:02.666 Nvme0n1 00:38:02.666 01:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:38:02.923 [ 00:38:02.923 { 00:38:02.923 "name": "Nvme0n1", 00:38:02.923 "aliases": [ 00:38:02.923 "1d59eee1-9ca1-4b7d-8115-5e685d02e9ee" 00:38:02.923 ], 00:38:02.923 "product_name": "NVMe disk", 00:38:02.923 "block_size": 4096, 00:38:02.923 "num_blocks": 38912, 00:38:02.923 "uuid": "1d59eee1-9ca1-4b7d-8115-5e685d02e9ee", 00:38:02.923 "numa_id": 0, 00:38:02.923 "assigned_rate_limits": { 00:38:02.923 "rw_ios_per_sec": 0, 00:38:02.923 "rw_mbytes_per_sec": 0, 00:38:02.923 "r_mbytes_per_sec": 0, 00:38:02.923 "w_mbytes_per_sec": 0 00:38:02.923 }, 00:38:02.923 "claimed": false, 00:38:02.923 "zoned": false, 00:38:02.923 "supported_io_types": { 00:38:02.923 "read": true, 00:38:02.923 "write": true, 00:38:02.923 "unmap": true, 00:38:02.923 "flush": true, 00:38:02.923 "reset": true, 00:38:02.923 "nvme_admin": true, 00:38:02.923 "nvme_io": true, 00:38:02.923 "nvme_io_md": false, 00:38:02.923 "write_zeroes": true, 00:38:02.923 "zcopy": false, 00:38:02.923 "get_zone_info": false, 00:38:02.923 "zone_management": false, 00:38:02.923 "zone_append": false, 00:38:02.923 "compare": true, 00:38:02.923 "compare_and_write": true, 00:38:02.923 "abort": true, 00:38:02.923 "seek_hole": false, 00:38:02.923 "seek_data": false, 00:38:02.923 "copy": true, 00:38:02.923 "nvme_iov_md": false 00:38:02.923 }, 00:38:02.923 "memory_domains": [ 00:38:02.923 { 00:38:02.923 "dma_device_id": "system", 00:38:02.923 "dma_device_type": 1 00:38:02.923 } 00:38:02.923 ], 00:38:02.923 "driver_specific": { 00:38:02.923 "nvme": [ 00:38:02.923 { 00:38:02.923 "trid": { 00:38:02.923 "trtype": "TCP", 00:38:02.923 "adrfam": "IPv4", 00:38:02.923 "traddr": "10.0.0.2", 00:38:02.923 "trsvcid": "4420", 00:38:02.923 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:38:02.923 }, 00:38:02.923 "ctrlr_data": 
{ 00:38:02.923 "cntlid": 1, 00:38:02.923 "vendor_id": "0x8086", 00:38:02.923 "model_number": "SPDK bdev Controller", 00:38:02.923 "serial_number": "SPDK0", 00:38:02.923 "firmware_revision": "25.01", 00:38:02.923 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:02.923 "oacs": { 00:38:02.923 "security": 0, 00:38:02.923 "format": 0, 00:38:02.923 "firmware": 0, 00:38:02.923 "ns_manage": 0 00:38:02.923 }, 00:38:02.923 "multi_ctrlr": true, 00:38:02.923 "ana_reporting": false 00:38:02.923 }, 00:38:02.923 "vs": { 00:38:02.923 "nvme_version": "1.3" 00:38:02.923 }, 00:38:02.923 "ns_data": { 00:38:02.923 "id": 1, 00:38:02.923 "can_share": true 00:38:02.923 } 00:38:02.923 } 00:38:02.923 ], 00:38:02.923 "mp_policy": "active_passive" 00:38:02.923 } 00:38:02.923 } 00:38:02.923 ] 00:38:02.923 01:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1788240 00:38:02.923 01:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:02.923 01:48:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:38:03.181 Running I/O for 10 seconds... 00:38:04.114 Latency(us) 00:38:04.114 [2024-10-12T23:48:49.692Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:04.114 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:04.114 Nvme0n1 : 1.00 13697.00 53.50 0.00 0.00 0.00 0.00 0.00 00:38:04.114 [2024-10-12T23:48:49.692Z] =================================================================================================================== 00:38:04.114 [2024-10-12T23:48:49.692Z] Total : 13697.00 53.50 0.00 0.00 0.00 0.00 0.00 00:38:04.114 00:38:05.047 01:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0982b5ce-6812-4c70-99e8-26cfc49d0870 00:38:05.047 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:05.047 Nvme0n1 : 2.00 13817.50 53.97 0.00 0.00 0.00 0.00 0.00 00:38:05.047 [2024-10-12T23:48:50.625Z] =================================================================================================================== 00:38:05.047 [2024-10-12T23:48:50.625Z] Total : 13817.50 53.97 0.00 0.00 0.00 0.00 0.00 00:38:05.047 00:38:05.305 true 00:38:05.305 01:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0982b5ce-6812-4c70-99e8-26cfc49d0870 00:38:05.305 01:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:38:05.563 01:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:38:05.563 01:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:38:05.563 01:48:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1788240 00:38:06.129 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:06.129 Nvme0n1 : 
3.00 13924.67 54.39 0.00 0.00 0.00 0.00 0.00 00:38:06.129 [2024-10-12T23:48:51.707Z] =================================================================================================================== 00:38:06.129 [2024-10-12T23:48:51.707Z] Total : 13924.67 54.39 0.00 0.00 0.00 0.00 0.00 00:38:06.129 00:38:07.062 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:07.062 Nvme0n1 : 4.00 13993.75 54.66 0.00 0.00 0.00 0.00 0.00 00:38:07.062 [2024-10-12T23:48:52.640Z] =================================================================================================================== 00:38:07.062 [2024-10-12T23:48:52.640Z] Total : 13993.75 54.66 0.00 0.00 0.00 0.00 0.00 00:38:07.062 00:38:07.995 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:07.995 Nvme0n1 : 5.00 14073.00 54.97 0.00 0.00 0.00 0.00 0.00 00:38:07.995 [2024-10-12T23:48:53.573Z] =================================================================================================================== 00:38:07.995 [2024-10-12T23:48:53.573Z] Total : 14073.00 54.97 0.00 0.00 0.00 0.00 0.00 00:38:07.995 00:38:09.028 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:09.028 Nvme0n1 : 6.00 14220.33 55.55 0.00 0.00 0.00 0.00 0.00 00:38:09.028 [2024-10-12T23:48:54.606Z] =================================================================================================================== 00:38:09.028 [2024-10-12T23:48:54.606Z] Total : 14220.33 55.55 0.00 0.00 0.00 0.00 0.00 00:38:09.028 00:38:09.963 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:09.963 Nvme0n1 : 7.00 14306.43 55.88 0.00 0.00 0.00 0.00 0.00 00:38:09.963 [2024-10-12T23:48:55.541Z] =================================================================================================================== 00:38:09.963 [2024-10-12T23:48:55.541Z] Total : 14306.43 55.88 0.00 0.00 0.00 0.00 0.00 00:38:09.963 00:38:11.336 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:11.336 Nvme0n1 : 8.00 14324.00 55.95 0.00 0.00 0.00 0.00 0.00 00:38:11.336 [2024-10-12T23:48:56.914Z] =================================================================================================================== 00:38:11.336 [2024-10-12T23:48:56.914Z] Total : 14324.00 55.95 0.00 0.00 0.00 0.00 0.00 00:38:11.336 00:38:12.270 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:12.270 Nvme0n1 : 9.00 14345.44 56.04 0.00 0.00 0.00 0.00 0.00 00:38:12.270 [2024-10-12T23:48:57.848Z] =================================================================================================================== 00:38:12.270 [2024-10-12T23:48:57.848Z] Total : 14345.44 56.04 0.00 0.00 0.00 0.00 0.00 00:38:12.270 00:38:13.202 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:13.202 Nvme0n1 : 10.00 14356.40 56.08 0.00 0.00 0.00 0.00 0.00 00:38:13.202 [2024-10-12T23:48:58.780Z] =================================================================================================================== 00:38:13.202 [2024-10-12T23:48:58.780Z] Total : 14356.40 56.08 0.00 0.00 0.00 0.00 0.00 00:38:13.202 00:38:13.202 00:38:13.202 Latency(us) 00:38:13.202 [2024-10-12T23:48:58.780Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:13.202 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:13.202 Nvme0n1 : 10.00 14363.80 56.11 0.00 0.00 8906.68 2415.12 17864.63 00:38:13.202 
[2024-10-12T23:48:58.780Z] =================================================================================================================== 00:38:13.202 [2024-10-12T23:48:58.780Z] Total : 14363.80 56.11 0.00 0.00 8906.68 2415.12 17864.63 00:38:13.202 { 00:38:13.202 "results": [ 00:38:13.202 { 00:38:13.202 "job": "Nvme0n1", 00:38:13.202 "core_mask": "0x2", 00:38:13.202 "workload": "randwrite", 00:38:13.202 "status": "finished", 00:38:13.202 "queue_depth": 128, 00:38:13.202 "io_size": 4096, 00:38:13.202 "runtime": 10.00376, 00:38:13.202 "iops": 14363.799211496478, 00:38:13.202 "mibps": 56.10859066990812, 00:38:13.202 "io_failed": 0, 00:38:13.202 "io_timeout": 0, 00:38:13.202 "avg_latency_us": 8906.682512245843, 00:38:13.202 "min_latency_us": 2415.122962962963, 00:38:13.202 "max_latency_us": 17864.62814814815 00:38:13.202 } 00:38:13.202 ], 00:38:13.202 "core_count": 1 00:38:13.202 } 00:38:13.202 01:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1788108 00:38:13.202 01:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 1788108 ']' 00:38:13.202 01:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 1788108 00:38:13.202 01:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:38:13.202 01:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:13.202 01:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1788108 00:38:13.202 01:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:13.202 01:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:13.202 01:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1788108' 00:38:13.202 killing process with pid 1788108 00:38:13.202 01:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 1788108 00:38:13.202 Received shutdown signal, test time was about 10.000000 seconds 00:38:13.202 00:38:13.202 Latency(us) 00:38:13.202 [2024-10-12T23:48:58.780Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:13.202 [2024-10-12T23:48:58.780Z] =================================================================================================================== 00:38:13.202 [2024-10-12T23:48:58.780Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:13.202 01:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 1788108 00:38:13.202 01:48:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:13.768 01:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:38:13.768 01:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0982b5ce-6812-4c70-99e8-26cfc49d0870 00:38:13.768 01:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:38:14.334 01:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:38:14.334 01:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:38:14.334 01:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1785633 00:38:14.334 01:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1785633 00:38:14.334 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1785633 Killed "${NVMF_APP[@]}" "$@" 00:38:14.334 01:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:38:14.334 01:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:38:14.334 01:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:14.334 01:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:14.334 01:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:14.334 01:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=1789555 00:38:14.334 01:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:38:14.334 01:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 1789555 00:38:14.334 01:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1789555 ']' 00:38:14.334 01:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:14.334 01:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:14.334 01:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:14.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:14.334 01:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:14.334 01:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:14.334 [2024-10-13 01:48:59.693153] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:14.334 [2024-10-13 01:48:59.694259] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:38:14.334 [2024-10-13 01:48:59.694337] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:14.334 [2024-10-13 01:48:59.764119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:14.334 [2024-10-13 01:48:59.813008] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:14.334 [2024-10-13 01:48:59.813075] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:14.334 [2024-10-13 01:48:59.813091] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:14.334 [2024-10-13 01:48:59.813105] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:14.334 [2024-10-13 01:48:59.813117] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:14.334 [2024-10-13 01:48:59.813768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:14.334 [2024-10-13 01:48:59.908435] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:14.334 [2024-10-13 01:48:59.908796] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:38:14.593 01:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:14.593 01:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:38:14.593 01:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:38:14.593 01:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:14.593 01:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:14.593 01:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:14.593 01:48:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:14.851 [2024-10-13 01:49:00.297050] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:38:14.851 [2024-10-13 01:49:00.297223] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:38:14.851 [2024-10-13 01:49:00.297283] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:38:14.851 01:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:38:14.851 01:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 1d59eee1-9ca1-4b7d-8115-5e685d02e9ee 00:38:14.851 01:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=1d59eee1-9ca1-4b7d-8115-5e685d02e9ee 00:38:14.851 01:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:38:14.851 01:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:38:14.851 01:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:38:14.851 01:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:38:14.851 01:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:15.109 01:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1d59eee1-9ca1-4b7d-8115-5e685d02e9ee -t 2000 00:38:15.368 [ 00:38:15.368 { 00:38:15.368 "name": "1d59eee1-9ca1-4b7d-8115-5e685d02e9ee", 00:38:15.368 "aliases": [ 00:38:15.368 "lvs/lvol" 00:38:15.368 ], 00:38:15.368 "product_name": "Logical Volume", 00:38:15.368 "block_size": 4096, 00:38:15.368 "num_blocks": 38912, 00:38:15.368 "uuid": "1d59eee1-9ca1-4b7d-8115-5e685d02e9ee", 00:38:15.368 "assigned_rate_limits": { 00:38:15.368 "rw_ios_per_sec": 0, 00:38:15.368 "rw_mbytes_per_sec": 0, 00:38:15.368 
"r_mbytes_per_sec": 0, 00:38:15.368 "w_mbytes_per_sec": 0 00:38:15.368 }, 00:38:15.368 "claimed": false, 00:38:15.368 "zoned": false, 00:38:15.368 "supported_io_types": { 00:38:15.368 "read": true, 00:38:15.368 "write": true, 00:38:15.368 "unmap": true, 00:38:15.368 "flush": false, 00:38:15.368 "reset": true, 00:38:15.368 "nvme_admin": false, 00:38:15.368 "nvme_io": false, 00:38:15.368 "nvme_io_md": false, 00:38:15.368 "write_zeroes": true, 00:38:15.368 "zcopy": false, 00:38:15.368 "get_zone_info": false, 00:38:15.368 "zone_management": false, 00:38:15.368 "zone_append": false, 00:38:15.368 "compare": false, 00:38:15.368 "compare_and_write": false, 00:38:15.368 "abort": false, 00:38:15.368 "seek_hole": true, 00:38:15.368 "seek_data": true, 00:38:15.368 "copy": false, 00:38:15.368 "nvme_iov_md": false 00:38:15.368 }, 00:38:15.368 "driver_specific": { 00:38:15.368 "lvol": { 00:38:15.368 "lvol_store_uuid": "0982b5ce-6812-4c70-99e8-26cfc49d0870", 00:38:15.368 "base_bdev": "aio_bdev", 00:38:15.368 "thin_provision": false, 00:38:15.368 "num_allocated_clusters": 38, 00:38:15.368 "snapshot": false, 00:38:15.368 "clone": false, 00:38:15.368 "esnap_clone": false 00:38:15.368 } 00:38:15.368 } 00:38:15.368 } 00:38:15.368 ] 00:38:15.368 01:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:38:15.368 01:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0982b5ce-6812-4c70-99e8-26cfc49d0870 00:38:15.368 01:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:38:15.627 01:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:38:15.627 01:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0982b5ce-6812-4c70-99e8-26cfc49d0870 00:38:15.627 01:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:38:16.193 01:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:38:16.193 01:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:16.452 [2024-10-13 01:49:01.774410] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:38:16.452 01:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0982b5ce-6812-4c70-99e8-26cfc49d0870 00:38:16.452 01:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:38:16.452 01:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0982b5ce-6812-4c70-99e8-26cfc49d0870 00:38:16.452 01:49:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:16.452 01:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:16.452 01:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:16.452 01:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:16.452 01:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:16.452 01:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:16.452 01:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:16.452 01:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:38:16.452 01:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0982b5ce-6812-4c70-99e8-26cfc49d0870 00:38:16.710 request: 00:38:16.710 { 00:38:16.710 "uuid": "0982b5ce-6812-4c70-99e8-26cfc49d0870", 00:38:16.710 "method": "bdev_lvol_get_lvstores", 00:38:16.710 "req_id": 1 00:38:16.710 } 00:38:16.710 Got JSON-RPC error response 00:38:16.710 response: 00:38:16.710 { 00:38:16.710 "code": -19, 00:38:16.710 "message": "No such device" 00:38:16.710 } 00:38:16.710 01:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:38:16.710 01:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:16.710 01:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:16.710 01:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:16.710 01:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:16.968 aio_bdev 00:38:16.968 01:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1d59eee1-9ca1-4b7d-8115-5e685d02e9ee 00:38:16.968 01:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=1d59eee1-9ca1-4b7d-8115-5e685d02e9ee 00:38:16.968 01:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:38:16.968 01:49:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:38:16.968 01:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:38:16.968 01:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:38:16.968 01:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:17.226 01:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1d59eee1-9ca1-4b7d-8115-5e685d02e9ee -t 2000 00:38:17.485 [ 00:38:17.485 { 00:38:17.485 "name": "1d59eee1-9ca1-4b7d-8115-5e685d02e9ee", 00:38:17.485 "aliases": [ 00:38:17.485 "lvs/lvol" 00:38:17.485 ], 00:38:17.485 "product_name": "Logical Volume", 00:38:17.485 "block_size": 4096, 00:38:17.485 "num_blocks": 38912, 00:38:17.485 "uuid": "1d59eee1-9ca1-4b7d-8115-5e685d02e9ee", 00:38:17.485 "assigned_rate_limits": { 00:38:17.485 "rw_ios_per_sec": 0, 00:38:17.485 "rw_mbytes_per_sec": 0, 00:38:17.485 "r_mbytes_per_sec": 0, 00:38:17.485 "w_mbytes_per_sec": 0 00:38:17.485 }, 00:38:17.485 "claimed": false, 00:38:17.485 "zoned": false, 00:38:17.485 "supported_io_types": { 00:38:17.485 "read": true, 00:38:17.485 "write": true, 00:38:17.485 "unmap": true, 00:38:17.485 "flush": false, 00:38:17.485 "reset": true, 00:38:17.485 "nvme_admin": false, 00:38:17.485 "nvme_io": false, 00:38:17.485 "nvme_io_md": false, 00:38:17.485 "write_zeroes": true, 00:38:17.485 "zcopy": false, 00:38:17.485 "get_zone_info": false, 00:38:17.485 "zone_management": false, 00:38:17.485 "zone_append": false, 00:38:17.485 "compare": false, 00:38:17.485 "compare_and_write": false, 00:38:17.485 "abort": false, 00:38:17.485 "seek_hole": true, 00:38:17.485 "seek_data": true, 00:38:17.485 "copy": false, 00:38:17.485 "nvme_iov_md": false 00:38:17.485 }, 00:38:17.485 "driver_specific": { 00:38:17.485 "lvol": { 00:38:17.485 "lvol_store_uuid": "0982b5ce-6812-4c70-99e8-26cfc49d0870", 00:38:17.485 "base_bdev": "aio_bdev", 00:38:17.485 "thin_provision": false, 00:38:17.485 "num_allocated_clusters": 38, 00:38:17.485 "snapshot": false, 00:38:17.485 "clone": false, 00:38:17.485 "esnap_clone": false 00:38:17.485 } 00:38:17.485 } 00:38:17.485 } 00:38:17.485 ] 00:38:17.485 01:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:38:17.485 01:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0982b5ce-6812-4c70-99e8-26cfc49d0870 00:38:17.485 01:49:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:38:17.743 01:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:38:17.743 01:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0982b5ce-6812-4c70-99e8-26cfc49d0870 00:38:17.743 01:49:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:38:18.001 01:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:38:18.001 01:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1d59eee1-9ca1-4b7d-8115-5e685d02e9ee 00:38:18.259 01:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0982b5ce-6812-4c70-99e8-26cfc49d0870 00:38:18.518 01:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:19.084 01:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:19.084 00:38:19.084 real 0m19.797s 00:38:19.084 user 0m36.623s 00:38:19.084 sys 0m4.854s 00:38:19.084 01:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:19.084 01:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:19.084 ************************************ 00:38:19.084 END TEST lvs_grow_dirty 00:38:19.084 ************************************ 00:38:19.084 01:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:38:19.084 01:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:38:19.084 01:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:38:19.085 01:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:38:19.085 01:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:38:19.085 01:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:38:19.085 01:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:38:19.085 01:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:38:19.085 01:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:38:19.085 nvmf_trace.0 00:38:19.085 01:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:38:19.085 01:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:38:19.085 01:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:19.085 01:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
00:38:19.085 01:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:19.085 01:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:38:19.085 01:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:19.085 01:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:19.085 rmmod nvme_tcp 00:38:19.085 rmmod nvme_fabrics 00:38:19.085 rmmod nvme_keyring 00:38:19.085 01:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:19.085 01:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:38:19.085 01:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:38:19.085 01:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 1789555 ']' 00:38:19.085 01:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 1789555 00:38:19.085 01:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 1789555 ']' 00:38:19.085 01:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 1789555 00:38:19.085 01:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:38:19.085 01:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:19.085 01:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1789555 00:38:19.085 01:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:19.085 01:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:19.085 01:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1789555' 00:38:19.085 killing process with pid 1789555 00:38:19.085 01:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 1789555 00:38:19.085 01:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 1789555 00:38:19.343 01:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:19.343 01:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:19.343 01:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:19.343 01:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:38:19.343 01:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:38:19.343 01:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:19.343 01:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:38:19.343 01:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:19.343 01:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:19.343 01:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:19.343 01:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:19.343 01:49:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:21.875 01:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:21.875 00:38:21.875 real 0m43.213s 00:38:21.875 user 0m55.911s 00:38:21.875 sys 0m8.695s 00:38:21.875 01:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:21.875 01:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:21.875 ************************************ 00:38:21.875 END TEST nvmf_lvs_grow 00:38:21.875 ************************************ 00:38:21.875 01:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:38:21.875 01:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:21.875 01:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:21.875 01:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:21.875 ************************************ 00:38:21.875 START TEST nvmf_bdev_io_wait 00:38:21.875 ************************************ 00:38:21.875 01:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:38:21.875 * Looking for test storage... 
00:38:21.875 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:21.875 01:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:21.875 01:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:38:21.875 01:49:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:21.875 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:21.875 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:21.875 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:21.875 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:21.875 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:38:21.875 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:38:21.875 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:38:21.875 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:38:21.875 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:38:21.875 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:38:21.875 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:38:21.875 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:21.875 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:38:21.875 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:38:21.875 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:21.875 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:21.875 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:21.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:21.876 --rc genhtml_branch_coverage=1 00:38:21.876 --rc genhtml_function_coverage=1 00:38:21.876 --rc genhtml_legend=1 00:38:21.876 --rc geninfo_all_blocks=1 00:38:21.876 --rc geninfo_unexecuted_blocks=1 00:38:21.876 00:38:21.876 ' 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:21.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:21.876 --rc genhtml_branch_coverage=1 00:38:21.876 --rc genhtml_function_coverage=1 00:38:21.876 --rc genhtml_legend=1 00:38:21.876 --rc geninfo_all_blocks=1 00:38:21.876 --rc geninfo_unexecuted_blocks=1 00:38:21.876 00:38:21.876 ' 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:21.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:21.876 --rc genhtml_branch_coverage=1 00:38:21.876 --rc genhtml_function_coverage=1 00:38:21.876 --rc genhtml_legend=1 00:38:21.876 --rc geninfo_all_blocks=1 00:38:21.876 --rc geninfo_unexecuted_blocks=1 00:38:21.876 00:38:21.876 ' 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:21.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:21.876 --rc genhtml_branch_coverage=1 00:38:21.876 --rc genhtml_function_coverage=1 00:38:21.876 --rc genhtml_legend=1 00:38:21.876 --rc geninfo_all_blocks=1 00:38:21.876 --rc 
geninfo_unexecuted_blocks=1 00:38:21.876 00:38:21.876 ' 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:38:21.876 01:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:23.777 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:23.777 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:38:23.777 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:23.777 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:23.777 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:23.777 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:23.777 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:38:23.777 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:38:23.777 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:23.777 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:38:23.777 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:38:23.777 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:38:23.777 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:38:23.777 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:38:23.777 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:38:23.777 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:23.777 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:23.777 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:23.777 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:23.777 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:23.777 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:23.777 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:23.777 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:23.777 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:23.777 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:23.777 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:23.777 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:23.777 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:23.777 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:23.777 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:23.777 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:23.777 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:23.777 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:38:23.777 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:23.777 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:23.777 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:23.777 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:23.777 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:23.778 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:23.778 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:23.778 
01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:23.778 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:23.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:23.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:38:23.778 00:38:23.778 --- 10.0.0.2 ping statistics --- 00:38:23.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:23.778 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:23.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:23.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:38:23.778 00:38:23.778 --- 10.0.0.1 ping statistics --- 00:38:23.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:23.778 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=1792079 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 1792079 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 1792079 ']' 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:23.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
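For orientation, the target/initiator network plumbing that nvmf_tcp_init traced a few entries above boils down to the following sequence. This is a condensed sketch of commands already visible in the log (cvl_0_0 and cvl_0_1 are the two e810 ports detected earlier; 4420 is the NVMe/TCP listener port; the harness additionally tags the iptables rule with an SPDK_NVMF comment so it can be cleaned up later):

    # move the target-side port into its own network namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # address the initiator side (host) and the target side (namespace)
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    # bring the links up and open the NVMe/TCP port on the initiator interface
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # sanity-check reachability in both directions
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1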
00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:23.778 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:23.778 [2024-10-13 01:49:09.306337] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:23.778 [2024-10-13 01:49:09.307548] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:38:23.778 [2024-10-13 01:49:09.307605] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:24.036 [2024-10-13 01:49:09.378311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:24.036 [2024-10-13 01:49:09.430453] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:24.036 [2024-10-13 01:49:09.430531] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:24.036 [2024-10-13 01:49:09.430548] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:24.036 [2024-10-13 01:49:09.430562] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:24.036 [2024-10-13 01:49:09.430574] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:24.036 [2024-10-13 01:49:09.432203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:24.036 [2024-10-13 01:49:09.432271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:24.036 [2024-10-13 01:49:09.432363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:24.036 [2024-10-13 01:49:09.432366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:24.036 [2024-10-13 01:49:09.432865] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:38:24.036 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:24.036 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:38:24.036 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:38:24.036 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:24.036 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:24.036 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:24.036 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:38:24.036 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.036 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:24.036 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.036 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:38:24.036 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.036 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:24.294 [2024-10-13 01:49:09.644694] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:24.294 [2024-10-13 01:49:09.644883] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:24.294 [2024-10-13 01:49:09.645876] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:24.294 [2024-10-13 01:49:09.646796] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
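The target brought up above runs with --interrupt-mode and is started with --wait-for-rpc, so bdev options can be set before subsystem initialization completes. Stripped of the nvmfappstart/rpc_cmd wrappers (rpc_cmd is roughly equivalent to calling scripts/rpc.py against the target's RPC socket), the sequence traced in the log is approximately:

    # start the NVMe-oF target inside the target namespace, deferring init until RPC
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
    # tune the bdev layer while the app is still waiting, then finish startup
    scripts/rpc.py bdev_set_options -p 5 -c 1
    scripts/rpc.py framework_start_init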
00:38:24.294 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.294 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:24.295 [2024-10-13 01:49:09.653107] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:24.295 Malloc0 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:24.295 [2024-10-13 01:49:09.709250] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1792225 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1792226 00:38:24.295 01:49:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1792229 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:38:24.295 { 00:38:24.295 "params": { 00:38:24.295 "name": "Nvme$subsystem", 00:38:24.295 "trtype": "$TEST_TRANSPORT", 00:38:24.295 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:24.295 "adrfam": "ipv4", 00:38:24.295 "trsvcid": "$NVMF_PORT", 00:38:24.295 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:24.295 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:24.295 "hdgst": ${hdgst:-false}, 00:38:24.295 "ddgst": ${ddgst:-false} 00:38:24.295 }, 00:38:24.295 "method": "bdev_nvme_attach_controller" 00:38:24.295 } 00:38:24.295 EOF 00:38:24.295 )") 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:38:24.295 { 00:38:24.295 "params": { 00:38:24.295 "name": "Nvme$subsystem", 00:38:24.295 "trtype": "$TEST_TRANSPORT", 00:38:24.295 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:24.295 "adrfam": "ipv4", 00:38:24.295 "trsvcid": "$NVMF_PORT", 00:38:24.295 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:24.295 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:24.295 "hdgst": ${hdgst:-false}, 00:38:24.295 "ddgst": ${ddgst:-false} 00:38:24.295 }, 00:38:24.295 "method": "bdev_nvme_attach_controller" 00:38:24.295 } 00:38:24.295 EOF 00:38:24.295 )") 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1792231 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 
1 -s 256 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:38:24.295 { 00:38:24.295 "params": { 00:38:24.295 "name": "Nvme$subsystem", 00:38:24.295 "trtype": "$TEST_TRANSPORT", 00:38:24.295 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:24.295 "adrfam": "ipv4", 00:38:24.295 "trsvcid": "$NVMF_PORT", 00:38:24.295 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:24.295 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:24.295 "hdgst": ${hdgst:-false}, 00:38:24.295 "ddgst": ${ddgst:-false} 00:38:24.295 }, 00:38:24.295 "method": "bdev_nvme_attach_controller" 00:38:24.295 } 00:38:24.295 EOF 00:38:24.295 )") 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:38:24.295 { 00:38:24.295 "params": { 00:38:24.295 "name": "Nvme$subsystem", 00:38:24.295 "trtype": "$TEST_TRANSPORT", 00:38:24.295 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:24.295 "adrfam": "ipv4", 00:38:24.295 "trsvcid": "$NVMF_PORT", 00:38:24.295 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:24.295 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:24.295 "hdgst": ${hdgst:-false}, 00:38:24.295 "ddgst": ${ddgst:-false} 00:38:24.295 }, 00:38:24.295 "method": "bdev_nvme_attach_controller" 00:38:24.295 } 00:38:24.295 EOF 00:38:24.295 )") 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1792225 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
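With the framework initialized, the rpc_cmd calls traced above (bdev_io_wait.sh lines 20-25) provision a single malloc-backed subsystem and expose it over TCP. Condensed, and again substituting rpc.py for the rpc_cmd wrapper:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB malloc bdev, 512-byte blocks (MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE)
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host, -s: serial number
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # listen on 10.0.0.2:4420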
00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:38:24.295 "params": { 00:38:24.295 "name": "Nvme1", 00:38:24.295 "trtype": "tcp", 00:38:24.295 "traddr": "10.0.0.2", 00:38:24.295 "adrfam": "ipv4", 00:38:24.295 "trsvcid": "4420", 00:38:24.295 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:24.295 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:24.295 "hdgst": false, 00:38:24.295 "ddgst": false 00:38:24.295 }, 00:38:24.295 "method": "bdev_nvme_attach_controller" 00:38:24.295 }' 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:38:24.295 "params": { 00:38:24.295 "name": "Nvme1", 00:38:24.295 "trtype": "tcp", 00:38:24.295 "traddr": "10.0.0.2", 00:38:24.295 "adrfam": "ipv4", 00:38:24.295 "trsvcid": "4420", 00:38:24.295 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:24.295 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:24.295 "hdgst": false, 00:38:24.295 "ddgst": false 00:38:24.295 }, 00:38:24.295 "method": "bdev_nvme_attach_controller" 00:38:24.295 }' 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:38:24.295 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:38:24.295 "params": { 00:38:24.295 "name": "Nvme1", 00:38:24.295 "trtype": "tcp", 00:38:24.295 "traddr": "10.0.0.2", 00:38:24.295 "adrfam": "ipv4", 00:38:24.295 "trsvcid": "4420", 00:38:24.295 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:24.296 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:24.296 "hdgst": false, 00:38:24.296 "ddgst": false 00:38:24.296 }, 00:38:24.296 "method": "bdev_nvme_attach_controller" 00:38:24.296 }' 00:38:24.296 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:38:24.296 01:49:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:38:24.296 "params": { 00:38:24.296 "name": "Nvme1", 00:38:24.296 "trtype": "tcp", 00:38:24.296 "traddr": "10.0.0.2", 00:38:24.296 "adrfam": "ipv4", 00:38:24.296 "trsvcid": "4420", 00:38:24.296 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:24.296 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:24.296 "hdgst": false, 00:38:24.296 "ddgst": false 00:38:24.296 }, 00:38:24.296 "method": "bdev_nvme_attach_controller" 00:38:24.296 }' 00:38:24.296 [2024-10-13 01:49:09.760303] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:38:24.296 [2024-10-13 01:49:09.760348] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:38:24.296 [2024-10-13 01:49:09.760349] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:38:24.296 [2024-10-13 01:49:09.760348] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
00:38:24.296 [2024-10-13 01:49:09.760397] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:38:24.296 [2024-10-13 01:49:09.760441] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-10-13 01:49:09.760441] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-10-13 01:49:09.760441] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:38:24.296 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:38:24.296 --proc-type=auto ] 00:38:24.554 [2024-10-13 01:49:09.936387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:24.554 [2024-10-13 01:49:09.978081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:38:24.554 [2024-10-13 01:49:10.036078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:24.554 [2024-10-13 01:49:10.079489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:38:24.812 [2024-10-13 01:49:10.136799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:24.812 [2024-10-13 01:49:10.178238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:38:24.812 [2024-10-13 01:49:10.205436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:24.812 [2024-10-13 01:49:10.242946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:38:24.812 Running I/O for 1 seconds... 00:38:25.070 Running I/O for 1 seconds... 00:38:25.070 Running I/O for 1 seconds... 00:38:25.070 Running I/O for 1 seconds... 
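Each "Running I/O for 1 seconds..." line above belongs to one of four concurrent bdevperf instances, one per workload, each pinned to its own core (-m), with its own instance id (-i) and DPDK file prefix. The invocation pattern, condensed from the traces, is shown below; the --json /dev/fd/63 seen in the trace is bash process substitution of the gen_nvmf_target_json output, which resolves to the bdev_nvme_attach_controller config against 10.0.0.2:4420 / cnode1 printed just above:

    # one bdevperf per workload: 128 queue depth, 4 KiB I/O, 1 second, 256 MiB memory
    bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256
    bdevperf -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256
    bdevperf -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256
    bdevperf -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256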
00:38:26.003 182552.00 IOPS, 713.09 MiB/s 00:38:26.003 Latency(us) 00:38:26.003 [2024-10-12T23:49:11.581Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:26.003 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:38:26.004 Nvme1n1 : 1.00 182199.25 711.72 0.00 0.00 698.78 307.96 1929.67 00:38:26.004 [2024-10-12T23:49:11.582Z] =================================================================================================================== 00:38:26.004 [2024-10-12T23:49:11.582Z] Total : 182199.25 711.72 0.00 0.00 698.78 307.96 1929.67 00:38:26.004 9184.00 IOPS, 35.88 MiB/s 00:38:26.004 Latency(us) 00:38:26.004 [2024-10-12T23:49:11.582Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:26.004 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:38:26.004 Nvme1n1 : 1.01 9245.11 36.11 0.00 0.00 13786.83 1966.08 16019.91 00:38:26.004 [2024-10-12T23:49:11.582Z] =================================================================================================================== 00:38:26.004 [2024-10-12T23:49:11.582Z] Total : 9245.11 36.11 0.00 0.00 13786.83 1966.08 16019.91 00:38:26.004 8075.00 IOPS, 31.54 MiB/s 00:38:26.004 Latency(us) 00:38:26.004 [2024-10-12T23:49:11.582Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:26.004 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:38:26.004 Nvme1n1 : 1.01 8121.58 31.72 0.00 0.00 15676.04 4684.61 20583.16 00:38:26.004 [2024-10-12T23:49:11.582Z] =================================================================================================================== 00:38:26.004 [2024-10-12T23:49:11.582Z] Total : 8121.58 31.72 0.00 0.00 15676.04 4684.61 20583.16 00:38:26.004 7749.00 IOPS, 30.27 MiB/s 00:38:26.004 Latency(us) 00:38:26.004 [2024-10-12T23:49:11.582Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:26.004 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:38:26.004 Nvme1n1 : 1.01 7821.72 30.55 0.00 0.00 16294.60 2475.80 23592.96 00:38:26.004 [2024-10-12T23:49:11.582Z] =================================================================================================================== 00:38:26.004 [2024-10-12T23:49:11.582Z] Total : 7821.72 30.55 0.00 0.00 16294.60 2475.80 23592.96 00:38:26.262 01:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1792226 00:38:26.262 01:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1792229 00:38:26.262 01:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1792231 00:38:26.262 01:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:26.262 01:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:26.262 01:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:26.262 01:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:26.262 01:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:38:26.262 01:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:38:26.262 01:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:26.262 01:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:38:26.262 01:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:26.262 01:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:38:26.262 01:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:26.262 01:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:26.262 rmmod nvme_tcp 00:38:26.262 rmmod nvme_fabrics 00:38:26.262 rmmod nvme_keyring 00:38:26.262 01:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:26.262 01:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:38:26.262 01:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:38:26.262 01:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 1792079 ']' 00:38:26.262 01:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 1792079 00:38:26.262 01:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 1792079 ']' 00:38:26.262 01:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 1792079 00:38:26.262 01:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:38:26.262 01:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:26.262 01:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1792079 00:38:26.262 01:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:26.262 01:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:26.262 01:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1792079' 00:38:26.262 killing process with pid 1792079 00:38:26.262 01:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 1792079 00:38:26.262 01:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 1792079 00:38:26.520 01:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:26.520 01:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:26.520 01:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:26.520 01:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:38:26.520 01:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 
00:38:26.520 01:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:26.520 01:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:38:26.520 01:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:26.520 01:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:26.520 01:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:26.520 01:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:26.520 01:49:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:28.422 01:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:28.681 00:38:28.681 real 0m7.097s 00:38:28.681 user 0m13.333s 00:38:28.681 sys 0m4.247s 00:38:28.681 01:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:28.681 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:28.681 ************************************ 00:38:28.681 END TEST nvmf_bdev_io_wait 00:38:28.681 ************************************ 00:38:28.681 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:38:28.681 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:28.681 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:28.681 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:28.681 ************************************ 00:38:28.681 START TEST nvmf_queue_depth 00:38:28.681 ************************************ 00:38:28.681 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:38:28.681 * Looking for test storage... 
00:38:28.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:28.681 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:28.681 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:38:28.681 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:28.681 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:28.681 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:28.681 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:28.681 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:28.681 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:38:28.681 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:38:28.681 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:28.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:28.682 --rc genhtml_branch_coverage=1 00:38:28.682 --rc genhtml_function_coverage=1 00:38:28.682 --rc genhtml_legend=1 00:38:28.682 --rc geninfo_all_blocks=1 00:38:28.682 --rc geninfo_unexecuted_blocks=1 00:38:28.682 00:38:28.682 ' 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:28.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:28.682 --rc genhtml_branch_coverage=1 00:38:28.682 --rc genhtml_function_coverage=1 00:38:28.682 --rc genhtml_legend=1 00:38:28.682 --rc geninfo_all_blocks=1 00:38:28.682 --rc geninfo_unexecuted_blocks=1 00:38:28.682 00:38:28.682 ' 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:28.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:28.682 --rc genhtml_branch_coverage=1 00:38:28.682 --rc genhtml_function_coverage=1 00:38:28.682 --rc genhtml_legend=1 00:38:28.682 --rc geninfo_all_blocks=1 00:38:28.682 --rc geninfo_unexecuted_blocks=1 00:38:28.682 00:38:28.682 ' 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:28.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:28.682 --rc genhtml_branch_coverage=1 00:38:28.682 --rc genhtml_function_coverage=1 00:38:28.682 --rc genhtml_legend=1 00:38:28.682 --rc geninfo_all_blocks=1 00:38:28.682 --rc 
geninfo_unexecuted_blocks=1 00:38:28.682 00:38:28.682 ' 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:28.682 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:28.683 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:38:28.683 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:38:28.683 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:38:28.683 01:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:31.216 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
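A few traces back, before nvmf/common.sh is sourced, queue_depth.sh runs the "lt 1.15 2" check (cmp_versions from scripts/common.sh) to decide whether the installed lcov is older than 2.x and therefore needs the plain LCOV_OPTS fallback. Stripped of the xtrace noise it is an ordinary field-by-field version compare. The sketch below re-creates it from the traced steps; it is not a copy of scripts/common.sh and it omits the decimal-sanitising helper seen in the trace.

# Re-implementation sketch of "lt 1.15 2" as traced above (assumed semantics:
# split on '.', '-' and ':', compare numerically field by field, missing fields count as 0).
lt() {
    local -a ver1 ver2
    local v d1 d2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        d1=${ver1[v]:-0} d2=${ver2[v]:-0}
        (( d1 > d2 )) && return 1    # left side is newer: not "less than"
        (( d1 < d2 )) && return 0    # left side is older: "less than" holds
    done
    return 1                         # equal versions are not strictly less
}
lt 1.15 2 && echo "lcov older than 2: use the basic --rc lcov_* options"   # matches the LCOV_OPTS export above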
00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:31.217 01:49:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:31.217 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:31.217 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
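The "Found net devices under ..." lines here come from gather_supported_nvmf_pci_devs: for every PCI function whose vendor/device ID is on the supported list (both ports on this node are Intel E810, 0x8086:0x159b), the kernel interface name is looked up through sysfs and appended to net_devs. A simplified sketch of that lookup follows; the link-state test is an assumption (the real helper's "up" check may read a different attribute than operstate).

# Simplified sketch of the per-device lookup traced above.
net_devs=()
for pci in 0000:0a:00.0 0000:0a:00.1; do
    for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
        dev=${netdir##*/}                              # e.g. cvl_0_0, cvl_0_1
        state=$(cat "$netdir/operstate" 2>/dev/null)   # assumption: "up" check via operstate
        [[ $state == up ]] || continue
        echo "Found net devices under $pci: $dev"
        net_devs+=("$dev")
    done
done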
00:38:31.217 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:31.217 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:31.217 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:31.218 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:31.218 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:31.218 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:31.218 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:31.218 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:31.218 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:31.218 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:31.218 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:31.218 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:31.218 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:31.218 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:31.218 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:31.218 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:31.218 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:31.218 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:31.218 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:31.218 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:38:31.218 00:38:31.218 --- 10.0.0.2 ping statistics --- 00:38:31.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:31.218 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:38:31.218 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:31.218 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:31.218 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:38:31.218 00:38:31.218 --- 10.0.0.1 ping statistics --- 00:38:31.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:31.218 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:38:31.218 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:31.218 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:38:31.218 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:38:31.218 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:31.218 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:31.218 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:31.218 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:31.218 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:31.218 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:31.218 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:38:31.218 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:31.218 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:31.218 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:31.218 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=1794460 00:38:31.218 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:38:31.218 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 1794460 00:38:31.218 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1794460 ']' 00:38:31.218 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:31.218 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:31.218 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:31.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
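Gathered in one place, the nvmf_tcp_init steps traced above set up a network namespace so that target and initiator talk over a real link while living on the same host: the first E810 port (cvl_0_0, 10.0.0.2) moves into the cvl_0_0_ns_spdk namespace for the target, the second port (cvl_0_1, 10.0.0.1) stays in the root namespace for the initiator, an iptables rule opens TCP/4420, and a ping in each direction confirms reachability. Restated from the trace:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# The SPDK_NVMF comment tags the rule so nvmftestfini can drop it later with
# "iptables-save | grep -v SPDK_NVMF | iptables-restore", as traced at the end of the previous test.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                  # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator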
00:38:31.218 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:31.218 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:31.218 [2024-10-13 01:49:16.580947] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:31.218 [2024-10-13 01:49:16.582025] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:38:31.218 [2024-10-13 01:49:16.582093] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:31.218 [2024-10-13 01:49:16.648594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:31.218 [2024-10-13 01:49:16.694790] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:31.218 [2024-10-13 01:49:16.694844] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:31.218 [2024-10-13 01:49:16.694866] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:31.218 [2024-10-13 01:49:16.694876] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:31.218 [2024-10-13 01:49:16.694885] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:31.218 [2024-10-13 01:49:16.695456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:31.218 [2024-10-13 01:49:16.778516] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:31.218 [2024-10-13 01:49:16.778832] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
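With the namespace in place, nvmfappstart launches the target inside it. The NVMF_APP array assembled earlier by build_nvmf_app_args (-i shm id, -e 0xFFFF, --interrupt-mode) is prefixed with the "ip netns exec" wrapper, -m 0x2 pins the target to one core, and the script waits for the RPC socket before configuring anything. The launch below is taken from the trace; the polling loop is only a stand-in for the real waitforlisten helper from autotest_common.sh.

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
nvmfpid=$!
# Stand-in for waitforlisten: poll until the app answers RPCs on /var/tmp/spdk.sock.
until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done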
00:38:31.477 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:31.477 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:38:31.477 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:38:31.477 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:31.477 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:31.477 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:31.477 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:31.477 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:31.477 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:31.477 [2024-10-13 01:49:16.836088] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:31.477 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:31.477 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:31.477 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:31.477 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:31.477 Malloc0 00:38:31.477 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:31.477 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:31.477 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:31.477 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:31.477 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:31.477 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:31.477 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:31.477 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:31.477 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:31.477 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:31.477 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
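Once the target answers on /var/tmp/spdk.sock, the test builds the subsystem it will benchmark: a TCP transport with the options the test always passes (-t tcp -o -u 8192), a 64 MiB malloc bdev with 512-byte blocks, subsystem cnode1 with that bdev as a namespace, and a listener on 10.0.0.2:4420. The trace drives these through the rpc_cmd wrapper; the same sequence expressed with scripts/rpc.py (an equivalent path, not the wrapper the script actually uses) is:

RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# The initiator side is wired up later over the bdevperf RPC socket with
# bdev_nvme_attach_controller, as the traces below show.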
00:38:31.477 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:31.477 [2024-10-13 01:49:16.900191] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:31.477 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:31.477 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1794485 00:38:31.477 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:31.477 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1794485 /var/tmp/bdevperf.sock 00:38:31.477 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1794485 ']' 00:38:31.477 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:31.477 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:38:31.477 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:31.477 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:31.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:31.477 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:31.477 01:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:31.477 [2024-10-13 01:49:16.949755] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
00:38:31.477 [2024-10-13 01:49:16.949860] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1794485 ] 00:38:31.477 [2024-10-13 01:49:17.021694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:31.735 [2024-10-13 01:49:17.071643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:31.735 01:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:31.735 01:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:38:31.736 01:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:38:31.736 01:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:31.736 01:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:31.736 NVMe0n1 00:38:31.736 01:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:31.736 01:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:31.994 Running I/O for 10 seconds... 00:38:33.859 8192.00 IOPS, 32.00 MiB/s [2024-10-12T23:49:20.391Z] 8192.00 IOPS, 32.00 MiB/s [2024-10-12T23:49:21.764Z] 8192.33 IOPS, 32.00 MiB/s [2024-10-12T23:49:22.697Z] 8192.25 IOPS, 32.00 MiB/s [2024-10-12T23:49:23.632Z] 8261.40 IOPS, 32.27 MiB/s [2024-10-12T23:49:24.565Z] 8288.17 IOPS, 32.38 MiB/s [2024-10-12T23:49:25.499Z] 8318.86 IOPS, 32.50 MiB/s [2024-10-12T23:49:26.433Z] 8319.25 IOPS, 32.50 MiB/s [2024-10-12T23:49:27.807Z] 8307.78 IOPS, 32.45 MiB/s [2024-10-12T23:49:27.807Z] 8300.00 IOPS, 32.42 MiB/s 00:38:42.229 Latency(us) 00:38:42.229 [2024-10-12T23:49:27.807Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:42.229 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:38:42.229 Verification LBA range: start 0x0 length 0x4000 00:38:42.229 NVMe0n1 : 10.08 8337.68 32.57 0.00 0.00 122300.26 19709.35 72235.24 00:38:42.229 [2024-10-12T23:49:27.807Z] =================================================================================================================== 00:38:42.229 [2024-10-12T23:49:27.807Z] Total : 8337.68 32.57 0.00 0.00 122300.26 19709.35 72235.24 00:38:42.229 { 00:38:42.229 "results": [ 00:38:42.229 { 00:38:42.229 "job": "NVMe0n1", 00:38:42.229 "core_mask": "0x1", 00:38:42.229 "workload": "verify", 00:38:42.229 "status": "finished", 00:38:42.229 "verify_range": { 00:38:42.229 "start": 0, 00:38:42.229 "length": 16384 00:38:42.229 }, 00:38:42.229 "queue_depth": 1024, 00:38:42.229 "io_size": 4096, 00:38:42.229 "runtime": 10.077258, 00:38:42.229 "iops": 8337.684715425565, 00:38:42.229 "mibps": 32.56908091963111, 00:38:42.229 "io_failed": 0, 00:38:42.229 "io_timeout": 0, 00:38:42.229 "avg_latency_us": 122300.2586837594, 00:38:42.229 "min_latency_us": 19709.345185185186, 00:38:42.229 "max_latency_us": 72235.23555555556 00:38:42.229 } 00:38:42.229 ], 
00:38:42.229 "core_count": 1 00:38:42.229 } 00:38:42.229 01:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1794485 00:38:42.229 01:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1794485 ']' 00:38:42.229 01:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1794485 00:38:42.229 01:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:38:42.229 01:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:42.229 01:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1794485 00:38:42.229 01:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:42.229 01:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:42.229 01:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1794485' 00:38:42.229 killing process with pid 1794485 00:38:42.229 01:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1794485 00:38:42.229 Received shutdown signal, test time was about 10.000000 seconds 00:38:42.229 00:38:42.229 Latency(us) 00:38:42.229 [2024-10-12T23:49:27.807Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:42.229 [2024-10-12T23:49:27.807Z] =================================================================================================================== 00:38:42.229 [2024-10-12T23:49:27.807Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:42.229 01:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1794485 00:38:42.229 01:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:38:42.229 01:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:38:42.229 01:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:42.229 01:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:38:42.229 01:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:42.229 01:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:38:42.229 01:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:42.229 01:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:42.229 rmmod nvme_tcp 00:38:42.229 rmmod nvme_fabrics 00:38:42.229 rmmod nvme_keyring 00:38:42.229 01:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:42.229 01:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:38:42.229 01:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:38:42.229 01:49:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 1794460 ']' 00:38:42.229 01:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 1794460 00:38:42.229 01:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1794460 ']' 00:38:42.229 01:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1794460 00:38:42.487 01:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:38:42.487 01:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:42.487 01:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1794460 00:38:42.487 01:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:42.487 01:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:42.487 01:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1794460' 00:38:42.487 killing process with pid 1794460 00:38:42.487 01:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1794460 00:38:42.487 01:49:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1794460 00:38:42.746 01:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:42.746 01:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:42.746 01:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:42.746 01:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:38:42.746 01:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:38:42.746 01:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:42.746 01:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:38:42.746 01:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:42.746 01:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:42.746 01:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:42.746 01:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:42.746 01:49:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:44.650 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:44.650 00:38:44.650 real 0m16.085s 00:38:44.650 user 0m22.122s 00:38:44.650 sys 0m3.274s 00:38:44.650 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:38:44.650 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:44.650 ************************************ 00:38:44.650 END TEST nvmf_queue_depth 00:38:44.650 ************************************ 00:38:44.650 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:38:44.650 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:44.650 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:44.650 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:44.650 ************************************ 00:38:44.650 START TEST nvmf_target_multipath 00:38:44.650 ************************************ 00:38:44.650 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:38:44.910 * Looking for test storage... 00:38:44.910 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:38:44.910 01:49:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:44.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:44.910 --rc genhtml_branch_coverage=1 00:38:44.910 --rc genhtml_function_coverage=1 00:38:44.910 --rc genhtml_legend=1 00:38:44.910 --rc geninfo_all_blocks=1 00:38:44.910 --rc geninfo_unexecuted_blocks=1 00:38:44.910 00:38:44.910 ' 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:44.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:44.910 --rc genhtml_branch_coverage=1 00:38:44.910 --rc genhtml_function_coverage=1 00:38:44.910 --rc genhtml_legend=1 00:38:44.910 --rc geninfo_all_blocks=1 00:38:44.910 --rc geninfo_unexecuted_blocks=1 00:38:44.910 00:38:44.910 ' 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:44.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:44.910 --rc genhtml_branch_coverage=1 00:38:44.910 --rc genhtml_function_coverage=1 00:38:44.910 --rc genhtml_legend=1 00:38:44.910 --rc geninfo_all_blocks=1 00:38:44.910 --rc 
geninfo_unexecuted_blocks=1 00:38:44.910 00:38:44.910 ' 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:44.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:44.910 --rc genhtml_branch_coverage=1 00:38:44.910 --rc genhtml_function_coverage=1 00:38:44.910 --rc genhtml_legend=1 00:38:44.910 --rc geninfo_all_blocks=1 00:38:44.910 --rc geninfo_unexecuted_blocks=1 00:38:44.910 00:38:44.910 ' 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
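The cmp_versions trace above (lt 1.15 2) is how scripts/common.sh decides that the installed lcov predates 2.x and therefore exports the legacy --rc lcov_branch_coverage/lcov_function_coverage options. A minimal standalone sketch of the same split-on-".-:"-and-compare idiom, with a hypothetical helper name version_lt and assuming purely numeric components, would be:

    #!/usr/bin/env bash
    # Illustrative sketch only (not the project's cmp_versions): return 0 if $1 < $2,
    # comparing dot/dash/colon separated numeric components left to right.
    version_lt() {
        local IFS=.-:                       # split on the same separators the trace uses
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0} # a missing component counts as 0
            (( x > y )) && return 1
            (( x < y )) && return 0
        done
        return 1                            # equal versions are not "less than"
    }

    # Usage mirroring the trace: 1.15 < 2, so the legacy lcov options get exported.
    if version_lt "1.15" "2"; then
        echo "lcov is pre-2.x; keep the --rc lcov_*_coverage option spelling"
    fi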
00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:44.910 01:49:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:44.910 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:44.911 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:44.911 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:44.911 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:44.911 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:44.911 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:44.911 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:44.911 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:44.911 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:38:44.911 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:44.911 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:38:44.911 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:38:44.911 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:44.911 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:44.911 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:38:44.911 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:44.911 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:44.911 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:44.911 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:44.911 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:38:44.911 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:38:44.911 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:38:44.911 01:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
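Worth noting in the build_nvmf_app_args trace above: the target's command line is accumulated in the NVMF_APP Bash array, and --interrupt-mode is appended only because this whole suite was launched with --interrupt-mode. A simplified sketch of that pattern follows; the binary name and the shm-id value are assumptions for illustration, since the trace does not show the actual launch here:

    #!/usr/bin/env bash
    # Sketch of conditionally assembling daemon arguments in an array,
    # in the spirit of build_nvmf_app_args; values below are assumed.
    NVMF_APP_SHM_ID=0          # assumed default shared-memory id
    INTERRUPT_MODE=1           # 1 because the suite was invoked with --interrupt-mode

    NVMF_APP=(nvmf_tgt)                            # stand-in for the target binary
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)    # options copied from the trace

    if (( INTERRUPT_MODE == 1 )); then
        NVMF_APP+=(--interrupt-mode)
    fi

    # Expanding the array keeps each flag as a separate word when the app is started.
    echo "target command line: ${NVMF_APP[*]}"

Keeping the options in an array rather than a flat string is what lets later helpers prepend "ip netns exec $NVMF_TARGET_NAMESPACE" (as NVMF_TARGET_NS_CMD does further down) without re-quoting anything.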
00:38:46.811 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:46.811 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:38:46.811 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:46.811 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:46.811 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:46.811 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:46.811 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:46.811 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:38:46.811 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:46.811 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:38:46.811 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:38:46.811 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:38:46.811 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:38:46.811 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:38:46.811 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:38:46.811 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:46.811 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:46.811 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:46.812 01:49:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:46.812 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:46.812 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:46.812 01:49:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:46.812 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:46.812 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:46.812 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:47.072 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:47.072 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:47.072 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:47.072 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:47.072 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:47.072 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:47.072 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:47.072 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:47.072 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:47.072 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:47.072 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:47.072 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:47.072 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:47.072 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:47.072 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:38:47.072 00:38:47.072 --- 10.0.0.2 ping statistics --- 00:38:47.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:47.072 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:38:47.072 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:47.072 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:47.072 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:38:47.072 00:38:47.072 --- 10.0.0.1 ping statistics --- 00:38:47.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:47.072 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:38:47.072 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:47.072 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:38:47.072 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:38:47.072 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:47.072 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:47.072 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:47.072 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:47.072 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:47.072 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:47.072 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:38:47.072 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:38:47.072 only one NIC for nvmf test 00:38:47.072 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:38:47.072 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:47.072 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:38:47.072 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:47.072 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:38:47.072 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:47.072 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:47.072 rmmod nvme_tcp 00:38:47.072 rmmod nvme_fabrics 00:38:47.072 rmmod nvme_keyring 00:38:47.072 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:47.072 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:38:47.072 01:49:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:38:47.072 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:38:47.072 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:47.072 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:47.072 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:47.072 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:38:47.072 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:38:47.072 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:47.072 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:38:47.072 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:47.072 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:47.072 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:47.072 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:47.072 01:49:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:38:49.609 01:49:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:49.609 00:38:49.609 real 0m4.471s 00:38:49.609 user 0m0.903s 00:38:49.609 sys 0m1.589s 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:38:49.609 ************************************ 00:38:49.609 END TEST nvmf_target_multipath 00:38:49.609 ************************************ 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:49.609 ************************************ 00:38:49.609 START TEST nvmf_zcopy 00:38:49.609 ************************************ 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:38:49.609 * Looking for test storage... 
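Before multipath.sh bails out with "only one NIC for nvmf test" and exits 0, nvmf_tcp_init has already built the usual test topology, and nvmftestfini tears it down again: port 0 of the NIC (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2/24, port 1 (cvl_0_1) stays in the root namespace as the initiator side at 10.0.0.1/24, and the two sides ping each other once in each direction. Condensed into a runnable sequence (root required; interface and namespace names are the ones this log uses, and the iptables comment string is shortened):

    #!/usr/bin/env bash
    # Condensed restatement of the setup/teardown traced above.
    set -e
    TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk

    # --- setup (nvmf_tcp_init) ---
    ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"          # target port lives in its own namespace
    ip addr add 10.0.0.1/24 dev "$INI_IF"      # initiator side in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF: allow nvmf/tcp test traffic'
    ping -c 1 10.0.0.2                         # root namespace -> target namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1     # target namespace -> root namespace

    # --- teardown (nvmftestfini) ---
    modprobe -r nvme-tcp nvme-fabrics || true              # unload host-side modules if loaded
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK_NVMF rules
    ip -4 addr flush "$INI_IF"
    ip netns delete "$NS"    # what _remove_spdk_ns amounts to here; cvl_0_0 returns to the root namespace

Tagging the firewall rule with an SPDK_NVMF comment is what makes the teardown safe: the iptables-save | grep -v SPDK_NVMF | iptables-restore pipeline seen in the trace removes only the rules this harness added and leaves the rest of the host's ruleset untouched.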
00:38:49.609 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:49.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:49.609 --rc genhtml_branch_coverage=1 00:38:49.609 --rc genhtml_function_coverage=1 00:38:49.609 --rc genhtml_legend=1 00:38:49.609 --rc geninfo_all_blocks=1 00:38:49.609 --rc geninfo_unexecuted_blocks=1 00:38:49.609 00:38:49.609 ' 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:49.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:49.609 --rc genhtml_branch_coverage=1 00:38:49.609 --rc genhtml_function_coverage=1 00:38:49.609 --rc genhtml_legend=1 00:38:49.609 --rc geninfo_all_blocks=1 00:38:49.609 --rc geninfo_unexecuted_blocks=1 00:38:49.609 00:38:49.609 ' 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:49.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:49.609 --rc genhtml_branch_coverage=1 00:38:49.609 --rc genhtml_function_coverage=1 00:38:49.609 --rc genhtml_legend=1 00:38:49.609 --rc geninfo_all_blocks=1 00:38:49.609 --rc geninfo_unexecuted_blocks=1 00:38:49.609 00:38:49.609 ' 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:49.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:49.609 --rc genhtml_branch_coverage=1 00:38:49.609 --rc genhtml_function_coverage=1 00:38:49.609 --rc genhtml_legend=1 00:38:49.609 --rc geninfo_all_blocks=1 00:38:49.609 --rc geninfo_unexecuted_blocks=1 00:38:49.609 00:38:49.609 ' 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:49.609 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:49.610 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:49.610 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:49.610 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:49.610 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:49.610 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:49.610 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:49.610 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:49.610 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:38:49.610 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:49.610 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:49.610 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:49.610 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:49.610 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:49.610 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:49.610 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:38:49.610 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:49.610 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:38:49.610 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:49.610 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:49.610 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:49.610 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:49.610 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:49.610 01:49:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:49.610 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:49.610 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:49.610 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:49.610 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:49.610 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:38:49.610 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:38:49.610 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:49.610 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:38:49.610 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:38:49.610 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:38:49.610 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:49.610 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:49.610 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:49.610 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:38:49.610 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:38:49.610 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:38:49.610 01:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:51.513 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:51.513 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:38:51.513 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:51.513 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:51.513 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:51.513 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:51.513 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:51.513 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:38:51.513 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:51.513 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:38:51.513 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:38:51.513 01:49:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:38:51.513 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:38:51.513 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:38:51.513 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:38:51.513 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:51.513 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:51.513 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:51.513 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:51.513 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:51.513 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:51.513 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:51.513 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:51.513 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:51.513 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:51.513 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:51.513 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:51.513 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:51.513 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:51.513 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:51.513 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:51.513 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:51.513 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:51.513 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:51.513 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:51.513 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:51.513 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:51.513 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:51.513 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:38:51.513 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:51.513 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:51.513 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:51.513 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:51.513 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:51.513 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:51.513 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:51.513 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:51.513 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:51.513 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:51.513 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:51.513 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:51.513 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:51.513 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:51.513 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:51.513 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:51.514 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:51.514 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:51.514 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:51.514 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:51.514 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:51.514 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:51.514 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:51.514 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:38:51.514 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:51.514 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:38:51.514 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:51.514 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:38:51.514 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:38:51.514 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:51.514 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:51.514 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:51.514 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:38:51.514 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:38:51.514 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:38:51.514 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:38:51.514 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:38:51.514 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:38:51.514 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:51.514 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:51.514 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:51.514 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:51.514 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:51.514 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:51.514 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:51.514 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:51.514 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:51.514 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:51.514 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:51.514 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:51.514 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:51.514 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:51.514 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:51.514 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:51.514 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:51.514 01:49:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:51.514 01:49:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:51.514 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:51.514 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:51.514 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:51.514 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:51.514 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:51.514 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:38:51.514 00:38:51.514 --- 10.0.0.2 ping statistics --- 00:38:51.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:51.514 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:38:51.514 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:51.514 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:51.514 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:38:51.514 00:38:51.514 --- 10.0.0.1 ping statistics --- 00:38:51.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:51.514 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:38:51.514 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:51.514 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:38:51.514 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:38:51.514 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:51.514 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:38:51.514 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:38:51.514 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:51.514 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:38:51.514 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:38:51.514 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:38:51.514 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:38:51.514 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:51.514 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:51.514 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=1799655 00:38:51.514 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF --interrupt-mode -m 0x2 00:38:51.514 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 1799655 00:38:51.514 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 1799655 ']' 00:38:51.514 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:51.514 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:51.514 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:51.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:51.514 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:51.514 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:51.774 [2024-10-13 01:49:37.111006] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:51.774 [2024-10-13 01:49:37.112117] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:38:51.774 [2024-10-13 01:49:37.112190] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:51.774 [2024-10-13 01:49:37.182135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:51.774 [2024-10-13 01:49:37.228898] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:51.774 [2024-10-13 01:49:37.228968] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:51.774 [2024-10-13 01:49:37.228996] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:51.774 [2024-10-13 01:49:37.229011] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:51.774 [2024-10-13 01:49:37.229023] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:51.774 [2024-10-13 01:49:37.229657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:51.774 [2024-10-13 01:49:37.317939] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:51.774 [2024-10-13 01:49:37.318291] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
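At this point the trace has finished bringing up the TCP test bed: the two ice-driven ports were detected as cvl_0_0 and cvl_0_1, the target-side port was moved into a private network namespace and given 10.0.0.2/24 (the initiator side keeps 10.0.0.1/24 in the root namespace), port 4420 was opened in iptables, and nvmf_tgt was launched inside the namespace in interrupt mode with core mask 0x2, which is why the reactor above reports starting on core 1. A condensed sketch of the same setup follows; the interface names are taken from this run, and the relative SPDK path is an assumption of the sketch, not part of the log.

    # Sketch of the network-namespace test bed traced above (names from this run).
    ip netns add cvl_0_0_ns_spdk                          # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target-side port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic in
    # Start the target inside the namespace: shm id 0, all tracepoint groups enabled,
    # interrupt mode, core mask 0x2 (hence "Reactor started on core 1" in the log).
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &

The two ping checks above (root namespace to 10.0.0.2 and namespace to 10.0.0.1) confirm the path works in both directions before the target is configured.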
00:38:51.774 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:51.774 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:38:51.774 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:38:51.774 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:51.774 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:52.033 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:52.033 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:38:52.033 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:38:52.033 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:52.033 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:52.033 [2024-10-13 01:49:37.378310] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:52.033 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:52.033 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:52.033 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:52.033 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:52.033 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:52.033 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:52.033 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:52.033 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:52.033 [2024-10-13 01:49:37.394481] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:52.033 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:52.034 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:52.034 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:52.034 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:52.034 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:52.034 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:38:52.034 01:49:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:52.034 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:52.034 malloc0 00:38:52.034 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:52.034 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:38:52.034 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:52.034 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:52.034 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:52.034 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:38:52.034 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:38:52.034 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:38:52.034 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:38:52.034 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:38:52.034 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:38:52.034 { 00:38:52.034 "params": { 00:38:52.034 "name": "Nvme$subsystem", 00:38:52.034 "trtype": "$TEST_TRANSPORT", 00:38:52.034 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:52.034 "adrfam": "ipv4", 00:38:52.034 "trsvcid": "$NVMF_PORT", 00:38:52.034 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:52.034 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:52.034 "hdgst": ${hdgst:-false}, 00:38:52.034 "ddgst": ${ddgst:-false} 00:38:52.034 }, 00:38:52.034 "method": "bdev_nvme_attach_controller" 00:38:52.034 } 00:38:52.034 EOF 00:38:52.034 )") 00:38:52.034 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:38:52.034 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:38:52.034 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:38:52.034 01:49:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:38:52.034 "params": { 00:38:52.034 "name": "Nvme1", 00:38:52.034 "trtype": "tcp", 00:38:52.034 "traddr": "10.0.0.2", 00:38:52.034 "adrfam": "ipv4", 00:38:52.034 "trsvcid": "4420", 00:38:52.034 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:52.034 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:52.034 "hdgst": false, 00:38:52.034 "ddgst": false 00:38:52.034 }, 00:38:52.034 "method": "bdev_nvme_attach_controller" 00:38:52.034 }' 00:38:52.034 [2024-10-13 01:49:37.477028] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
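The rpc_cmd calls traced above configure the target for the zero-copy run: a TCP transport created with --zcopy, subsystem nqn.2016-06.io.spdk:cnode1 with up to 10 namespaces, a listener on 10.0.0.2:4420, and a 32 MiB malloc bdev attached as namespace 1. rpc_cmd in the SPDK test scripts forwards these requests to scripts/rpc.py, so a roughly equivalent manual sequence looks like the sketch below; the flags are copied from the trace, while the relative script path is an assumption of the sketch.

    # Sketch: the same target configuration issued directly through rpc.py.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0      # 32 MiB bdev, 4096-byte blocks
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

bdevperf then consumes the generated bdev_nvme_attach_controller config printed above (name Nvme1, traddr 10.0.0.2, trsvcid 4420, host nqn.2016-06.io.spdk:host1, digests disabled) through --json /dev/fd/62 and runs the 10-second verify workload with queue depth 128 and 8 KiB I/O whose startup is logged next.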
00:38:52.034 [2024-10-13 01:49:37.477107] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1799681 ] 00:38:52.034 [2024-10-13 01:49:37.538670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:52.034 [2024-10-13 01:49:37.587710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:52.599 Running I/O for 10 seconds... 00:38:54.466 5092.00 IOPS, 39.78 MiB/s [2024-10-12T23:49:40.978Z] 5158.00 IOPS, 40.30 MiB/s [2024-10-12T23:49:42.354Z] 5283.00 IOPS, 41.27 MiB/s [2024-10-12T23:49:43.288Z] 5252.50 IOPS, 41.04 MiB/s [2024-10-12T23:49:44.222Z] 5298.60 IOPS, 41.40 MiB/s [2024-10-12T23:49:45.155Z] 5295.67 IOPS, 41.37 MiB/s [2024-10-12T23:49:46.087Z] 5277.71 IOPS, 41.23 MiB/s [2024-10-12T23:49:47.021Z] 5262.88 IOPS, 41.12 MiB/s [2024-10-12T23:49:47.955Z] 5250.00 IOPS, 41.02 MiB/s [2024-10-12T23:49:48.215Z] 5240.70 IOPS, 40.94 MiB/s 00:39:02.637 Latency(us) 00:39:02.637 [2024-10-12T23:49:48.215Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:02.637 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:39:02.637 Verification LBA range: start 0x0 length 0x1000 00:39:02.637 Nvme1n1 : 10.06 5223.36 40.81 0.00 0.00 24344.45 4417.61 45049.93 00:39:02.637 [2024-10-12T23:49:48.215Z] =================================================================================================================== 00:39:02.637 [2024-10-12T23:49:48.215Z] Total : 5223.36 40.81 0.00 0.00 24344.45 4417.61 45049.93 00:39:02.637 01:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1800860 00:39:02.637 01:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:39:02.637 01:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:02.637 01:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:39:02.638 01:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:39:02.638 01:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:39:02.638 01:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:39:02.638 01:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:39:02.638 01:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:39:02.638 { 00:39:02.638 "params": { 00:39:02.638 "name": "Nvme$subsystem", 00:39:02.638 "trtype": "$TEST_TRANSPORT", 00:39:02.638 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:02.638 "adrfam": "ipv4", 00:39:02.638 "trsvcid": "$NVMF_PORT", 00:39:02.638 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:02.638 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:02.638 "hdgst": ${hdgst:-false}, 00:39:02.638 "ddgst": ${ddgst:-false} 00:39:02.638 }, 00:39:02.638 "method": "bdev_nvme_attach_controller" 00:39:02.638 } 00:39:02.638 EOF 00:39:02.638 )") 00:39:02.638 01:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:39:02.638 
[2024-10-13 01:49:48.214224] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.638 [2024-10-13 01:49:48.214272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.898 01:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:39:02.898 01:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:39:02.898 01:49:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:39:02.898 "params": { 00:39:02.898 "name": "Nvme1", 00:39:02.898 "trtype": "tcp", 00:39:02.898 "traddr": "10.0.0.2", 00:39:02.898 "adrfam": "ipv4", 00:39:02.898 "trsvcid": "4420", 00:39:02.898 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:02.898 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:02.898 "hdgst": false, 00:39:02.898 "ddgst": false 00:39:02.898 }, 00:39:02.898 "method": "bdev_nvme_attach_controller" 00:39:02.898 }' 00:39:02.898 [2024-10-13 01:49:48.222151] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.898 [2024-10-13 01:49:48.222179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.898 [2024-10-13 01:49:48.230133] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.898 [2024-10-13 01:49:48.230154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.898 [2024-10-13 01:49:48.238148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.898 [2024-10-13 01:49:48.238173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.898 [2024-10-13 01:49:48.246147] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.898 [2024-10-13 01:49:48.246171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.898 [2024-10-13 01:49:48.254148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.898 [2024-10-13 01:49:48.254172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.898 [2024-10-13 01:49:48.255964] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
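The long run of paired messages that follows ("Requested NSID 1 already in use" from subsystem.c and "Unable to add namespace" from nvmf_rpc.c) is the target rejecting repeated nvmf_subsystem_add_ns calls for NSID 1 while malloc0 is still attached to cnode1; the test keeps issuing them as the second bdevperf instance (5-second randrw, 50% reads, queue depth 128, 8 KiB I/O) initializes and runs. The same rejection can be reproduced in isolation with two back-to-back adds; the sketch below assumes the subsystem configured earlier and an SPDK checkout for the script path.

    # Sketch: why the target keeps logging "Requested NSID 1 already in use".
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # first add succeeds
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # rejected: NSID 1 already in use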
00:39:02.899 [2024-10-13 01:49:48.256043] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1800860 ] 00:39:02.899 [2024-10-13 01:49:48.262148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.899 [2024-10-13 01:49:48.262173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.899 [2024-10-13 01:49:48.270149] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.899 [2024-10-13 01:49:48.270173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.899 [2024-10-13 01:49:48.278147] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.899 [2024-10-13 01:49:48.278171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.899 [2024-10-13 01:49:48.286148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.899 [2024-10-13 01:49:48.286173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.899 [2024-10-13 01:49:48.294149] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.899 [2024-10-13 01:49:48.294173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.899 [2024-10-13 01:49:48.302149] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.899 [2024-10-13 01:49:48.302174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.899 [2024-10-13 01:49:48.310148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.899 [2024-10-13 01:49:48.310172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.899 [2024-10-13 01:49:48.317714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:02.899 [2024-10-13 01:49:48.318150] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.899 [2024-10-13 01:49:48.318175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.899 [2024-10-13 01:49:48.326185] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.899 [2024-10-13 01:49:48.326227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.899 [2024-10-13 01:49:48.334183] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.899 [2024-10-13 01:49:48.334223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.899 [2024-10-13 01:49:48.342148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.899 [2024-10-13 01:49:48.342173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.899 [2024-10-13 01:49:48.350154] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.899 [2024-10-13 01:49:48.350182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.899 [2024-10-13 01:49:48.358148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.899 [2024-10-13 01:49:48.358173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:39:02.899 [2024-10-13 01:49:48.366148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.899 [2024-10-13 01:49:48.366172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.899 [2024-10-13 01:49:48.369153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:02.899 [2024-10-13 01:49:48.374147] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.899 [2024-10-13 01:49:48.374172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.899 [2024-10-13 01:49:48.382148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.899 [2024-10-13 01:49:48.382174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.899 [2024-10-13 01:49:48.390178] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.899 [2024-10-13 01:49:48.390214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.899 [2024-10-13 01:49:48.398183] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.899 [2024-10-13 01:49:48.398220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.899 [2024-10-13 01:49:48.406183] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.899 [2024-10-13 01:49:48.406223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.899 [2024-10-13 01:49:48.414184] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.899 [2024-10-13 01:49:48.414226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.899 [2024-10-13 01:49:48.422187] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.899 [2024-10-13 01:49:48.422227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.899 [2024-10-13 01:49:48.430187] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.899 [2024-10-13 01:49:48.430228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.899 [2024-10-13 01:49:48.438153] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.899 [2024-10-13 01:49:48.438179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.899 [2024-10-13 01:49:48.446165] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.899 [2024-10-13 01:49:48.446198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.899 [2024-10-13 01:49:48.454184] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.899 [2024-10-13 01:49:48.454223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.899 [2024-10-13 01:49:48.462185] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.899 [2024-10-13 01:49:48.462240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:02.899 [2024-10-13 01:49:48.470148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:02.899 [2024-10-13 01:49:48.470184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.156 [2024-10-13 
01:49:48.478163] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.156 [2024-10-13 01:49:48.478188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.156 [2024-10-13 01:49:48.486157] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.156 [2024-10-13 01:49:48.486198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.156 [2024-10-13 01:49:48.494228] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.156 [2024-10-13 01:49:48.494257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.156 [2024-10-13 01:49:48.502154] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.156 [2024-10-13 01:49:48.502181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.156 [2024-10-13 01:49:48.510155] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.156 [2024-10-13 01:49:48.510184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.156 [2024-10-13 01:49:48.518148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.156 [2024-10-13 01:49:48.518173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.156 [2024-10-13 01:49:48.526149] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.156 [2024-10-13 01:49:48.526177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.156 [2024-10-13 01:49:48.534147] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.156 [2024-10-13 01:49:48.534171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.156 [2024-10-13 01:49:48.542148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.156 [2024-10-13 01:49:48.542173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.156 [2024-10-13 01:49:48.550151] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.156 [2024-10-13 01:49:48.550177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.156 [2024-10-13 01:49:48.558153] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.156 [2024-10-13 01:49:48.558180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.156 [2024-10-13 01:49:48.566151] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.156 [2024-10-13 01:49:48.566178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.156 [2024-10-13 01:49:48.574148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.156 [2024-10-13 01:49:48.574174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.156 [2024-10-13 01:49:48.582147] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.156 [2024-10-13 01:49:48.582171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.156 [2024-10-13 01:49:48.590147] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.156 [2024-10-13 01:49:48.590171] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.156 [2024-10-13 01:49:48.598147] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.156 [2024-10-13 01:49:48.598172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.156 [2024-10-13 01:49:48.606155] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.156 [2024-10-13 01:49:48.606184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.156 [2024-10-13 01:49:48.614148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.156 [2024-10-13 01:49:48.614180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.156 [2024-10-13 01:49:48.622148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.156 [2024-10-13 01:49:48.622172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.156 [2024-10-13 01:49:48.630147] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.156 [2024-10-13 01:49:48.630172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.156 [2024-10-13 01:49:48.638147] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.156 [2024-10-13 01:49:48.638173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.156 [2024-10-13 01:49:48.646153] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.156 [2024-10-13 01:49:48.646179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.156 [2024-10-13 01:49:48.654150] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.156 [2024-10-13 01:49:48.654177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.156 [2024-10-13 01:49:48.662149] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.156 [2024-10-13 01:49:48.662174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.156 [2024-10-13 01:49:48.670148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.156 [2024-10-13 01:49:48.670173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.156 [2024-10-13 01:49:48.678148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.156 [2024-10-13 01:49:48.678173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.156 [2024-10-13 01:49:48.686149] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.156 [2024-10-13 01:49:48.686174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.156 [2024-10-13 01:49:48.694149] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.156 [2024-10-13 01:49:48.694175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.156 [2024-10-13 01:49:48.702158] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.156 [2024-10-13 01:49:48.702188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.156 [2024-10-13 01:49:48.710155] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.156 [2024-10-13 01:49:48.710183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.156 Running I/O for 5 seconds... 00:39:03.156 [2024-10-13 01:49:48.724144] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.156 [2024-10-13 01:49:48.724176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.414 [2024-10-13 01:49:48.740048] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.414 [2024-10-13 01:49:48.740079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.414 [2024-10-13 01:49:48.756234] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.414 [2024-10-13 01:49:48.756265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.415 [2024-10-13 01:49:48.770976] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.415 [2024-10-13 01:49:48.771008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.415 [2024-10-13 01:49:48.781540] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.415 [2024-10-13 01:49:48.781585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.415 [2024-10-13 01:49:48.794636] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.415 [2024-10-13 01:49:48.794664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.415 [2024-10-13 01:49:48.806644] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.415 [2024-10-13 01:49:48.806671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.415 [2024-10-13 01:49:48.818660] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.415 [2024-10-13 01:49:48.818688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.415 [2024-10-13 01:49:48.836437] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.415 [2024-10-13 01:49:48.836476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.415 [2024-10-13 01:49:48.850582] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.415 [2024-10-13 01:49:48.850610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.415 [2024-10-13 01:49:48.861030] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.415 [2024-10-13 01:49:48.861060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.415 [2024-10-13 01:49:48.873931] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.415 [2024-10-13 01:49:48.873962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.415 [2024-10-13 01:49:48.886365] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.415 [2024-10-13 01:49:48.886396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.415 [2024-10-13 01:49:48.898486] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.415 
[2024-10-13 01:49:48.898531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.415 [2024-10-13 01:49:48.910422] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.415 [2024-10-13 01:49:48.910453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.415 [2024-10-13 01:49:48.922071] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.415 [2024-10-13 01:49:48.922101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.415 [2024-10-13 01:49:48.933819] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.415 [2024-10-13 01:49:48.933850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.415 [2024-10-13 01:49:48.946091] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.415 [2024-10-13 01:49:48.946122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.415 [2024-10-13 01:49:48.957642] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.415 [2024-10-13 01:49:48.957669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.415 [2024-10-13 01:49:48.969635] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.415 [2024-10-13 01:49:48.969662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.415 [2024-10-13 01:49:48.981859] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.415 [2024-10-13 01:49:48.981889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.673 [2024-10-13 01:49:48.994138] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.673 [2024-10-13 01:49:48.994168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.673 [2024-10-13 01:49:49.005986] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.673 [2024-10-13 01:49:49.006017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.673 [2024-10-13 01:49:49.017952] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.673 [2024-10-13 01:49:49.017982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.673 [2024-10-13 01:49:49.029987] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.673 [2024-10-13 01:49:49.030018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.673 [2024-10-13 01:49:49.042067] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.673 [2024-10-13 01:49:49.042097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.673 [2024-10-13 01:49:49.054179] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.673 [2024-10-13 01:49:49.054209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.673 [2024-10-13 01:49:49.065912] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.673 [2024-10-13 01:49:49.065943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.673 [2024-10-13 01:49:49.077721] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.673 [2024-10-13 01:49:49.077765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.673 [2024-10-13 01:49:49.089732] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.673 [2024-10-13 01:49:49.089776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.673 [2024-10-13 01:49:49.101488] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.673 [2024-10-13 01:49:49.101532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.673 [2024-10-13 01:49:49.113378] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.673 [2024-10-13 01:49:49.113409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.673 [2024-10-13 01:49:49.125821] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.673 [2024-10-13 01:49:49.125851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.673 [2024-10-13 01:49:49.136053] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.673 [2024-10-13 01:49:49.136082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.673 [2024-10-13 01:49:49.152405] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.673 [2024-10-13 01:49:49.152435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.673 [2024-10-13 01:49:49.164708] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.673 [2024-10-13 01:49:49.164735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.673 [2024-10-13 01:49:49.178677] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.673 [2024-10-13 01:49:49.178703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.673 [2024-10-13 01:49:49.189643] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.673 [2024-10-13 01:49:49.189670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.673 [2024-10-13 01:49:49.202611] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.673 [2024-10-13 01:49:49.202638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.673 [2024-10-13 01:49:49.212568] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.673 [2024-10-13 01:49:49.212595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.673 [2024-10-13 01:49:49.225842] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.673 [2024-10-13 01:49:49.225871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.673 [2024-10-13 01:49:49.238260] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.673 [2024-10-13 01:49:49.238291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.673 [2024-10-13 01:49:49.250099] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.673 [2024-10-13 01:49:49.250130] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.937 [2024-10-13 01:49:49.262273] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.937 [2024-10-13 01:49:49.262303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.937 [2024-10-13 01:49:49.274071] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.937 [2024-10-13 01:49:49.274098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.937 [2024-10-13 01:49:49.286633] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.937 [2024-10-13 01:49:49.286660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.937 [2024-10-13 01:49:49.304148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.937 [2024-10-13 01:49:49.304178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.937 [2024-10-13 01:49:49.315288] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.937 [2024-10-13 01:49:49.315317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.937 [2024-10-13 01:49:49.328442] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.937 [2024-10-13 01:49:49.328481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.937 [2024-10-13 01:49:49.343907] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.937 [2024-10-13 01:49:49.343938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.937 [2024-10-13 01:49:49.359289] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.937 [2024-10-13 01:49:49.359319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.937 [2024-10-13 01:49:49.370112] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.937 [2024-10-13 01:49:49.370141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.937 [2024-10-13 01:49:49.383499] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.937 [2024-10-13 01:49:49.383540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.937 [2024-10-13 01:49:49.395616] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.937 [2024-10-13 01:49:49.395657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.937 [2024-10-13 01:49:49.411512] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.937 [2024-10-13 01:49:49.411539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.937 [2024-10-13 01:49:49.422128] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.937 [2024-10-13 01:49:49.422159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.937 [2024-10-13 01:49:49.434301] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.937 [2024-10-13 01:49:49.434330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.937 [2024-10-13 01:49:49.445880] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.937 [2024-10-13 01:49:49.445910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.937 [2024-10-13 01:49:49.458004] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.937 [2024-10-13 01:49:49.458034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.937 [2024-10-13 01:49:49.469624] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.937 [2024-10-13 01:49:49.469650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.937 [2024-10-13 01:49:49.481668] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.937 [2024-10-13 01:49:49.481695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:03.937 [2024-10-13 01:49:49.496180] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:03.937 [2024-10-13 01:49:49.496210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.243 [2024-10-13 01:49:49.512126] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.243 [2024-10-13 01:49:49.512166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.243 [2024-10-13 01:49:49.527932] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.243 [2024-10-13 01:49:49.527962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.243 [2024-10-13 01:49:49.544837] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.243 [2024-10-13 01:49:49.544868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.243 [2024-10-13 01:49:49.555897] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.243 [2024-10-13 01:49:49.555927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.243 [2024-10-13 01:49:49.571734] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.243 [2024-10-13 01:49:49.571779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.243 [2024-10-13 01:49:49.587189] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.243 [2024-10-13 01:49:49.587219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.243 [2024-10-13 01:49:49.597489] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.243 [2024-10-13 01:49:49.597531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.243 [2024-10-13 01:49:49.610920] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.243 [2024-10-13 01:49:49.610951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.243 [2024-10-13 01:49:49.627943] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.243 [2024-10-13 01:49:49.627973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.243 [2024-10-13 01:49:49.644717] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.243 [2024-10-13 01:49:49.644745] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.243 [2024-10-13 01:49:49.655545] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.243 [2024-10-13 01:49:49.655572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.243 [2024-10-13 01:49:49.671974] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.243 [2024-10-13 01:49:49.672014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.243 [2024-10-13 01:49:49.686785] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.243 [2024-10-13 01:49:49.686816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.243 [2024-10-13 01:49:49.697125] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.243 [2024-10-13 01:49:49.697155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.243 [2024-10-13 01:49:49.710566] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.243 [2024-10-13 01:49:49.710592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.243 10525.00 IOPS, 82.23 MiB/s [2024-10-12T23:49:49.821Z] [2024-10-13 01:49:49.721216] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.243 [2024-10-13 01:49:49.721246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.243 [2024-10-13 01:49:49.734697] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.243 [2024-10-13 01:49:49.734724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.243 [2024-10-13 01:49:49.746074] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.243 [2024-10-13 01:49:49.746103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.243 [2024-10-13 01:49:49.758895] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.243 [2024-10-13 01:49:49.758925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.243 [2024-10-13 01:49:49.776394] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.243 [2024-10-13 01:49:49.776432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.243 [2024-10-13 01:49:49.787679] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.243 [2024-10-13 01:49:49.787704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.526 [2024-10-13 01:49:49.803567] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.526 [2024-10-13 01:49:49.803595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.526 [2024-10-13 01:49:49.813874] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.526 [2024-10-13 01:49:49.813903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.526 [2024-10-13 01:49:49.827093] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.526 [2024-10-13 01:49:49.827122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.526 [2024-10-13 
01:49:49.843904] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.526 [2024-10-13 01:49:49.843934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.526 [2024-10-13 01:49:49.860374] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.526 [2024-10-13 01:49:49.860403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.526 [2024-10-13 01:49:49.874584] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.526 [2024-10-13 01:49:49.874611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.526 [2024-10-13 01:49:49.885542] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.526 [2024-10-13 01:49:49.885568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.526 [2024-10-13 01:49:49.898880] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.526 [2024-10-13 01:49:49.898911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.526 [2024-10-13 01:49:49.915650] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.526 [2024-10-13 01:49:49.915676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.526 [2024-10-13 01:49:49.931382] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.526 [2024-10-13 01:49:49.931412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.526 [2024-10-13 01:49:49.942072] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.526 [2024-10-13 01:49:49.942102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.526 [2024-10-13 01:49:49.955268] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.526 [2024-10-13 01:49:49.955297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.526 [2024-10-13 01:49:49.967413] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.526 [2024-10-13 01:49:49.967442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.526 [2024-10-13 01:49:49.979430] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.526 [2024-10-13 01:49:49.979459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.526 [2024-10-13 01:49:49.996662] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.526 [2024-10-13 01:49:49.996688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.526 [2024-10-13 01:49:50.008619] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.526 [2024-10-13 01:49:50.008648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.526 [2024-10-13 01:49:50.021636] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.526 [2024-10-13 01:49:50.021665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.526 [2024-10-13 01:49:50.034865] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.526 [2024-10-13 01:49:50.034911] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.526 [2024-10-13 01:49:50.045771] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.526 [2024-10-13 01:49:50.045799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.526 [2024-10-13 01:49:50.059167] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.526 [2024-10-13 01:49:50.059196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.526 [2024-10-13 01:49:50.071592] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.526 [2024-10-13 01:49:50.071618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.526 [2024-10-13 01:49:50.086049] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.526 [2024-10-13 01:49:50.086081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.526 [2024-10-13 01:49:50.096462] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.526 [2024-10-13 01:49:50.096527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.785 [2024-10-13 01:49:50.109585] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.785 [2024-10-13 01:49:50.109612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.785 [2024-10-13 01:49:50.121706] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.785 [2024-10-13 01:49:50.121757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.785 [2024-10-13 01:49:50.134011] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.785 [2024-10-13 01:49:50.134041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.785 [2024-10-13 01:49:50.145644] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.785 [2024-10-13 01:49:50.145670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.785 [2024-10-13 01:49:50.157828] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.785 [2024-10-13 01:49:50.157858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.785 [2024-10-13 01:49:50.169695] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.785 [2024-10-13 01:49:50.169721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.785 [2024-10-13 01:49:50.181425] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.785 [2024-10-13 01:49:50.181455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.785 [2024-10-13 01:49:50.193615] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.785 [2024-10-13 01:49:50.193641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.785 [2024-10-13 01:49:50.205322] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.785 [2024-10-13 01:49:50.205351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.785 [2024-10-13 01:49:50.217838] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.785 [2024-10-13 01:49:50.217868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.785 [2024-10-13 01:49:50.229998] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.785 [2024-10-13 01:49:50.230027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.785 [2024-10-13 01:49:50.242509] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.785 [2024-10-13 01:49:50.242553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.785 [2024-10-13 01:49:50.259188] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.785 [2024-10-13 01:49:50.259219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.785 [2024-10-13 01:49:50.269552] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.785 [2024-10-13 01:49:50.269580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.785 [2024-10-13 01:49:50.282770] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.785 [2024-10-13 01:49:50.282807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.785 [2024-10-13 01:49:50.295052] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.785 [2024-10-13 01:49:50.295082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.785 [2024-10-13 01:49:50.307048] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.785 [2024-10-13 01:49:50.307077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.785 [2024-10-13 01:49:50.325366] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.785 [2024-10-13 01:49:50.325396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.785 [2024-10-13 01:49:50.337156] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.785 [2024-10-13 01:49:50.337186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.785 [2024-10-13 01:49:50.349535] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.785 [2024-10-13 01:49:50.349563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:04.785 [2024-10-13 01:49:50.361808] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:04.785 [2024-10-13 01:49:50.361835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.044 [2024-10-13 01:49:50.374078] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.044 [2024-10-13 01:49:50.374108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.044 [2024-10-13 01:49:50.385634] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.044 [2024-10-13 01:49:50.385661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.044 [2024-10-13 01:49:50.398282] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.044 [2024-10-13 01:49:50.398312] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.044 [2024-10-13 01:49:50.410453] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.044 [2024-10-13 01:49:50.410498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.044 [2024-10-13 01:49:50.427886] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.044 [2024-10-13 01:49:50.427916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.044 [2024-10-13 01:49:50.444442] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.044 [2024-10-13 01:49:50.444483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.044 [2024-10-13 01:49:50.455232] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.044 [2024-10-13 01:49:50.455262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.044 [2024-10-13 01:49:50.468418] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.044 [2024-10-13 01:49:50.468448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.044 [2024-10-13 01:49:50.484498] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.044 [2024-10-13 01:49:50.484541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.044 [2024-10-13 01:49:50.498926] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.044 [2024-10-13 01:49:50.498956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.044 [2024-10-13 01:49:50.509223] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.044 [2024-10-13 01:49:50.509253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.044 [2024-10-13 01:49:50.522232] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.044 [2024-10-13 01:49:50.522261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.044 [2024-10-13 01:49:50.534380] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.044 [2024-10-13 01:49:50.534412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.044 [2024-10-13 01:49:50.552834] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.044 [2024-10-13 01:49:50.552865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.044 [2024-10-13 01:49:50.564749] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.044 [2024-10-13 01:49:50.564795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.044 [2024-10-13 01:49:50.579776] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.044 [2024-10-13 01:49:50.579807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.044 [2024-10-13 01:49:50.595828] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.044 [2024-10-13 01:49:50.595859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.044 [2024-10-13 01:49:50.605917] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.044 [2024-10-13 01:49:50.605947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.044 [2024-10-13 01:49:50.617566] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.044 [2024-10-13 01:49:50.617594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.302 [2024-10-13 01:49:50.629972] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.302 [2024-10-13 01:49:50.630002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.302 [2024-10-13 01:49:50.642000] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.302 [2024-10-13 01:49:50.642030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.302 [2024-10-13 01:49:50.653935] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.302 [2024-10-13 01:49:50.653965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.302 [2024-10-13 01:49:50.665867] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.303 [2024-10-13 01:49:50.665898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.303 [2024-10-13 01:49:50.677804] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.303 [2024-10-13 01:49:50.677834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.303 [2024-10-13 01:49:50.689915] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.303 [2024-10-13 01:49:50.689945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.303 [2024-10-13 01:49:50.702329] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.303 [2024-10-13 01:49:50.702360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.303 [2024-10-13 01:49:50.714601] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.303 [2024-10-13 01:49:50.714627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.303 10483.00 IOPS, 81.90 MiB/s [2024-10-12T23:49:50.881Z] [2024-10-13 01:49:50.726009] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.303 [2024-10-13 01:49:50.726038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.303 [2024-10-13 01:49:50.738969] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.303 [2024-10-13 01:49:50.739000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.303 [2024-10-13 01:49:50.751225] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.303 [2024-10-13 01:49:50.751265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.303 [2024-10-13 01:49:50.763398] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.303 [2024-10-13 01:49:50.763428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.303 [2024-10-13 01:49:50.780590] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:39:05.303 [2024-10-13 01:49:50.780618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.303 [2024-10-13 01:49:50.797218] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.303 [2024-10-13 01:49:50.797249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.303 [2024-10-13 01:49:50.807737] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.303 [2024-10-13 01:49:50.807782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.303 [2024-10-13 01:49:50.823916] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.303 [2024-10-13 01:49:50.823947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.303 [2024-10-13 01:49:50.839729] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.303 [2024-10-13 01:49:50.839774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.303 [2024-10-13 01:49:50.857013] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.303 [2024-10-13 01:49:50.857047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.303 [2024-10-13 01:49:50.867294] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.303 [2024-10-13 01:49:50.867324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.303 [2024-10-13 01:49:50.880444] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.303 [2024-10-13 01:49:50.880487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.561 [2024-10-13 01:49:50.892428] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.561 [2024-10-13 01:49:50.892459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.561 [2024-10-13 01:49:50.907679] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.561 [2024-10-13 01:49:50.907707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.561 [2024-10-13 01:49:50.918083] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.561 [2024-10-13 01:49:50.918113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.561 [2024-10-13 01:49:50.931236] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.561 [2024-10-13 01:49:50.931266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.561 [2024-10-13 01:49:50.948611] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.561 [2024-10-13 01:49:50.948637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.561 [2024-10-13 01:49:50.961842] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.561 [2024-10-13 01:49:50.961873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.561 [2024-10-13 01:49:50.972486] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.561 [2024-10-13 01:49:50.972530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.561 [2024-10-13 01:49:50.988732] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.561 [2024-10-13 01:49:50.988775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.561 [2024-10-13 01:49:50.999215] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.561 [2024-10-13 01:49:50.999246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.561 [2024-10-13 01:49:51.014460] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.561 [2024-10-13 01:49:51.014525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.561 [2024-10-13 01:49:51.024925] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.561 [2024-10-13 01:49:51.024955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.561 [2024-10-13 01:49:51.037767] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.561 [2024-10-13 01:49:51.037799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.561 [2024-10-13 01:49:51.050228] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.561 [2024-10-13 01:49:51.050257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.561 [2024-10-13 01:49:51.062846] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.561 [2024-10-13 01:49:51.062880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.561 [2024-10-13 01:49:51.074352] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.561 [2024-10-13 01:49:51.074382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.561 [2024-10-13 01:49:51.087787] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.561 [2024-10-13 01:49:51.087817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.561 [2024-10-13 01:49:51.099782] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.561 [2024-10-13 01:49:51.099813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.561 [2024-10-13 01:49:51.111812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.561 [2024-10-13 01:49:51.111842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.561 [2024-10-13 01:49:51.127270] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.561 [2024-10-13 01:49:51.127300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.561 [2024-10-13 01:49:51.137909] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.561 [2024-10-13 01:49:51.137954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.819 [2024-10-13 01:49:51.151034] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.819 [2024-10-13 01:49:51.151064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.819 [2024-10-13 01:49:51.162392] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.819 [2024-10-13 01:49:51.162423] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.819 [2024-10-13 01:49:51.178110] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.819 [2024-10-13 01:49:51.178141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.819 [2024-10-13 01:49:51.188652] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.820 [2024-10-13 01:49:51.188679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.820 [2024-10-13 01:49:51.201765] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.820 [2024-10-13 01:49:51.201796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.820 [2024-10-13 01:49:51.214419] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.820 [2024-10-13 01:49:51.214451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.820 [2024-10-13 01:49:51.226593] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.820 [2024-10-13 01:49:51.226619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.820 [2024-10-13 01:49:51.237660] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.820 [2024-10-13 01:49:51.237687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.820 [2024-10-13 01:49:51.251188] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.820 [2024-10-13 01:49:51.251229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.820 [2024-10-13 01:49:51.263337] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.820 [2024-10-13 01:49:51.263367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.820 [2024-10-13 01:49:51.275656] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.820 [2024-10-13 01:49:51.275684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.820 [2024-10-13 01:49:51.287646] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.820 [2024-10-13 01:49:51.287683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.820 [2024-10-13 01:49:51.300003] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.820 [2024-10-13 01:49:51.300034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.820 [2024-10-13 01:49:51.312394] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.820 [2024-10-13 01:49:51.312425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.820 [2024-10-13 01:49:51.324581] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.820 [2024-10-13 01:49:51.324607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.820 [2024-10-13 01:49:51.341374] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.820 [2024-10-13 01:49:51.341405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.820 [2024-10-13 01:49:51.353386] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.820 [2024-10-13 01:49:51.353417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.820 [2024-10-13 01:49:51.365381] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.820 [2024-10-13 01:49:51.365412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.820 [2024-10-13 01:49:51.377302] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.820 [2024-10-13 01:49:51.377332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:05.820 [2024-10-13 01:49:51.389078] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:05.820 [2024-10-13 01:49:51.389108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.078 [2024-10-13 01:49:51.401158] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.078 [2024-10-13 01:49:51.401189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.078 [2024-10-13 01:49:51.416001] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.078 [2024-10-13 01:49:51.416032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.078 [2024-10-13 01:49:51.431729] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.078 [2024-10-13 01:49:51.431771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.078 [2024-10-13 01:49:51.442312] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.078 [2024-10-13 01:49:51.442342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.078 [2024-10-13 01:49:51.455420] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.078 [2024-10-13 01:49:51.455452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.078 [2024-10-13 01:49:51.467391] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.078 [2024-10-13 01:49:51.467422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.078 [2024-10-13 01:49:51.479037] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.078 [2024-10-13 01:49:51.479067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.078 [2024-10-13 01:49:51.494797] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.078 [2024-10-13 01:49:51.494836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.078 [2024-10-13 01:49:51.505482] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.078 [2024-10-13 01:49:51.505526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.078 [2024-10-13 01:49:51.518261] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.078 [2024-10-13 01:49:51.518291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.078 [2024-10-13 01:49:51.529907] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.078 [2024-10-13 01:49:51.529937] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.078 [2024-10-13 01:49:51.542076] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.078 [2024-10-13 01:49:51.542106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.078 [2024-10-13 01:49:51.553927] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.078 [2024-10-13 01:49:51.553956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.078 [2024-10-13 01:49:51.566134] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.078 [2024-10-13 01:49:51.566163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.078 [2024-10-13 01:49:51.577932] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.078 [2024-10-13 01:49:51.577963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.078 [2024-10-13 01:49:51.589838] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.078 [2024-10-13 01:49:51.589869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.078 [2024-10-13 01:49:51.601882] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.078 [2024-10-13 01:49:51.601912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.078 [2024-10-13 01:49:51.613682] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.078 [2024-10-13 01:49:51.613707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.078 [2024-10-13 01:49:51.625400] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.078 [2024-10-13 01:49:51.625429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.078 [2024-10-13 01:49:51.637613] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.078 [2024-10-13 01:49:51.637640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.078 [2024-10-13 01:49:51.650384] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.078 [2024-10-13 01:49:51.650414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.337 [2024-10-13 01:49:51.661863] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.337 [2024-10-13 01:49:51.661893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.337 [2024-10-13 01:49:51.673885] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.337 [2024-10-13 01:49:51.673914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.337 [2024-10-13 01:49:51.685925] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.337 [2024-10-13 01:49:51.685956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.337 [2024-10-13 01:49:51.698716] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.337 [2024-10-13 01:49:51.698744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.337 [2024-10-13 01:49:51.709558] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.337 [2024-10-13 01:49:51.709584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.337 [2024-10-13 01:49:51.722260] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.337 [2024-10-13 01:49:51.722290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.337 10507.33 IOPS, 82.09 MiB/s [2024-10-12T23:49:51.915Z] [2024-10-13 01:49:51.734438] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.337 [2024-10-13 01:49:51.734468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.337 [2024-10-13 01:49:51.745914] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.337 [2024-10-13 01:49:51.745944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.337 [2024-10-13 01:49:51.757743] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.337 [2024-10-13 01:49:51.757788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.337 [2024-10-13 01:49:51.769989] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.337 [2024-10-13 01:49:51.770019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.337 [2024-10-13 01:49:51.782013] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.337 [2024-10-13 01:49:51.782042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.337 [2024-10-13 01:49:51.794109] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.337 [2024-10-13 01:49:51.794138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.337 [2024-10-13 01:49:51.806489] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.337 [2024-10-13 01:49:51.806535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.337 [2024-10-13 01:49:51.818716] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.337 [2024-10-13 01:49:51.818741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.337 [2024-10-13 01:49:51.830132] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.337 [2024-10-13 01:49:51.830162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.337 [2024-10-13 01:49:51.843655] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.337 [2024-10-13 01:49:51.843682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.337 [2024-10-13 01:49:51.855215] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.338 [2024-10-13 01:49:51.855244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.338 [2024-10-13 01:49:51.867517] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.338 [2024-10-13 01:49:51.867545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.338 [2024-10-13 01:49:51.879354] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:39:06.338 [2024-10-13 01:49:51.879384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.338 [2024-10-13 01:49:51.896670] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.338 [2024-10-13 01:49:51.896697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.338 [2024-10-13 01:49:51.907740] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.338 [2024-10-13 01:49:51.907795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.596 [2024-10-13 01:49:51.924538] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.596 [2024-10-13 01:49:51.924565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.596 [2024-10-13 01:49:51.936322] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.596 [2024-10-13 01:49:51.936351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.596 [2024-10-13 01:49:51.950639] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.596 [2024-10-13 01:49:51.950667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.596 [2024-10-13 01:49:51.960806] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.596 [2024-10-13 01:49:51.960851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.596 [2024-10-13 01:49:51.974390] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.596 [2024-10-13 01:49:51.974420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.596 [2024-10-13 01:49:51.987112] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.596 [2024-10-13 01:49:51.987143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.596 [2024-10-13 01:49:52.003679] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.596 [2024-10-13 01:49:52.003707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.596 [2024-10-13 01:49:52.018331] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.596 [2024-10-13 01:49:52.018360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.596 [2024-10-13 01:49:52.029417] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.596 [2024-10-13 01:49:52.029447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.596 [2024-10-13 01:49:52.042687] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.596 [2024-10-13 01:49:52.042713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.596 [2024-10-13 01:49:52.053592] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.596 [2024-10-13 01:49:52.053618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.596 [2024-10-13 01:49:52.066642] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.596 [2024-10-13 01:49:52.066667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.596 [2024-10-13 01:49:52.077484] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.596 [2024-10-13 01:49:52.077528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.596 [2024-10-13 01:49:52.090579] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.596 [2024-10-13 01:49:52.090605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.596 [2024-10-13 01:49:52.107708] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.596 [2024-10-13 01:49:52.107734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.596 [2024-10-13 01:49:52.118253] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.596 [2024-10-13 01:49:52.118283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.596 [2024-10-13 01:49:52.130977] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.596 [2024-10-13 01:49:52.131007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.596 [2024-10-13 01:49:52.143691] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.596 [2024-10-13 01:49:52.143718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.596 [2024-10-13 01:49:52.155345] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.596 [2024-10-13 01:49:52.155375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.596 [2024-10-13 01:49:52.167236] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.596 [2024-10-13 01:49:52.167266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.855 [2024-10-13 01:49:52.179163] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.855 [2024-10-13 01:49:52.179193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.855 [2024-10-13 01:49:52.190701] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.855 [2024-10-13 01:49:52.190727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.855 [2024-10-13 01:49:52.203363] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.855 [2024-10-13 01:49:52.203392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.855 [2024-10-13 01:49:52.214863] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.855 [2024-10-13 01:49:52.214892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.855 [2024-10-13 01:49:52.231596] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.855 [2024-10-13 01:49:52.231624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.855 [2024-10-13 01:49:52.242908] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.855 [2024-10-13 01:49:52.242939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:06.855 [2024-10-13 01:49:52.256105] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:06.855 [2024-10-13 01:49:52.256136] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:06.855 [2024-10-13 01:49:52.268171] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:06.855 [2024-10-13 01:49:52.268198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
(the same pair of errors recurs with fresh timestamps, at roughly 10-20 ms intervals, from 01:49:52.280 through 01:49:52.723 as nvmf_subsystem_add_ns keeps being retried for NSID 1)
00:39:07.372 10503.25 IOPS, 82.06 MiB/s [2024-10-12T23:49:52.950Z]
(the retries and the identical error pair continue from 01:49:52.734 through 01:49:53.570)
00:39:08.148 [2024-10-13 01:49:53.581843] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:08.148 [2024-10-13 01:49:53.581873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
(the same failure repeats at 01:49:53.594, .612, .627, .641, .652, .668, .681, .692, .705 and .720)
00:39:08.407 10532.40 IOPS, 82.28 MiB/s [2024-10-12T23:49:53.985Z]
(two more add_ns attempts fail at 01:49:53.734 and 01:49:53.742)
00:39:08.407 Latency(us)
00:39:08.407 [2024-10-12T23:49:53.985Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s       TO/s    Average        min        max
00:39:08.407 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:39:08.407 Nvme1n1            :       5.01   10532.07      82.28       0.00       0.00   12135.06    2961.26   19418.07
00:39:08.407 [2024-10-12T23:49:53.985Z] ===================================================================================================================
00:39:08.407 [2024-10-12T23:49:53.985Z] Total              :              10532.07      82.28       0.00       0.00   12135.06    2961.26   19418.07
(after the bdevperf summary the add_ns retries keep failing from 01:49:53.750 through 01:49:53.846)
00:39:08.408 [2024-10-13 01:49:53.854211] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:08.408 [2024-10-13 01:49:53.854261]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.408 [2024-10-13 01:49:53.862210] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.408 [2024-10-13 01:49:53.862258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.408 [2024-10-13 01:49:53.870193] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.408 [2024-10-13 01:49:53.870236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.408 [2024-10-13 01:49:53.878155] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.408 [2024-10-13 01:49:53.878183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.408 [2024-10-13 01:49:53.886159] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.408 [2024-10-13 01:49:53.886190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.408 [2024-10-13 01:49:53.894207] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.408 [2024-10-13 01:49:53.894253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.408 [2024-10-13 01:49:53.902208] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.408 [2024-10-13 01:49:53.902255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.408 [2024-10-13 01:49:53.910148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.408 [2024-10-13 01:49:53.910172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.408 [2024-10-13 01:49:53.918147] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.408 [2024-10-13 01:49:53.918170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.408 [2024-10-13 01:49:53.926149] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:08.408 [2024-10-13 01:49:53.926173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:08.408 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1800860) - No such process 00:39:08.408 01:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1800860 00:39:08.408 01:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:08.408 01:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:08.408 01:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:08.408 01:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:08.408 01:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:39:08.408 01:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:08.408 01:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:08.408 delay0 00:39:08.408 01:49:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:08.408 01:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:39:08.408 01:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:08.408 01:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:08.408 01:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:08.408 01:49:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:39:08.666 [2024-10-13 01:49:53.998402] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:39:16.819 Initializing NVMe Controllers 00:39:16.819 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:16.819 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:39:16.819 Initialization complete. Launching workers. 00:39:16.819 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 227, failed: 25966 00:39:16.819 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 26043, failed to submit 150 00:39:16.819 success 25968, unsuccessful 75, failed 0 00:39:16.819 01:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:39:16.819 01:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:39:16.819 01:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:39:16.819 01:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:39:16.819 01:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:16.819 01:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:39:16.819 01:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:16.819 01:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:16.819 rmmod nvme_tcp 00:39:16.819 rmmod nvme_fabrics 00:39:16.819 rmmod nvme_keyring 00:39:16.819 01:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:16.819 01:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:39:16.819 01:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:39:16.819 01:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 1799655 ']' 00:39:16.819 01:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 1799655 00:39:16.819 01:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 1799655 ']' 00:39:16.819 01:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 1799655 
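(For reference, the namespace swap and abort run traced above reduce to the short sequence below. This is a sketch only: rpc.py is assumed to be the tool behind the harness's rpc_cmd wrapper, paths are assumed relative to the SPDK checkout, and all option values are copied from the traced commands.

    # swap namespace 1 of cnode1 onto a delay bdev backed by malloc0, then run the abort example against it over TCP
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
)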
00:39:16.819 01:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:39:16.819 01:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:16.819 01:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1799655 00:39:16.819 01:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:39:16.819 01:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:39:16.819 01:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1799655' 00:39:16.819 killing process with pid 1799655 00:39:16.819 01:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 1799655 00:39:16.819 01:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 1799655 00:39:16.819 01:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:39:16.820 01:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:39:16.820 01:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:39:16.820 01:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:39:16.820 01:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:39:16.820 01:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:39:16.820 01:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:39:16.820 01:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:16.820 01:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:16.820 01:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:16.820 01:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:16.820 01:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:18.197 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:18.197 00:39:18.197 real 0m28.778s 00:39:18.197 user 0m40.703s 00:39:18.197 sys 0m10.391s 00:39:18.197 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:18.197 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:18.197 ************************************ 00:39:18.197 END TEST nvmf_zcopy 00:39:18.197 ************************************ 00:39:18.197 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:39:18.197 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 
-le 1 ']' 00:39:18.197 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:18.197 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:18.197 ************************************ 00:39:18.197 START TEST nvmf_nmic 00:39:18.197 ************************************ 00:39:18.197 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:39:18.197 * Looking for test storage... 00:39:18.197 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:18.197 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:18.197 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:39:18.197 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:18.197 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:18.197 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:18.197 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:18.197 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:18.197 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:39:18.197 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:39:18.197 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:39:18.197 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:39:18.197 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:39:18.197 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:39:18.197 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:39:18.197 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:18.197 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:39:18.197 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:39:18.197 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:18.197 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:18.197 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:39:18.197 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:39:18.197 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:18.197 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:39:18.197 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:39:18.197 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:39:18.197 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:39:18.197 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:18.197 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:39:18.197 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:39:18.197 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:18.197 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:18.197 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:39:18.197 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:18.197 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:18.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:18.197 --rc genhtml_branch_coverage=1 00:39:18.197 --rc genhtml_function_coverage=1 00:39:18.197 --rc genhtml_legend=1 00:39:18.197 --rc geninfo_all_blocks=1 00:39:18.197 --rc geninfo_unexecuted_blocks=1 00:39:18.197 00:39:18.197 ' 00:39:18.197 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:18.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:18.197 --rc genhtml_branch_coverage=1 00:39:18.197 --rc genhtml_function_coverage=1 00:39:18.197 --rc genhtml_legend=1 00:39:18.197 --rc geninfo_all_blocks=1 00:39:18.197 --rc geninfo_unexecuted_blocks=1 00:39:18.197 00:39:18.197 ' 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:18.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:18.198 --rc genhtml_branch_coverage=1 00:39:18.198 --rc genhtml_function_coverage=1 00:39:18.198 --rc genhtml_legend=1 00:39:18.198 --rc geninfo_all_blocks=1 00:39:18.198 --rc geninfo_unexecuted_blocks=1 00:39:18.198 00:39:18.198 ' 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:18.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:18.198 --rc genhtml_branch_coverage=1 00:39:18.198 --rc genhtml_function_coverage=1 00:39:18.198 --rc genhtml_legend=1 00:39:18.198 --rc geninfo_all_blocks=1 00:39:18.198 --rc geninfo_unexecuted_blocks=1 00:39:18.198 00:39:18.198 ' 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:18.198 01:50:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:39:18.198 01:50:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:20.100 01:50:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:20.100 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:20.100 01:50:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:20.100 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:20.100 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:20.101 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:20.101 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:20.101 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:20.101 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:20.101 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:20.101 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:20.101 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:20.101 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:20.101 
01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:20.101 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:20.101 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:20.101 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:20.101 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:20.101 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:20.101 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:39:20.101 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:39:20.101 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:39:20.101 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:39:20.101 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:39:20.101 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:20.101 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:20.101 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:20.101 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:20.101 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:20.101 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:20.101 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:20.101 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:20.101 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:20.101 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:20.101 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:20.101 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:20.101 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:20.101 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:20.101 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:20.359 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:20.359 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
00:39:20.359 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:20.360 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:20.360 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:20.360 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:20.360 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:20.360 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:20.360 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:20.360 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:39:20.360 00:39:20.360 --- 10.0.0.2 ping statistics --- 00:39:20.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:20.360 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:39:20.360 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:20.360 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:20.360 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:39:20.360 00:39:20.360 --- 10.0.0.1 ping statistics --- 00:39:20.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:20.360 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:39:20.360 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:20.360 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:39:20.360 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:39:20.360 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:20.360 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:39:20.360 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:39:20.360 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:20.360 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:39:20.360 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:39:20.360 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:39:20.360 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:39:20.360 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:20.360 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:20.360 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=1804353 00:39:20.360 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:39:20.360 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 1804353 00:39:20.360 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 1804353 ']' 00:39:20.360 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:20.360 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:20.360 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:20.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:20.360 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:20.360 01:50:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:20.360 [2024-10-13 01:50:05.840883] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:20.360 [2024-10-13 01:50:05.841970] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:39:20.360 [2024-10-13 01:50:05.842034] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:20.360 [2024-10-13 01:50:05.914024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:20.618 [2024-10-13 01:50:05.965941] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:20.618 [2024-10-13 01:50:05.965999] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:20.618 [2024-10-13 01:50:05.966014] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:20.618 [2024-10-13 01:50:05.966025] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:20.618 [2024-10-13 01:50:05.966035] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:20.618 [2024-10-13 01:50:05.967691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:20.618 [2024-10-13 01:50:05.967760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:20.618 [2024-10-13 01:50:05.967763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:20.618 [2024-10-13 01:50:05.967718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:20.618 [2024-10-13 01:50:06.061416] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:20.618 [2024-10-13 01:50:06.061619] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:20.618 [2024-10-13 01:50:06.061904] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:39:20.618 [2024-10-13 01:50:06.062548] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:20.618 [2024-10-13 01:50:06.062784] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:20.618 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:20.618 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:39:20.618 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:39:20.618 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:20.618 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:20.618 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:20.618 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:20.618 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:20.618 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:20.618 [2024-10-13 01:50:06.116449] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:20.618 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:20.618 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:20.618 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:20.618 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:20.619 Malloc0 00:39:20.619 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:20.619 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:20.619 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:20.619 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:20.619 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:20.619 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:20.619 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:20.619 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:20.619 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:20.619 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:20.619 
01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:20.619 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:20.619 [2024-10-13 01:50:06.180715] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:20.619 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:20.619 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:39:20.619 test case1: single bdev can't be used in multiple subsystems 00:39:20.619 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:39:20.619 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:20.619 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:20.619 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:20.619 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:39:20.619 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:20.619 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:20.877 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:20.877 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:39:20.877 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:39:20.877 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:20.877 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:20.877 [2024-10-13 01:50:06.204401] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:39:20.877 [2024-10-13 01:50:06.204432] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:39:20.877 [2024-10-13 01:50:06.204462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:20.877 request: 00:39:20.877 { 00:39:20.877 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:39:20.877 "namespace": { 00:39:20.877 "bdev_name": "Malloc0", 00:39:20.877 "no_auto_visible": false 00:39:20.877 }, 00:39:20.877 "method": "nvmf_subsystem_add_ns", 00:39:20.877 "req_id": 1 00:39:20.877 } 00:39:20.877 Got JSON-RPC error response 00:39:20.877 response: 00:39:20.877 { 00:39:20.877 "code": -32602, 00:39:20.877 "message": "Invalid parameters" 00:39:20.877 } 00:39:20.877 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:39:20.877 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:39:20.877 01:50:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:39:20.877 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:39:20.877 Adding namespace failed - expected result. 00:39:20.877 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:39:20.877 test case2: host connect to nvmf target in multiple paths 00:39:20.877 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:39:20.877 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:20.877 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:20.877 [2024-10-13 01:50:06.212512] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:39:20.877 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:20.877 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:20.877 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:39:21.135 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:39:21.135 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:39:21.136 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:39:21.136 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:39:21.136 01:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:39:23.664 01:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:39:23.664 01:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:39:23.664 01:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:39:23.664 01:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:39:23.664 01:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:39:23.664 01:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:39:23.664 01:50:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:39:23.664 [global] 00:39:23.664 thread=1 00:39:23.664 invalidate=1 
00:39:23.664 rw=write 00:39:23.664 time_based=1 00:39:23.664 runtime=1 00:39:23.664 ioengine=libaio 00:39:23.664 direct=1 00:39:23.664 bs=4096 00:39:23.664 iodepth=1 00:39:23.664 norandommap=0 00:39:23.664 numjobs=1 00:39:23.664 00:39:23.664 verify_dump=1 00:39:23.664 verify_backlog=512 00:39:23.664 verify_state_save=0 00:39:23.664 do_verify=1 00:39:23.664 verify=crc32c-intel 00:39:23.664 [job0] 00:39:23.664 filename=/dev/nvme0n1 00:39:23.664 Could not set queue depth (nvme0n1) 00:39:23.664 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:23.664 fio-3.35 00:39:23.664 Starting 1 thread 00:39:24.596 00:39:24.596 job0: (groupid=0, jobs=1): err= 0: pid=1804744: Sun Oct 13 01:50:10 2024 00:39:24.596 read: IOPS=22, BW=88.6KiB/s (90.8kB/s)(92.0KiB/1038msec) 00:39:24.596 slat (nsec): min=6575, max=34140, avg=20561.91, stdev=9537.84 00:39:24.596 clat (usec): min=40502, max=42101, avg=41390.73, stdev=530.97 00:39:24.596 lat (usec): min=40509, max=42117, avg=41411.29, stdev=535.06 00:39:24.596 clat percentiles (usec): 00:39:24.596 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:39:24.596 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:39:24.596 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:39:24.596 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:24.596 | 99.99th=[42206] 00:39:24.596 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:39:24.596 slat (nsec): min=6745, max=30089, avg=7825.86, stdev=2149.57 00:39:24.596 clat (usec): min=141, max=267, avg=156.08, stdev=11.72 00:39:24.596 lat (usec): min=148, max=296, avg=163.90, stdev=12.21 00:39:24.596 clat percentiles (usec): 00:39:24.596 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 147], 20.00th=[ 149], 00:39:24.596 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 153], 60.00th=[ 155], 00:39:24.596 | 70.00th=[ 159], 80.00th=[ 163], 90.00th=[ 169], 95.00th=[ 172], 00:39:24.596 | 99.00th=[ 192], 99.50th=[ 243], 99.90th=[ 269], 99.95th=[ 269], 00:39:24.596 | 99.99th=[ 269] 00:39:24.596 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:39:24.596 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:24.596 lat (usec) : 250=95.33%, 500=0.37% 00:39:24.596 lat (msec) : 50=4.30% 00:39:24.596 cpu : usr=0.29%, sys=0.48%, ctx=535, majf=0, minf=1 00:39:24.596 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:24.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:24.596 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:24.596 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:24.596 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:24.596 00:39:24.596 Run status group 0 (all jobs): 00:39:24.596 READ: bw=88.6KiB/s (90.8kB/s), 88.6KiB/s-88.6KiB/s (90.8kB/s-90.8kB/s), io=92.0KiB (94.2kB), run=1038-1038msec 00:39:24.596 WRITE: bw=1973KiB/s (2020kB/s), 1973KiB/s-1973KiB/s (2020kB/s-2020kB/s), io=2048KiB (2097kB), run=1038-1038msec 00:39:24.596 00:39:24.596 Disk stats (read/write): 00:39:24.596 nvme0n1: ios=69/512, merge=0/0, ticks=815/79, in_queue=894, util=91.58% 00:39:24.596 01:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:24.596 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:39:24.596 01:50:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:24.596 01:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:39:24.596 01:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:39:24.596 01:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:24.596 01:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:39:24.596 01:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:24.597 01:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:39:24.597 01:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:39:24.597 01:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:39:24.597 01:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:39:24.597 01:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:39:24.597 01:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:24.597 01:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:39:24.597 01:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:24.597 01:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:24.597 rmmod nvme_tcp 00:39:24.597 rmmod nvme_fabrics 00:39:24.597 rmmod nvme_keyring 00:39:24.855 01:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:24.855 01:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:39:24.855 01:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:39:24.855 01:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 1804353 ']' 00:39:24.855 01:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 1804353 00:39:24.855 01:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 1804353 ']' 00:39:24.855 01:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 1804353 00:39:24.855 01:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:39:24.855 01:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:24.855 01:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1804353 00:39:24.855 01:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:24.855 01:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:24.855 01:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 1804353' 00:39:24.855 killing process with pid 1804353 00:39:24.855 01:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 1804353 00:39:24.855 01:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 1804353 00:39:25.114 01:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:39:25.114 01:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:39:25.114 01:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:39:25.114 01:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:39:25.114 01:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:39:25.114 01:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:39:25.114 01:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:39:25.114 01:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:25.114 01:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:25.114 01:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:25.114 01:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:25.114 01:50:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:27.020 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:27.020 00:39:27.020 real 0m8.974s 00:39:27.020 user 0m16.821s 00:39:27.020 sys 0m3.292s 00:39:27.020 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:27.020 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:27.020 ************************************ 00:39:27.020 END TEST nvmf_nmic 00:39:27.020 ************************************ 00:39:27.020 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:39:27.020 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:27.020 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:27.020 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:27.020 ************************************ 00:39:27.020 START TEST nvmf_fio_target 00:39:27.020 ************************************ 00:39:27.020 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:39:27.280 * Looking for test storage... 
00:39:27.280 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:27.280 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:27.280 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:39:27.280 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:27.280 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:27.280 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:27.280 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:27.280 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:27.280 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:39:27.280 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:39:27.280 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:39:27.280 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:39:27.280 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:39:27.280 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:39:27.280 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:39:27.280 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:27.280 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:39:27.280 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:39:27.280 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:27.280 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:27.280 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:39:27.280 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:39:27.280 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:27.280 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:39:27.280 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:39:27.280 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:39:27.280 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:39:27.280 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:27.280 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:39:27.280 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:39:27.280 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:27.280 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:27.280 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:39:27.280 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:27.280 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:27.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:27.280 --rc genhtml_branch_coverage=1 00:39:27.280 --rc genhtml_function_coverage=1 00:39:27.280 --rc genhtml_legend=1 00:39:27.280 --rc geninfo_all_blocks=1 00:39:27.280 --rc geninfo_unexecuted_blocks=1 00:39:27.280 00:39:27.280 ' 00:39:27.280 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:27.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:27.280 --rc genhtml_branch_coverage=1 00:39:27.280 --rc genhtml_function_coverage=1 00:39:27.280 --rc genhtml_legend=1 00:39:27.280 --rc geninfo_all_blocks=1 00:39:27.280 --rc geninfo_unexecuted_blocks=1 00:39:27.280 00:39:27.280 ' 00:39:27.280 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:27.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:27.280 --rc genhtml_branch_coverage=1 00:39:27.280 --rc genhtml_function_coverage=1 00:39:27.280 --rc genhtml_legend=1 00:39:27.280 --rc geninfo_all_blocks=1 00:39:27.280 --rc geninfo_unexecuted_blocks=1 00:39:27.280 00:39:27.280 ' 00:39:27.280 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:27.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:27.280 --rc genhtml_branch_coverage=1 00:39:27.280 --rc genhtml_function_coverage=1 00:39:27.280 --rc genhtml_legend=1 00:39:27.280 --rc geninfo_all_blocks=1 00:39:27.280 --rc geninfo_unexecuted_blocks=1 00:39:27.280 
00:39:27.280 ' 00:39:27.280 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:27.280 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:39:27.280 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:27.280 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:27.280 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:27.280 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:27.280 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:27.280 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:27.280 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:27.280 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:27.280 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:27.280 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:27.280 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:27.280 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:27.281 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:27.281 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:27.281 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:27.281 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:27.281 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:27.281 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:39:27.281 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:27.281 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:27.281 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:27.281 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:27.281 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:27.281 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:27.281 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:39:27.281 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:27.281 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:39:27.281 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:27.281 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:27.281 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:27.281 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:27.281 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:39:27.281 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:27.281 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:27.281 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:27.281 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:27.281 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:27.281 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:27.281 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:27.281 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:27.281 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:39:27.281 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:39:27.281 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:27.281 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:39:27.281 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:39:27.281 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:39:27.281 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:27.281 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:27.281 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:27.281 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:39:27.281 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:39:27.281 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:39:27.281 01:50:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:29.182 01:50:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:29.182 01:50:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:29.182 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:29.182 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:29.182 Found net 
devices under 0000:0a:00.0: cvl_0_0 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:29.182 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:29.182 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:29.183 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:39:29.183 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:39:29.183 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:39:29.183 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:39:29.183 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:39:29.183 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:29.183 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:29.183 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:29.183 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:29.183 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:29.183 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:29.183 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:29.183 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:29.183 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:29.183 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:29.183 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:39:29.183 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:29.183 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:29.183 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:29.183 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:29.183 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:29.183 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:29.183 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:29.183 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:29.440 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:29.440 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:29.440 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:29.440 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:29.440 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:29.440 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:39:29.440 00:39:29.440 --- 10.0.0.2 ping statistics --- 00:39:29.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:29.440 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:39:29.440 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:29.440 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:29.440 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:39:29.440 00:39:29.440 --- 10.0.0.1 ping statistics --- 00:39:29.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:29.440 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:39:29.440 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:29.440 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:39:29.440 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:39:29.440 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:29.440 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:39:29.440 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:39:29.440 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:29.440 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:39:29.440 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:39:29.440 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:39:29.440 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:39:29.440 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:29.440 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:29.440 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=1806822 00:39:29.440 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:39:29.440 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 1806822 00:39:29.440 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 1806822 ']' 00:39:29.440 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:29.440 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:29.440 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:29.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
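For reference, the namespace wiring that nvmf_tcp_init performs in the trace above reduces to the shell sequence below (a minimal sketch: interface names, addresses and the port 4420 rule are copied from the trace; the iptables comment match that the ipts helper adds is omitted):

    # start from clean interfaces, then move the target-side port into its own netns
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator keeps 10.0.0.1 on cvl_0_1, target gets 10.0.0.2 inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # let NVMe/TCP (port 4420) through on the initiator-side interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # both directions must answer before the test proceeds
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1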
00:39:29.440 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:29.440 01:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:29.440 [2024-10-13 01:50:14.857399] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:29.440 [2024-10-13 01:50:14.858495] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:39:29.440 [2024-10-13 01:50:14.858558] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:29.440 [2024-10-13 01:50:14.922697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:29.440 [2024-10-13 01:50:14.970854] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:29.440 [2024-10-13 01:50:14.970912] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:29.440 [2024-10-13 01:50:14.970927] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:29.440 [2024-10-13 01:50:14.970939] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:29.440 [2024-10-13 01:50:14.970948] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:29.440 [2024-10-13 01:50:14.972595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:29.440 [2024-10-13 01:50:14.972657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:29.440 [2024-10-13 01:50:14.972713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:29.440 [2024-10-13 01:50:14.972716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:29.699 [2024-10-13 01:50:15.057102] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:29.699 [2024-10-13 01:50:15.057307] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:29.699 [2024-10-13 01:50:15.057648] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:29.699 [2024-10-13 01:50:15.058109] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:29.699 [2024-10-13 01:50:15.058335] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
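The target is then launched inside that namespace with interrupt mode enabled, and everything the fio jobs below exercise is provisioned over its RPC socket. Condensed from the nvmfappstart line above and the target/fio.sh rpc.py calls that follow (a sketch only: paths are shortened, the seven individual bdev_malloc_create calls are collapsed into a loop, and the harness's waitforlisten/waitforserial polling is reduced to comments):

    RPC=./scripts/rpc.py
    # nvmf_tgt on cores 0-3, interrupt mode, running inside the target namespace
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    # (the harness waits for /var/tmp/spdk.sock before issuing RPCs)
    $RPC nvmf_create_transport -t tcp -o -u 8192
    # seven 64 MB malloc bdevs with 512-byte blocks: Malloc0/1 are exported directly,
    # Malloc2/3 feed raid0, Malloc4/5/6 feed concat0
    for i in $(seq 7); do $RPC bdev_malloc_create 64 512; done
    $RPC bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    $RPC bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
    # initiator side: connect to the 10.0.0.2:4420 listener, then wait until lsblk
    # reports four devices with serial SPDKISFASTANDAWESOME
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid=5b23e107-7094-e311-b1cb-001e67a97d55

That provisioning is what yields the four block devices (/dev/nvme0n1 through /dev/nvme0n4) that each fio-wrapper run below targets.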
00:39:29.699 01:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:29.699 01:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:39:29.699 01:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:39:29.699 01:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:29.699 01:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:29.699 01:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:29.699 01:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:29.957 [2024-10-13 01:50:15.417393] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:29.957 01:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:30.214 01:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:39:30.215 01:50:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:30.781 01:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:39:30.781 01:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:31.038 01:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:39:31.038 01:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:31.297 01:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:39:31.297 01:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:39:31.555 01:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:31.813 01:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:39:31.813 01:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:32.070 01:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:39:32.070 01:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:32.328 01:50:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:39:32.328 01:50:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:39:32.590 01:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:32.851 01:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:39:32.851 01:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:33.109 01:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:39:33.109 01:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:39:33.367 01:50:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:33.625 [2024-10-13 01:50:19.181530] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:33.625 01:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:39:34.190 01:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:39:34.191 01:50:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:34.449 01:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:39:34.449 01:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:39:34.449 01:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:39:34.449 01:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:39:34.449 01:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:39:34.449 01:50:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:39:37.023 01:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:39:37.023 01:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o 
NAME,SERIAL 00:39:37.023 01:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:39:37.023 01:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:39:37.023 01:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:39:37.023 01:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:39:37.023 01:50:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:39:37.023 [global] 00:39:37.023 thread=1 00:39:37.023 invalidate=1 00:39:37.023 rw=write 00:39:37.023 time_based=1 00:39:37.023 runtime=1 00:39:37.023 ioengine=libaio 00:39:37.023 direct=1 00:39:37.023 bs=4096 00:39:37.023 iodepth=1 00:39:37.023 norandommap=0 00:39:37.023 numjobs=1 00:39:37.023 00:39:37.023 verify_dump=1 00:39:37.023 verify_backlog=512 00:39:37.023 verify_state_save=0 00:39:37.023 do_verify=1 00:39:37.023 verify=crc32c-intel 00:39:37.023 [job0] 00:39:37.023 filename=/dev/nvme0n1 00:39:37.023 [job1] 00:39:37.023 filename=/dev/nvme0n2 00:39:37.023 [job2] 00:39:37.023 filename=/dev/nvme0n3 00:39:37.023 [job3] 00:39:37.023 filename=/dev/nvme0n4 00:39:37.023 Could not set queue depth (nvme0n1) 00:39:37.023 Could not set queue depth (nvme0n2) 00:39:37.023 Could not set queue depth (nvme0n3) 00:39:37.023 Could not set queue depth (nvme0n4) 00:39:37.023 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:37.023 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:37.023 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:37.023 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:37.023 fio-3.35 00:39:37.023 Starting 4 threads 00:39:37.956 00:39:37.956 job0: (groupid=0, jobs=1): err= 0: pid=1807881: Sun Oct 13 01:50:23 2024 00:39:37.956 read: IOPS=22, BW=89.1KiB/s (91.2kB/s)(92.0KiB/1033msec) 00:39:37.956 slat (nsec): min=6709, max=33462, avg=21781.70, stdev=10403.71 00:39:37.956 clat (usec): min=290, max=41034, avg=39165.84, stdev=8475.67 00:39:37.956 lat (usec): min=307, max=41067, avg=39187.62, stdev=8476.78 00:39:37.956 clat percentiles (usec): 00:39:37.956 | 1.00th=[ 289], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:39:37.956 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:37.956 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:37.956 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:37.956 | 99.99th=[41157] 00:39:37.956 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:39:37.956 slat (nsec): min=6123, max=49202, avg=7750.12, stdev=2834.55 00:39:37.956 clat (usec): min=156, max=401, avg=245.89, stdev=16.90 00:39:37.956 lat (usec): min=164, max=410, avg=253.64, stdev=16.68 00:39:37.956 clat percentiles (usec): 00:39:37.956 | 1.00th=[ 206], 5.00th=[ 231], 10.00th=[ 239], 20.00th=[ 243], 00:39:37.956 | 30.00th=[ 245], 40.00th=[ 245], 50.00th=[ 245], 60.00th=[ 247], 00:39:37.956 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 251], 95.00th=[ 255], 00:39:37.956 | 
99.00th=[ 318], 99.50th=[ 388], 99.90th=[ 400], 99.95th=[ 400], 00:39:37.956 | 99.99th=[ 400] 00:39:37.956 bw ( KiB/s): min= 4096, max= 4096, per=29.51%, avg=4096.00, stdev= 0.00, samples=1 00:39:37.956 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:37.956 lat (usec) : 250=84.30%, 500=11.59% 00:39:37.956 lat (msec) : 50=4.11% 00:39:37.956 cpu : usr=0.39%, sys=0.10%, ctx=535, majf=0, minf=1 00:39:37.956 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:37.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:37.956 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:37.956 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:37.956 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:37.956 job1: (groupid=0, jobs=1): err= 0: pid=1807882: Sun Oct 13 01:50:23 2024 00:39:37.956 read: IOPS=129, BW=517KiB/s (529kB/s)(532KiB/1029msec) 00:39:37.956 slat (nsec): min=5447, max=34287, avg=8747.48, stdev=7239.49 00:39:37.956 clat (usec): min=270, max=44005, avg=6752.67, stdev=14945.31 00:39:37.956 lat (usec): min=276, max=44022, avg=6761.41, stdev=14951.05 00:39:37.956 clat percentiles (usec): 00:39:37.956 | 1.00th=[ 273], 5.00th=[ 277], 10.00th=[ 277], 20.00th=[ 281], 00:39:37.956 | 30.00th=[ 285], 40.00th=[ 289], 50.00th=[ 293], 60.00th=[ 302], 00:39:37.956 | 70.00th=[ 310], 80.00th=[ 420], 90.00th=[41157], 95.00th=[41157], 00:39:37.956 | 99.00th=[41157], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:39:37.956 | 99.99th=[43779] 00:39:37.956 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:39:37.956 slat (nsec): min=5531, max=35556, avg=7941.49, stdev=2649.21 00:39:37.956 clat (usec): min=159, max=408, avg=242.97, stdev=26.99 00:39:37.956 lat (usec): min=165, max=444, avg=250.91, stdev=26.68 00:39:37.956 clat percentiles (usec): 00:39:37.956 | 1.00th=[ 169], 5.00th=[ 208], 10.00th=[ 219], 20.00th=[ 227], 00:39:37.956 | 30.00th=[ 233], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 245], 00:39:37.956 | 70.00th=[ 249], 80.00th=[ 258], 90.00th=[ 269], 95.00th=[ 289], 00:39:37.956 | 99.00th=[ 375], 99.50th=[ 379], 99.90th=[ 408], 99.95th=[ 408], 00:39:37.956 | 99.99th=[ 408] 00:39:37.956 bw ( KiB/s): min= 4096, max= 4096, per=29.51%, avg=4096.00, stdev= 0.00, samples=1 00:39:37.956 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:37.956 lat (usec) : 250=56.12%, 500=40.47%, 750=0.16% 00:39:37.956 lat (msec) : 50=3.26% 00:39:37.956 cpu : usr=0.39%, sys=0.39%, ctx=645, majf=0, minf=2 00:39:37.956 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:37.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:37.956 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:37.956 issued rwts: total=133,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:37.957 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:37.957 job2: (groupid=0, jobs=1): err= 0: pid=1807885: Sun Oct 13 01:50:23 2024 00:39:37.957 read: IOPS=21, BW=85.3KiB/s (87.3kB/s)(88.0KiB/1032msec) 00:39:37.957 slat (nsec): min=9665, max=36934, avg=24083.68, stdev=11088.46 00:39:37.957 clat (usec): min=40907, max=41050, avg=40968.10, stdev=37.84 00:39:37.957 lat (usec): min=40940, max=41059, avg=40992.18, stdev=33.49 00:39:37.957 clat percentiles (usec): 00:39:37.957 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:39:37.957 | 30.00th=[41157], 40.00th=[41157], 
50.00th=[41157], 60.00th=[41157], 00:39:37.957 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:37.957 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:37.957 | 99.99th=[41157] 00:39:37.957 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:39:37.957 slat (nsec): min=7832, max=48089, avg=10340.64, stdev=3912.94 00:39:37.957 clat (usec): min=160, max=467, avg=239.54, stdev=49.29 00:39:37.957 lat (usec): min=169, max=480, avg=249.88, stdev=49.79 00:39:37.957 clat percentiles (usec): 00:39:37.957 | 1.00th=[ 167], 5.00th=[ 182], 10.00th=[ 190], 20.00th=[ 200], 00:39:37.957 | 30.00th=[ 210], 40.00th=[ 225], 50.00th=[ 237], 60.00th=[ 243], 00:39:37.957 | 70.00th=[ 251], 80.00th=[ 265], 90.00th=[ 289], 95.00th=[ 334], 00:39:37.957 | 99.00th=[ 408], 99.50th=[ 441], 99.90th=[ 469], 99.95th=[ 469], 00:39:37.957 | 99.99th=[ 469] 00:39:37.957 bw ( KiB/s): min= 4096, max= 4096, per=29.51%, avg=4096.00, stdev= 0.00, samples=1 00:39:37.957 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:37.957 lat (usec) : 250=66.48%, 500=29.40% 00:39:37.957 lat (msec) : 50=4.12% 00:39:37.957 cpu : usr=0.58%, sys=0.48%, ctx=534, majf=0, minf=1 00:39:37.957 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:37.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:37.957 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:37.957 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:37.957 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:37.957 job3: (groupid=0, jobs=1): err= 0: pid=1807886: Sun Oct 13 01:50:23 2024 00:39:37.957 read: IOPS=1714, BW=6857KiB/s (7022kB/s)(6864KiB/1001msec) 00:39:37.957 slat (nsec): min=5824, max=52983, avg=12301.60, stdev=6558.70 00:39:37.957 clat (usec): min=216, max=1135, avg=305.24, stdev=91.66 00:39:37.957 lat (usec): min=223, max=1144, avg=317.54, stdev=92.43 00:39:37.957 clat percentiles (usec): 00:39:37.957 | 1.00th=[ 225], 5.00th=[ 233], 10.00th=[ 241], 20.00th=[ 249], 00:39:37.957 | 30.00th=[ 258], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 285], 00:39:37.957 | 70.00th=[ 293], 80.00th=[ 318], 90.00th=[ 441], 95.00th=[ 515], 00:39:37.957 | 99.00th=[ 611], 99.50th=[ 652], 99.90th=[ 1004], 99.95th=[ 1139], 00:39:37.957 | 99.99th=[ 1139] 00:39:37.957 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:39:37.957 slat (nsec): min=6584, max=52579, avg=15204.39, stdev=7587.73 00:39:37.957 clat (usec): min=143, max=398, avg=198.57, stdev=33.88 00:39:37.957 lat (usec): min=152, max=410, avg=213.78, stdev=36.09 00:39:37.957 clat percentiles (usec): 00:39:37.957 | 1.00th=[ 149], 5.00th=[ 157], 10.00th=[ 163], 20.00th=[ 172], 00:39:37.957 | 30.00th=[ 184], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 198], 00:39:37.957 | 70.00th=[ 202], 80.00th=[ 212], 90.00th=[ 245], 95.00th=[ 269], 00:39:37.957 | 99.00th=[ 310], 99.50th=[ 330], 99.90th=[ 396], 99.95th=[ 400], 00:39:37.957 | 99.99th=[ 400] 00:39:37.957 bw ( KiB/s): min= 8192, max= 8192, per=59.03%, avg=8192.00, stdev= 0.00, samples=1 00:39:37.957 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:39:37.957 lat (usec) : 250=59.80%, 500=37.38%, 750=2.66%, 1000=0.11% 00:39:37.957 lat (msec) : 2=0.05% 00:39:37.957 cpu : usr=3.70%, sys=6.80%, ctx=3766, majf=0, minf=1 00:39:37.957 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:37.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:37.957 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:37.957 issued rwts: total=1716,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:37.957 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:37.957 00:39:37.957 Run status group 0 (all jobs): 00:39:37.957 READ: bw=7334KiB/s (7510kB/s), 85.3KiB/s-6857KiB/s (87.3kB/s-7022kB/s), io=7576KiB (7758kB), run=1001-1033msec 00:39:37.957 WRITE: bw=13.6MiB/s (14.2MB/s), 1983KiB/s-8184KiB/s (2030kB/s-8380kB/s), io=14.0MiB (14.7MB), run=1001-1033msec 00:39:37.957 00:39:37.957 Disk stats (read/write): 00:39:37.957 nvme0n1: ios=67/512, merge=0/0, ticks=767/124, in_queue=891, util=88.28% 00:39:37.957 nvme0n2: ios=128/512, merge=0/0, ticks=694/124, in_queue=818, util=84.70% 00:39:37.957 nvme0n3: ios=16/512, merge=0/0, ticks=656/113, in_queue=769, util=88.53% 00:39:37.957 nvme0n4: ios=1583/1536, merge=0/0, ticks=783/279, in_queue=1062, util=99.35% 00:39:37.957 01:50:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:39:38.215 [global] 00:39:38.215 thread=1 00:39:38.215 invalidate=1 00:39:38.215 rw=randwrite 00:39:38.215 time_based=1 00:39:38.215 runtime=1 00:39:38.215 ioengine=libaio 00:39:38.215 direct=1 00:39:38.215 bs=4096 00:39:38.215 iodepth=1 00:39:38.215 norandommap=0 00:39:38.215 numjobs=1 00:39:38.215 00:39:38.215 verify_dump=1 00:39:38.215 verify_backlog=512 00:39:38.215 verify_state_save=0 00:39:38.215 do_verify=1 00:39:38.215 verify=crc32c-intel 00:39:38.215 [job0] 00:39:38.215 filename=/dev/nvme0n1 00:39:38.215 [job1] 00:39:38.215 filename=/dev/nvme0n2 00:39:38.215 [job2] 00:39:38.215 filename=/dev/nvme0n3 00:39:38.215 [job3] 00:39:38.215 filename=/dev/nvme0n4 00:39:38.215 Could not set queue depth (nvme0n1) 00:39:38.215 Could not set queue depth (nvme0n2) 00:39:38.215 Could not set queue depth (nvme0n3) 00:39:38.215 Could not set queue depth (nvme0n4) 00:39:38.215 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:38.215 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:38.215 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:38.215 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:38.215 fio-3.35 00:39:38.215 Starting 4 threads 00:39:39.590 00:39:39.590 job0: (groupid=0, jobs=1): err= 0: pid=1808115: Sun Oct 13 01:50:24 2024 00:39:39.590 read: IOPS=1584, BW=6338KiB/s (6490kB/s)(6344KiB/1001msec) 00:39:39.590 slat (nsec): min=3960, max=37645, avg=9888.85, stdev=4489.65 00:39:39.590 clat (usec): min=216, max=40460, avg=348.76, stdev=1013.18 00:39:39.590 lat (usec): min=222, max=40474, avg=358.65, stdev=1013.39 00:39:39.590 clat percentiles (usec): 00:39:39.590 | 1.00th=[ 227], 5.00th=[ 241], 10.00th=[ 249], 20.00th=[ 262], 00:39:39.590 | 30.00th=[ 277], 40.00th=[ 293], 50.00th=[ 310], 60.00th=[ 326], 00:39:39.590 | 70.00th=[ 363], 80.00th=[ 383], 90.00th=[ 396], 95.00th=[ 412], 00:39:39.590 | 99.00th=[ 519], 99.50th=[ 586], 99.90th=[ 2278], 99.95th=[40633], 00:39:39.590 | 99.99th=[40633] 00:39:39.590 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:39:39.590 slat (nsec): min=6001, max=33531, avg=8462.54, stdev=3431.58 00:39:39.590 clat 
(usec): min=146, max=2077, avg=197.67, stdev=59.77 00:39:39.590 lat (usec): min=153, max=2083, avg=206.13, stdev=60.39 00:39:39.590 clat percentiles (usec): 00:39:39.590 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 161], 20.00th=[ 169], 00:39:39.590 | 30.00th=[ 176], 40.00th=[ 182], 50.00th=[ 190], 60.00th=[ 198], 00:39:39.590 | 70.00th=[ 204], 80.00th=[ 217], 90.00th=[ 235], 95.00th=[ 260], 00:39:39.590 | 99.00th=[ 371], 99.50th=[ 371], 99.90th=[ 709], 99.95th=[ 971], 00:39:39.590 | 99.99th=[ 2073] 00:39:39.590 bw ( KiB/s): min= 8192, max= 8192, per=30.89%, avg=8192.00, stdev= 0.00, samples=1 00:39:39.590 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:39:39.590 lat (usec) : 250=57.84%, 500=41.44%, 750=0.52%, 1000=0.03% 00:39:39.590 lat (msec) : 2=0.03%, 4=0.11%, 50=0.03% 00:39:39.590 cpu : usr=1.90%, sys=3.20%, ctx=3635, majf=0, minf=1 00:39:39.590 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:39.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:39.590 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:39.590 issued rwts: total=1586,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:39.590 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:39.590 job1: (groupid=0, jobs=1): err= 0: pid=1808116: Sun Oct 13 01:50:24 2024 00:39:39.590 read: IOPS=1851, BW=7405KiB/s (7582kB/s)(7412KiB/1001msec) 00:39:39.590 slat (nsec): min=4427, max=38539, avg=7824.35, stdev=3988.12 00:39:39.590 clat (usec): min=220, max=585, avg=293.39, stdev=79.66 00:39:39.590 lat (usec): min=227, max=612, avg=301.22, stdev=81.67 00:39:39.590 clat percentiles (usec): 00:39:39.590 | 1.00th=[ 233], 5.00th=[ 239], 10.00th=[ 243], 20.00th=[ 249], 00:39:39.590 | 30.00th=[ 253], 40.00th=[ 258], 50.00th=[ 262], 60.00th=[ 269], 00:39:39.590 | 70.00th=[ 277], 80.00th=[ 306], 90.00th=[ 465], 95.00th=[ 502], 00:39:39.590 | 99.00th=[ 537], 99.50th=[ 553], 99.90th=[ 578], 99.95th=[ 586], 00:39:39.590 | 99.99th=[ 586] 00:39:39.590 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:39:39.590 slat (nsec): min=6191, max=39881, avg=9134.17, stdev=4281.72 00:39:39.590 clat (usec): min=154, max=474, avg=202.01, stdev=28.67 00:39:39.590 lat (usec): min=160, max=485, avg=211.15, stdev=30.09 00:39:39.590 clat percentiles (usec): 00:39:39.590 | 1.00th=[ 167], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 182], 00:39:39.590 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 200], 00:39:39.590 | 70.00th=[ 206], 80.00th=[ 221], 90.00th=[ 243], 95.00th=[ 253], 00:39:39.590 | 99.00th=[ 293], 99.50th=[ 306], 99.90th=[ 351], 99.95th=[ 396], 00:39:39.590 | 99.99th=[ 474] 00:39:39.590 bw ( KiB/s): min= 8192, max= 8192, per=30.89%, avg=8192.00, stdev= 0.00, samples=1 00:39:39.590 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:39:39.590 lat (usec) : 250=60.01%, 500=37.50%, 750=2.49% 00:39:39.590 cpu : usr=1.80%, sys=3.30%, ctx=3903, majf=0, minf=1 00:39:39.590 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:39.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:39.590 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:39.590 issued rwts: total=1853,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:39.590 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:39.590 job2: (groupid=0, jobs=1): err= 0: pid=1808117: Sun Oct 13 01:50:24 2024 00:39:39.590 read: IOPS=968, BW=3873KiB/s 
(3965kB/s)(3888KiB/1004msec) 00:39:39.590 slat (nsec): min=4767, max=37822, avg=9156.28, stdev=4250.40 00:39:39.590 clat (usec): min=236, max=41242, avg=716.36, stdev=4108.83 00:39:39.590 lat (usec): min=243, max=41256, avg=725.51, stdev=4109.34 00:39:39.590 clat percentiles (usec): 00:39:39.590 | 1.00th=[ 245], 5.00th=[ 251], 10.00th=[ 258], 20.00th=[ 265], 00:39:39.590 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 293], 00:39:39.590 | 70.00th=[ 310], 80.00th=[ 326], 90.00th=[ 388], 95.00th=[ 392], 00:39:39.590 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:39:39.590 | 99.99th=[41157] 00:39:39.590 write: IOPS=1019, BW=4080KiB/s (4178kB/s)(4096KiB/1004msec); 0 zone resets 00:39:39.590 slat (nsec): min=6457, max=47330, avg=11520.41, stdev=5594.28 00:39:39.590 clat (usec): min=166, max=476, avg=274.31, stdev=61.74 00:39:39.590 lat (usec): min=175, max=486, avg=285.83, stdev=60.95 00:39:39.590 clat percentiles (usec): 00:39:39.590 | 1.00th=[ 178], 5.00th=[ 188], 10.00th=[ 196], 20.00th=[ 210], 00:39:39.590 | 30.00th=[ 227], 40.00th=[ 245], 50.00th=[ 277], 60.00th=[ 297], 00:39:39.590 | 70.00th=[ 318], 80.00th=[ 334], 90.00th=[ 355], 95.00th=[ 371], 00:39:39.590 | 99.00th=[ 396], 99.50th=[ 408], 99.90th=[ 457], 99.95th=[ 478], 00:39:39.590 | 99.99th=[ 478] 00:39:39.590 bw ( KiB/s): min= 8192, max= 8192, per=30.89%, avg=8192.00, stdev= 0.00, samples=1 00:39:39.590 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:39:39.590 lat (usec) : 250=23.70%, 500=75.80% 00:39:39.590 lat (msec) : 50=0.50% 00:39:39.590 cpu : usr=0.70%, sys=2.39%, ctx=1997, majf=0, minf=1 00:39:39.590 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:39.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:39.590 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:39.590 issued rwts: total=972,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:39.590 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:39.590 job3: (groupid=0, jobs=1): err= 0: pid=1808118: Sun Oct 13 01:50:24 2024 00:39:39.590 read: IOPS=1051, BW=4207KiB/s (4308kB/s)(4224KiB/1004msec) 00:39:39.590 slat (nsec): min=4500, max=36838, avg=10699.39, stdev=4429.56 00:39:39.590 clat (usec): min=244, max=40999, avg=535.33, stdev=2295.43 00:39:39.590 lat (usec): min=271, max=41014, avg=546.03, stdev=2295.62 00:39:39.590 clat percentiles (usec): 00:39:39.590 | 1.00th=[ 285], 5.00th=[ 302], 10.00th=[ 310], 20.00th=[ 322], 00:39:39.590 | 30.00th=[ 334], 40.00th=[ 355], 50.00th=[ 383], 60.00th=[ 396], 00:39:39.590 | 70.00th=[ 437], 80.00th=[ 486], 90.00th=[ 515], 95.00th=[ 529], 00:39:39.590 | 99.00th=[ 586], 99.50th=[ 906], 99.90th=[41157], 99.95th=[41157], 00:39:39.590 | 99.99th=[41157] 00:39:39.590 write: IOPS=1529, BW=6120KiB/s (6266kB/s)(6144KiB/1004msec); 0 zone resets 00:39:39.590 slat (nsec): min=6378, max=41949, avg=10129.31, stdev=5407.02 00:39:39.590 clat (usec): min=179, max=550, avg=263.18, stdev=52.48 00:39:39.590 lat (usec): min=186, max=560, avg=273.31, stdev=52.40 00:39:39.590 clat percentiles (usec): 00:39:39.590 | 1.00th=[ 194], 5.00th=[ 204], 10.00th=[ 212], 20.00th=[ 223], 00:39:39.590 | 30.00th=[ 231], 40.00th=[ 237], 50.00th=[ 243], 60.00th=[ 251], 00:39:39.590 | 70.00th=[ 281], 80.00th=[ 322], 90.00th=[ 347], 95.00th=[ 359], 00:39:39.590 | 99.00th=[ 383], 99.50th=[ 400], 99.90th=[ 494], 99.95th=[ 553], 00:39:39.590 | 99.99th=[ 553] 00:39:39.590 bw ( KiB/s): min= 4096, max= 8192, per=23.17%, 
avg=6144.00, stdev=2896.31, samples=2 00:39:39.590 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:39:39.590 lat (usec) : 250=35.34%, 500=58.68%, 750=5.63%, 1000=0.15% 00:39:39.590 lat (msec) : 2=0.04%, 50=0.15% 00:39:39.590 cpu : usr=1.69%, sys=2.59%, ctx=2593, majf=0, minf=1 00:39:39.590 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:39.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:39.590 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:39.590 issued rwts: total=1056,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:39.590 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:39.590 00:39:39.590 Run status group 0 (all jobs): 00:39:39.590 READ: bw=21.3MiB/s (22.3MB/s), 3873KiB/s-7405KiB/s (3965kB/s-7582kB/s), io=21.4MiB (22.4MB), run=1001-1004msec 00:39:39.590 WRITE: bw=25.9MiB/s (27.2MB/s), 4080KiB/s-8184KiB/s (4178kB/s-8380kB/s), io=26.0MiB (27.3MB), run=1001-1004msec 00:39:39.590 00:39:39.590 Disk stats (read/write): 00:39:39.590 nvme0n1: ios=1433/1536, merge=0/0, ticks=866/305, in_queue=1171, util=98.40% 00:39:39.590 nvme0n2: ios=1578/1716, merge=0/0, ticks=1419/338, in_queue=1757, util=97.97% 00:39:39.590 nvme0n3: ios=1010/1024, merge=0/0, ticks=1286/277, in_queue=1563, util=98.13% 00:39:39.590 nvme0n4: ios=1080/1536, merge=0/0, ticks=1360/405, in_queue=1765, util=98.32% 00:39:39.590 01:50:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:39:39.590 [global] 00:39:39.590 thread=1 00:39:39.591 invalidate=1 00:39:39.591 rw=write 00:39:39.591 time_based=1 00:39:39.591 runtime=1 00:39:39.591 ioengine=libaio 00:39:39.591 direct=1 00:39:39.591 bs=4096 00:39:39.591 iodepth=128 00:39:39.591 norandommap=0 00:39:39.591 numjobs=1 00:39:39.591 00:39:39.591 verify_dump=1 00:39:39.591 verify_backlog=512 00:39:39.591 verify_state_save=0 00:39:39.591 do_verify=1 00:39:39.591 verify=crc32c-intel 00:39:39.591 [job0] 00:39:39.591 filename=/dev/nvme0n1 00:39:39.591 [job1] 00:39:39.591 filename=/dev/nvme0n2 00:39:39.591 [job2] 00:39:39.591 filename=/dev/nvme0n3 00:39:39.591 [job3] 00:39:39.591 filename=/dev/nvme0n4 00:39:39.591 Could not set queue depth (nvme0n1) 00:39:39.591 Could not set queue depth (nvme0n2) 00:39:39.591 Could not set queue depth (nvme0n3) 00:39:39.591 Could not set queue depth (nvme0n4) 00:39:39.849 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:39.849 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:39.849 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:39.849 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:39.849 fio-3.35 00:39:39.849 Starting 4 threads 00:39:41.224 00:39:41.224 job0: (groupid=0, jobs=1): err= 0: pid=1808455: Sun Oct 13 01:50:26 2024 00:39:41.224 read: IOPS=2029, BW=8119KiB/s (8314kB/s)(8192KiB/1009msec) 00:39:41.224 slat (usec): min=2, max=23279, avg=236.14, stdev=1711.90 00:39:41.224 clat (usec): min=9161, max=59488, avg=31067.78, stdev=11077.24 00:39:41.224 lat (usec): min=9171, max=59499, avg=31303.93, stdev=11170.54 00:39:41.224 clat percentiles (usec): 00:39:41.224 | 1.00th=[15664], 5.00th=[16909], 10.00th=[18482], 20.00th=[22152], 
00:39:41.224 | 30.00th=[23462], 40.00th=[24249], 50.00th=[27919], 60.00th=[32637], 00:39:41.224 | 70.00th=[38011], 80.00th=[41681], 90.00th=[48497], 95.00th=[52167], 00:39:41.224 | 99.00th=[57410], 99.50th=[59507], 99.90th=[59507], 99.95th=[59507], 00:39:41.224 | 99.99th=[59507] 00:39:41.224 write: IOPS=2249, BW=8999KiB/s (9215kB/s)(9080KiB/1009msec); 0 zone resets 00:39:41.224 slat (usec): min=3, max=23603, avg=205.82, stdev=1615.64 00:39:41.224 clat (usec): min=5084, max=69709, avg=28278.98, stdev=9191.61 00:39:41.224 lat (usec): min=7116, max=69752, avg=28484.80, stdev=9343.56 00:39:41.224 clat percentiles (usec): 00:39:41.224 | 1.00th=[11338], 5.00th=[17695], 10.00th=[19268], 20.00th=[20841], 00:39:41.224 | 30.00th=[22938], 40.00th=[25560], 50.00th=[26084], 60.00th=[27132], 00:39:41.224 | 70.00th=[27657], 80.00th=[36963], 90.00th=[43254], 95.00th=[46924], 00:39:41.224 | 99.00th=[59507], 99.50th=[59507], 99.90th=[64226], 99.95th=[65799], 00:39:41.224 | 99.99th=[69731] 00:39:41.224 bw ( KiB/s): min= 5560, max=11584, per=13.89%, avg=8572.00, stdev=4259.61, samples=2 00:39:41.224 iops : min= 1390, max= 2896, avg=2143.00, stdev=1064.90, samples=2 00:39:41.224 lat (msec) : 10=0.63%, 20=12.39%, 50=82.52%, 100=4.47% 00:39:41.224 cpu : usr=2.38%, sys=2.88%, ctx=158, majf=0, minf=1 00:39:41.224 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:39:41.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:41.224 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:41.224 issued rwts: total=2048,2270,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:41.224 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:41.224 job1: (groupid=0, jobs=1): err= 0: pid=1808463: Sun Oct 13 01:50:26 2024 00:39:41.224 read: IOPS=2938, BW=11.5MiB/s (12.0MB/s)(11.6MiB/1009msec) 00:39:41.224 slat (usec): min=2, max=22370, avg=167.17, stdev=1264.45 00:39:41.224 clat (usec): min=3845, max=48665, avg=22615.51, stdev=8035.73 00:39:41.224 lat (usec): min=9304, max=48677, avg=22782.68, stdev=8101.47 00:39:41.224 clat percentiles (usec): 00:39:41.224 | 1.00th=[10290], 5.00th=[10945], 10.00th=[14353], 20.00th=[16581], 00:39:41.224 | 30.00th=[17433], 40.00th=[19268], 50.00th=[21365], 60.00th=[23462], 00:39:41.224 | 70.00th=[24511], 80.00th=[26608], 90.00th=[36439], 95.00th=[41157], 00:39:41.224 | 99.00th=[43254], 99.50th=[46400], 99.90th=[48497], 99.95th=[48497], 00:39:41.224 | 99.99th=[48497] 00:39:41.224 write: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1009msec); 0 zone resets 00:39:41.224 slat (usec): min=3, max=21199, avg=152.64, stdev=1126.58 00:39:41.224 clat (usec): min=704, max=48683, avg=19803.06, stdev=6001.42 00:39:41.224 lat (usec): min=718, max=48719, avg=19955.70, stdev=6089.77 00:39:41.224 clat percentiles (usec): 00:39:41.224 | 1.00th=[10028], 5.00th=[10945], 10.00th=[11994], 20.00th=[14091], 00:39:41.224 | 30.00th=[16057], 40.00th=[17695], 50.00th=[19268], 60.00th=[21103], 00:39:41.224 | 70.00th=[23987], 80.00th=[26084], 90.00th=[27657], 95.00th=[28967], 00:39:41.224 | 99.00th=[32900], 99.50th=[32900], 99.90th=[45351], 99.95th=[48497], 00:39:41.224 | 99.99th=[48497] 00:39:41.224 bw ( KiB/s): min=10688, max=13888, per=19.91%, avg=12288.00, stdev=2262.74, samples=2 00:39:41.224 iops : min= 2672, max= 3472, avg=3072.00, stdev=565.69, samples=2 00:39:41.224 lat (usec) : 750=0.03% 00:39:41.224 lat (msec) : 4=0.02%, 10=0.65%, 20=47.32%, 50=51.98% 00:39:41.224 cpu : usr=4.07%, sys=5.75%, ctx=182, majf=0, minf=1 00:39:41.224 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:39:41.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:41.224 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:41.224 issued rwts: total=2965,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:41.224 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:41.224 job2: (groupid=0, jobs=1): err= 0: pid=1808464: Sun Oct 13 01:50:26 2024 00:39:41.224 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:39:41.224 slat (usec): min=3, max=3795, avg=93.92, stdev=447.02 00:39:41.224 clat (usec): min=9330, max=16618, avg=12498.29, stdev=1113.84 00:39:41.224 lat (usec): min=9338, max=16765, avg=12592.22, stdev=1141.90 00:39:41.224 clat percentiles (usec): 00:39:41.224 | 1.00th=[ 9765], 5.00th=[10552], 10.00th=[11076], 20.00th=[11600], 00:39:41.224 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12518], 60.00th=[12649], 00:39:41.224 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13960], 95.00th=[14484], 00:39:41.224 | 99.00th=[15533], 99.50th=[15926], 99.90th=[16319], 99.95th=[16450], 00:39:41.224 | 99.99th=[16581] 00:39:41.225 write: IOPS=5181, BW=20.2MiB/s (21.2MB/s)(20.3MiB/1004msec); 0 zone resets 00:39:41.225 slat (usec): min=4, max=3981, avg=87.05, stdev=382.05 00:39:41.225 clat (usec): min=602, max=25159, avg=12195.15, stdev=2283.51 00:39:41.225 lat (usec): min=654, max=25167, avg=12282.20, stdev=2288.48 00:39:41.225 clat percentiles (usec): 00:39:41.225 | 1.00th=[ 3195], 5.00th=[ 9110], 10.00th=[ 9896], 20.00th=[11731], 00:39:41.225 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12518], 60.00th=[12649], 00:39:41.225 | 70.00th=[12911], 80.00th=[13042], 90.00th=[13698], 95.00th=[14615], 00:39:41.225 | 99.00th=[20055], 99.50th=[21890], 99.90th=[25035], 99.95th=[25035], 00:39:41.225 | 99.99th=[25035] 00:39:41.225 bw ( KiB/s): min=20208, max=20752, per=33.18%, avg=20480.00, stdev=384.67, samples=2 00:39:41.225 iops : min= 5052, max= 5188, avg=5120.00, stdev=96.17, samples=2 00:39:41.225 lat (usec) : 750=0.04%, 1000=0.04% 00:39:41.225 lat (msec) : 2=0.16%, 4=0.42%, 10=5.67%, 20=93.20%, 50=0.48% 00:39:41.225 cpu : usr=7.98%, sys=13.16%, ctx=581, majf=0, minf=1 00:39:41.225 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:39:41.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:41.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:41.225 issued rwts: total=5120,5202,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:41.225 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:41.225 job3: (groupid=0, jobs=1): err= 0: pid=1808465: Sun Oct 13 01:50:26 2024 00:39:41.225 read: IOPS=4624, BW=18.1MiB/s (18.9MB/s)(18.3MiB/1015msec) 00:39:41.225 slat (usec): min=2, max=11651, avg=98.43, stdev=703.88 00:39:41.225 clat (usec): min=4485, max=26340, avg=13484.27, stdev=3249.52 00:39:41.225 lat (usec): min=4496, max=26345, avg=13582.70, stdev=3273.86 00:39:41.225 clat percentiles (usec): 00:39:41.225 | 1.00th=[ 6194], 5.00th=[ 8455], 10.00th=[10159], 20.00th=[11076], 00:39:41.225 | 30.00th=[11994], 40.00th=[12649], 50.00th=[13042], 60.00th=[13304], 00:39:41.225 | 70.00th=[13960], 80.00th=[15664], 90.00th=[18482], 95.00th=[19792], 00:39:41.225 | 99.00th=[22676], 99.50th=[23462], 99.90th=[24773], 99.95th=[24773], 00:39:41.225 | 99.99th=[26346] 00:39:41.225 write: IOPS=5044, BW=19.7MiB/s (20.7MB/s)(20.0MiB/1015msec); 0 zone resets 00:39:41.225 slat (usec): min=4, max=10883, avg=94.08, 
stdev=666.44 00:39:41.225 clat (usec): min=1359, max=28111, avg=12797.24, stdev=3068.27 00:39:41.225 lat (usec): min=1374, max=28121, avg=12891.32, stdev=3103.40 00:39:41.225 clat percentiles (usec): 00:39:41.225 | 1.00th=[ 6194], 5.00th=[ 7898], 10.00th=[ 8586], 20.00th=[10290], 00:39:41.225 | 30.00th=[11994], 40.00th=[12649], 50.00th=[12911], 60.00th=[13304], 00:39:41.225 | 70.00th=[13960], 80.00th=[14091], 90.00th=[16909], 95.00th=[17695], 00:39:41.225 | 99.00th=[21890], 99.50th=[25297], 99.90th=[27919], 99.95th=[28181], 00:39:41.225 | 99.99th=[28181] 00:39:41.225 bw ( KiB/s): min=20152, max=20480, per=32.91%, avg=20316.00, stdev=231.93, samples=2 00:39:41.225 iops : min= 5038, max= 5120, avg=5079.00, stdev=57.98, samples=2 00:39:41.225 lat (msec) : 2=0.03%, 4=0.06%, 10=13.51%, 20=83.98%, 50=2.41% 00:39:41.225 cpu : usr=7.10%, sys=11.54%, ctx=333, majf=0, minf=1 00:39:41.225 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:39:41.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:41.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:41.225 issued rwts: total=4694,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:41.225 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:41.225 00:39:41.225 Run status group 0 (all jobs): 00:39:41.225 READ: bw=57.1MiB/s (59.8MB/s), 8119KiB/s-19.9MiB/s (8314kB/s-20.9MB/s), io=57.9MiB (60.7MB), run=1004-1015msec 00:39:41.225 WRITE: bw=60.3MiB/s (63.2MB/s), 8999KiB/s-20.2MiB/s (9215kB/s-21.2MB/s), io=61.2MiB (64.2MB), run=1004-1015msec 00:39:41.225 00:39:41.225 Disk stats (read/write): 00:39:41.225 nvme0n1: ios=1585/2048, merge=0/0, ticks=35302/40164, in_queue=75466, util=97.19% 00:39:41.225 nvme0n2: ios=2408/2560, merge=0/0, ticks=48355/47099, in_queue=95454, util=97.86% 00:39:41.225 nvme0n3: ios=4126/4608, merge=0/0, ticks=16241/19425, in_queue=35666, util=89.02% 00:39:41.225 nvme0n4: ios=4096/4191, merge=0/0, ticks=51734/50212, in_queue=101946, util=89.67% 00:39:41.225 01:50:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:39:41.225 [global] 00:39:41.225 thread=1 00:39:41.225 invalidate=1 00:39:41.225 rw=randwrite 00:39:41.225 time_based=1 00:39:41.225 runtime=1 00:39:41.225 ioengine=libaio 00:39:41.225 direct=1 00:39:41.225 bs=4096 00:39:41.225 iodepth=128 00:39:41.225 norandommap=0 00:39:41.225 numjobs=1 00:39:41.225 00:39:41.225 verify_dump=1 00:39:41.225 verify_backlog=512 00:39:41.225 verify_state_save=0 00:39:41.225 do_verify=1 00:39:41.225 verify=crc32c-intel 00:39:41.225 [job0] 00:39:41.225 filename=/dev/nvme0n1 00:39:41.225 [job1] 00:39:41.225 filename=/dev/nvme0n2 00:39:41.225 [job2] 00:39:41.225 filename=/dev/nvme0n3 00:39:41.225 [job3] 00:39:41.225 filename=/dev/nvme0n4 00:39:41.225 Could not set queue depth (nvme0n1) 00:39:41.225 Could not set queue depth (nvme0n2) 00:39:41.225 Could not set queue depth (nvme0n3) 00:39:41.225 Could not set queue depth (nvme0n4) 00:39:41.225 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:41.225 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:41.225 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:41.225 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:41.225 fio-3.35 00:39:41.225 Starting 4 threads 00:39:42.601 00:39:42.602 job0: (groupid=0, jobs=1): err= 0: pid=1808689: Sun Oct 13 01:50:27 2024 00:39:42.602 read: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec) 00:39:42.602 slat (usec): min=3, max=5081, avg=83.72, stdev=462.69 00:39:42.602 clat (usec): min=7112, max=19735, avg=11021.13, stdev=1638.06 00:39:42.602 lat (usec): min=7121, max=19740, avg=11104.85, stdev=1661.61 00:39:42.602 clat percentiles (usec): 00:39:42.602 | 1.00th=[ 7767], 5.00th=[ 8586], 10.00th=[ 8979], 20.00th=[ 9503], 00:39:42.602 | 30.00th=[10290], 40.00th=[10683], 50.00th=[10814], 60.00th=[11076], 00:39:42.602 | 70.00th=[11600], 80.00th=[12518], 90.00th=[13435], 95.00th=[13698], 00:39:42.602 | 99.00th=[15139], 99.50th=[15664], 99.90th=[17695], 99.95th=[17957], 00:39:42.602 | 99.99th=[19792] 00:39:42.602 write: IOPS=5841, BW=22.8MiB/s (23.9MB/s)(22.9MiB/1005msec); 0 zone resets 00:39:42.602 slat (usec): min=4, max=5292, avg=82.52, stdev=466.49 00:39:42.602 clat (usec): min=4709, max=16273, avg=11080.09, stdev=1255.44 00:39:42.602 lat (usec): min=5373, max=16481, avg=11162.62, stdev=1302.66 00:39:42.602 clat percentiles (usec): 00:39:42.602 | 1.00th=[ 6718], 5.00th=[ 9241], 10.00th=[10028], 20.00th=[10552], 00:39:42.602 | 30.00th=[10683], 40.00th=[10814], 50.00th=[11076], 60.00th=[11207], 00:39:42.602 | 70.00th=[11469], 80.00th=[11731], 90.00th=[11863], 95.00th=[13042], 00:39:42.602 | 99.00th=[15533], 99.50th=[15926], 99.90th=[16188], 99.95th=[16188], 00:39:42.602 | 99.99th=[16319] 00:39:42.602 bw ( KiB/s): min=21456, max=24496, per=34.89%, avg=22976.00, stdev=2149.60, samples=2 00:39:42.602 iops : min= 5364, max= 6124, avg=5744.00, stdev=537.40, samples=2 00:39:42.602 lat (msec) : 10=17.82%, 20=82.18% 00:39:42.602 cpu : usr=5.48%, sys=8.76%, ctx=535, majf=0, minf=2 00:39:42.602 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:39:42.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:42.602 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:42.602 issued rwts: total=5632,5871,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:42.602 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:42.602 job1: (groupid=0, jobs=1): err= 0: pid=1808690: Sun Oct 13 01:50:27 2024 00:39:42.602 read: IOPS=2411, BW=9647KiB/s (9878kB/s)(9724KiB/1008msec) 00:39:42.602 slat (usec): min=2, max=22492, avg=205.19, stdev=1558.34 00:39:42.602 clat (msec): min=5, max=153, avg=24.71, stdev=21.80 00:39:42.602 lat (msec): min=7, max=153, avg=24.92, stdev=21.98 00:39:42.602 clat percentiles (msec): 00:39:42.602 | 1.00th=[ 8], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 12], 00:39:42.602 | 30.00th=[ 14], 40.00th=[ 16], 50.00th=[ 18], 60.00th=[ 21], 00:39:42.602 | 70.00th=[ 25], 80.00th=[ 34], 90.00th=[ 51], 95.00th=[ 66], 00:39:42.602 | 99.00th=[ 133], 99.50th=[ 142], 99.90th=[ 155], 99.95th=[ 155], 00:39:42.602 | 99.99th=[ 155] 00:39:42.602 write: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(10.0MiB/1008msec); 0 zone resets 00:39:42.602 slat (usec): min=4, max=31973, avg=186.52, stdev=1492.85 00:39:42.602 clat (msec): min=5, max=162, avg=26.38, stdev=30.67 00:39:42.602 lat (msec): min=5, max=162, avg=26.57, stdev=30.86 00:39:42.602 clat percentiles (msec): 00:39:42.602 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 11], 00:39:42.602 | 30.00th=[ 13], 40.00th=[ 17], 50.00th=[ 19], 60.00th=[ 22], 00:39:42.602 | 70.00th=[ 24], 80.00th=[ 26], 90.00th=[ 37], 
95.00th=[ 123], 00:39:42.602 | 99.00th=[ 159], 99.50th=[ 159], 99.90th=[ 163], 99.95th=[ 163], 00:39:42.602 | 99.99th=[ 163] 00:39:42.602 bw ( KiB/s): min= 5464, max=15016, per=15.55%, avg=10240.00, stdev=6754.28, samples=2 00:39:42.602 iops : min= 1366, max= 3754, avg=2560.00, stdev=1688.57, samples=2 00:39:42.602 lat (msec) : 10=14.95%, 20=41.68%, 50=34.58%, 100=4.65%, 250=4.15% 00:39:42.602 cpu : usr=2.78%, sys=2.68%, ctx=157, majf=0, minf=1 00:39:42.602 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:39:42.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:42.602 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:42.602 issued rwts: total=2431,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:42.602 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:42.602 job2: (groupid=0, jobs=1): err= 0: pid=1808691: Sun Oct 13 01:50:27 2024 00:39:42.602 read: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec) 00:39:42.602 slat (usec): min=3, max=21583, avg=151.50, stdev=1271.56 00:39:42.602 clat (usec): min=9074, max=48301, avg=20241.61, stdev=8137.69 00:39:42.602 lat (usec): min=9079, max=48338, avg=20393.11, stdev=8203.06 00:39:42.602 clat percentiles (usec): 00:39:42.602 | 1.00th=[ 9110], 5.00th=[10159], 10.00th=[10552], 20.00th=[12125], 00:39:42.602 | 30.00th=[14091], 40.00th=[16319], 50.00th=[19268], 60.00th=[22152], 00:39:42.602 | 70.00th=[26084], 80.00th=[26870], 90.00th=[30802], 95.00th=[35390], 00:39:42.602 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:39:42.602 | 99.99th=[48497] 00:39:42.602 write: IOPS=3020, BW=11.8MiB/s (12.4MB/s)(11.9MiB/1007msec); 0 zone resets 00:39:42.602 slat (usec): min=4, max=20714, avg=194.35, stdev=1341.86 00:39:42.602 clat (msec): min=6, max=132, avg=24.96, stdev=24.98 00:39:42.602 lat (msec): min=6, max=132, avg=25.16, stdev=25.16 00:39:42.602 clat percentiles (msec): 00:39:42.602 | 1.00th=[ 7], 5.00th=[ 9], 10.00th=[ 11], 20.00th=[ 13], 00:39:42.602 | 30.00th=[ 14], 40.00th=[ 16], 50.00th=[ 19], 60.00th=[ 21], 00:39:42.602 | 70.00th=[ 23], 80.00th=[ 25], 90.00th=[ 37], 95.00th=[ 93], 00:39:42.602 | 99.00th=[ 128], 99.50th=[ 129], 99.90th=[ 133], 99.95th=[ 133], 00:39:42.602 | 99.99th=[ 133] 00:39:42.602 bw ( KiB/s): min= 8192, max=15128, per=17.71%, avg=11660.00, stdev=4904.49, samples=2 00:39:42.602 iops : min= 2048, max= 3782, avg=2915.00, stdev=1226.12, samples=2 00:39:42.602 lat (msec) : 10=6.05%, 20=49.80%, 50=39.45%, 100=2.14%, 250=2.55% 00:39:42.602 cpu : usr=2.68%, sys=4.08%, ctx=159, majf=0, minf=2 00:39:42.602 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:39:42.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:42.602 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:42.602 issued rwts: total=2560,3042,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:42.602 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:42.602 job3: (groupid=0, jobs=1): err= 0: pid=1808692: Sun Oct 13 01:50:27 2024 00:39:42.602 read: IOPS=4654, BW=18.2MiB/s (19.1MB/s)(18.2MiB/1003msec) 00:39:42.602 slat (usec): min=3, max=6264, avg=102.24, stdev=530.18 00:39:42.602 clat (usec): min=1697, max=20380, avg=12812.40, stdev=1776.20 00:39:42.602 lat (usec): min=6864, max=20386, avg=12914.64, stdev=1819.44 00:39:42.602 clat percentiles (usec): 00:39:42.602 | 1.00th=[ 7767], 5.00th=[ 9634], 10.00th=[10814], 20.00th=[11994], 00:39:42.602 | 30.00th=[12387], 
40.00th=[12518], 50.00th=[12780], 60.00th=[12911], 00:39:42.602 | 70.00th=[13173], 80.00th=[13566], 90.00th=[15139], 95.00th=[16188], 00:39:42.602 | 99.00th=[17695], 99.50th=[18220], 99.90th=[20317], 99.95th=[20317], 00:39:42.602 | 99.99th=[20317] 00:39:42.602 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:39:42.602 slat (usec): min=4, max=6131, avg=94.50, stdev=447.77 00:39:42.602 clat (usec): min=6973, max=20390, avg=13065.24, stdev=1468.27 00:39:42.602 lat (usec): min=6983, max=21189, avg=13159.73, stdev=1502.07 00:39:42.602 clat percentiles (usec): 00:39:42.602 | 1.00th=[ 8160], 5.00th=[11207], 10.00th=[11994], 20.00th=[12387], 00:39:42.602 | 30.00th=[12518], 40.00th=[12780], 50.00th=[13042], 60.00th=[13173], 00:39:42.602 | 70.00th=[13435], 80.00th=[13566], 90.00th=[13829], 95.00th=[16188], 00:39:42.602 | 99.00th=[18482], 99.50th=[18744], 99.90th=[20317], 99.95th=[20317], 00:39:42.602 | 99.99th=[20317] 00:39:42.602 bw ( KiB/s): min=19944, max=20480, per=30.70%, avg=20212.00, stdev=379.01, samples=2 00:39:42.602 iops : min= 4986, max= 5120, avg=5053.00, stdev=94.75, samples=2 00:39:42.602 lat (msec) : 2=0.01%, 10=4.52%, 20=95.32%, 50=0.15% 00:39:42.602 cpu : usr=5.49%, sys=7.68%, ctx=579, majf=0, minf=1 00:39:42.602 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:39:42.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:42.602 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:42.602 issued rwts: total=4668,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:42.602 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:42.602 00:39:42.602 Run status group 0 (all jobs): 00:39:42.602 READ: bw=59.3MiB/s (62.1MB/s), 9647KiB/s-21.9MiB/s (9878kB/s-23.0MB/s), io=59.7MiB (62.6MB), run=1003-1008msec 00:39:42.603 WRITE: bw=64.3MiB/s (67.4MB/s), 9.92MiB/s-22.8MiB/s (10.4MB/s-23.9MB/s), io=64.8MiB (68.0MB), run=1003-1008msec 00:39:42.603 00:39:42.603 Disk stats (read/write): 00:39:42.603 nvme0n1: ios=4631/5119, merge=0/0, ticks=25911/26678, in_queue=52589, util=98.40% 00:39:42.603 nvme0n2: ios=2089/2256, merge=0/0, ticks=45589/59620, in_queue=105209, util=98.98% 00:39:42.603 nvme0n3: ios=2048/2490, merge=0/0, ticks=38483/64696, in_queue=103179, util=89.06% 00:39:42.603 nvme0n4: ios=4140/4156, merge=0/0, ticks=26707/25769, in_queue=52476, util=98.95% 00:39:42.603 01:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:39:42.603 01:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1808832 00:39:42.603 01:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:39:42.603 01:50:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:39:42.603 [global] 00:39:42.603 thread=1 00:39:42.603 invalidate=1 00:39:42.603 rw=read 00:39:42.603 time_based=1 00:39:42.603 runtime=10 00:39:42.603 ioengine=libaio 00:39:42.603 direct=1 00:39:42.603 bs=4096 00:39:42.603 iodepth=1 00:39:42.603 norandommap=1 00:39:42.603 numjobs=1 00:39:42.603 00:39:42.603 [job0] 00:39:42.603 filename=/dev/nvme0n1 00:39:42.603 [job1] 00:39:42.603 filename=/dev/nvme0n2 00:39:42.603 [job2] 00:39:42.603 filename=/dev/nvme0n3 00:39:42.603 [job3] 00:39:42.603 filename=/dev/nvme0n4 00:39:42.603 Could not set queue depth (nvme0n1) 00:39:42.603 Could not set queue 
depth (nvme0n2) 00:39:42.603 Could not set queue depth (nvme0n3) 00:39:42.603 Could not set queue depth (nvme0n4) 00:39:42.603 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:42.603 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:42.603 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:42.603 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:42.603 fio-3.35 00:39:42.603 Starting 4 threads 00:39:45.882 01:50:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:39:45.882 01:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:39:45.882 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=299008, buflen=4096 00:39:45.882 fio: pid=1808927, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:45.882 01:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:45.882 01:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:39:46.140 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=843776, buflen=4096 00:39:46.140 fio: pid=1808926, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:46.398 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=11845632, buflen=4096 00:39:46.398 fio: pid=1808922, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:46.398 01:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:46.398 01:50:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:39:46.657 01:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:46.657 01:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:39:46.657 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=1024000, buflen=4096 00:39:46.657 fio: pid=1808923, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:46.657 00:39:46.657 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1808922: Sun Oct 13 01:50:32 2024 00:39:46.657 read: IOPS=824, BW=3299KiB/s (3378kB/s)(11.3MiB/3507msec) 00:39:46.657 slat (usec): min=4, max=21852, avg=23.19, stdev=494.05 00:39:46.657 clat (usec): min=199, max=42310, avg=1179.88, stdev=6192.64 00:39:46.657 lat (usec): min=204, max=42342, avg=1203.08, stdev=6212.17 00:39:46.657 clat percentiles (usec): 00:39:46.657 | 1.00th=[ 204], 5.00th=[ 208], 10.00th=[ 210], 20.00th=[ 212], 
00:39:46.657 | 30.00th=[ 215], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 223], 00:39:46.657 | 70.00th=[ 227], 80.00th=[ 231], 90.00th=[ 243], 95.00th=[ 306], 00:39:46.657 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:46.657 | 99.99th=[42206] 00:39:46.657 bw ( KiB/s): min= 96, max= 2432, per=31.53%, avg=1130.67, stdev=1147.94, samples=6 00:39:46.657 iops : min= 24, max= 608, avg=282.67, stdev=286.99, samples=6 00:39:46.657 lat (usec) : 250=92.02%, 500=5.53%, 750=0.07% 00:39:46.657 lat (msec) : 2=0.03%, 50=2.32% 00:39:46.657 cpu : usr=0.11%, sys=0.74%, ctx=2897, majf=0, minf=1 00:39:46.657 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:46.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:46.657 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:46.657 issued rwts: total=2893,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:46.657 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:46.657 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1808923: Sun Oct 13 01:50:32 2024 00:39:46.657 read: IOPS=65, BW=262KiB/s (268kB/s)(1000KiB/3818msec) 00:39:46.657 slat (usec): min=5, max=5943, avg=64.33, stdev=528.83 00:39:46.657 clat (usec): min=210, max=68122, avg=15108.58, stdev=19835.79 00:39:46.657 lat (usec): min=225, max=68134, avg=15169.88, stdev=19911.20 00:39:46.657 clat percentiles (usec): 00:39:46.657 | 1.00th=[ 223], 5.00th=[ 231], 10.00th=[ 239], 20.00th=[ 251], 00:39:46.657 | 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 306], 60.00th=[ 375], 00:39:46.657 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:46.657 | 99.00th=[42206], 99.50th=[44827], 99.90th=[67634], 99.95th=[67634], 00:39:46.657 | 99.99th=[67634] 00:39:46.657 bw ( KiB/s): min= 96, max= 1024, per=7.73%, avg=277.14, stdev=350.99, samples=7 00:39:46.657 iops : min= 24, max= 256, avg=69.29, stdev=87.75, samples=7 00:39:46.657 lat (usec) : 250=19.52%, 500=43.82% 00:39:46.657 lat (msec) : 10=0.40%, 50=35.46%, 100=0.40% 00:39:46.657 cpu : usr=0.00%, sys=0.18%, ctx=258, majf=0, minf=2 00:39:46.657 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:46.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:46.657 complete : 0=0.4%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:46.657 issued rwts: total=251,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:46.657 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:46.657 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1808926: Sun Oct 13 01:50:32 2024 00:39:46.657 read: IOPS=63, BW=254KiB/s (260kB/s)(824KiB/3244msec) 00:39:46.657 slat (usec): min=5, max=12896, avg=75.29, stdev=895.47 00:39:46.657 clat (usec): min=228, max=41475, avg=15558.44, stdev=19677.91 00:39:46.657 lat (usec): min=235, max=53941, avg=15634.02, stdev=19785.70 00:39:46.657 clat percentiles (usec): 00:39:46.657 | 1.00th=[ 235], 5.00th=[ 265], 10.00th=[ 277], 20.00th=[ 281], 00:39:46.657 | 30.00th=[ 289], 40.00th=[ 306], 50.00th=[ 338], 60.00th=[ 437], 00:39:46.657 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:46.657 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:39:46.657 | 99.99th=[41681] 00:39:46.657 bw ( KiB/s): min= 96, max= 1048, per=7.42%, avg=266.67, stdev=382.85, samples=6 00:39:46.657 iops : min= 24, max= 262, avg=66.67, 
stdev=95.71, samples=6 00:39:46.657 lat (usec) : 250=2.90%, 500=58.94% 00:39:46.657 lat (msec) : 20=0.48%, 50=37.20% 00:39:46.657 cpu : usr=0.15%, sys=0.00%, ctx=209, majf=0, minf=2 00:39:46.657 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:46.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:46.657 complete : 0=0.5%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:46.657 issued rwts: total=207,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:46.657 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:46.657 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1808927: Sun Oct 13 01:50:32 2024 00:39:46.657 read: IOPS=25, BW=99.5KiB/s (102kB/s)(292KiB/2936msec) 00:39:46.657 slat (nsec): min=9329, max=45519, avg=20266.49, stdev=9076.42 00:39:46.657 clat (usec): min=390, max=41628, avg=39870.45, stdev=6666.91 00:39:46.657 lat (usec): min=408, max=41638, avg=39890.82, stdev=6667.30 00:39:46.657 clat percentiles (usec): 00:39:46.657 | 1.00th=[ 392], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:39:46.657 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:46.657 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:46.657 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:39:46.657 | 99.99th=[41681] 00:39:46.657 bw ( KiB/s): min= 96, max= 112, per=2.76%, avg=99.20, stdev= 7.16, samples=5 00:39:46.657 iops : min= 24, max= 28, avg=24.80, stdev= 1.79, samples=5 00:39:46.657 lat (usec) : 500=2.70% 00:39:46.657 lat (msec) : 50=95.95% 00:39:46.657 cpu : usr=0.07%, sys=0.00%, ctx=75, majf=0, minf=1 00:39:46.657 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:46.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:46.657 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:46.657 issued rwts: total=74,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:46.657 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:46.657 00:39:46.657 Run status group 0 (all jobs): 00:39:46.657 READ: bw=3584KiB/s (3670kB/s), 99.5KiB/s-3299KiB/s (102kB/s-3378kB/s), io=13.4MiB (14.0MB), run=2936-3818msec 00:39:46.657 00:39:46.657 Disk stats (read/write): 00:39:46.657 nvme0n1: ios=2412/0, merge=0/0, ticks=3339/0, in_queue=3339, util=95.62% 00:39:46.657 nvme0n2: ios=285/0, merge=0/0, ticks=3755/0, in_queue=3755, util=99.06% 00:39:46.657 nvme0n3: ios=249/0, merge=0/0, ticks=4254/0, in_queue=4254, util=99.10% 00:39:46.657 nvme0n4: ios=71/0, merge=0/0, ticks=2831/0, in_queue=2831, util=96.75% 00:39:46.915 01:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:46.915 01:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:39:47.173 01:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:47.173 01:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:39:47.432 01:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for 
malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:47.432 01:50:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:39:47.689 01:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:47.689 01:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:39:47.947 01:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:39:47.947 01:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1808832 00:39:47.947 01:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:39:47.947 01:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:48.205 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:39:48.205 01:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:48.205 01:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:39:48.205 01:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:39:48.205 01:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:48.205 01:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:39:48.206 01:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:48.206 01:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:39:48.206 01:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:39:48.206 01:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:39:48.206 nvmf hotplug test: fio failed as expected 00:39:48.206 01:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:48.464 01:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:39:48.464 01:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:39:48.464 01:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:39:48.464 01:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:39:48.465 01:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:39:48.465 01:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:39:48.465 01:50:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:39:48.465 01:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:48.465 01:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:39:48.465 01:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:48.465 01:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:48.465 rmmod nvme_tcp 00:39:48.465 rmmod nvme_fabrics 00:39:48.465 rmmod nvme_keyring 00:39:48.465 01:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:48.465 01:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:39:48.465 01:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:39:48.465 01:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 1806822 ']' 00:39:48.465 01:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 1806822 00:39:48.465 01:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 1806822 ']' 00:39:48.465 01:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 1806822 00:39:48.465 01:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:39:48.465 01:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:48.465 01:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1806822 00:39:48.465 01:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:48.465 01:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:48.465 01:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1806822' 00:39:48.465 killing process with pid 1806822 00:39:48.465 01:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 1806822 00:39:48.465 01:50:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 1806822 00:39:48.723 01:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:39:48.723 01:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:39:48.723 01:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:39:48.723 01:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:39:48.723 01:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:39:48.723 01:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:39:48.723 01:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # 
iptables-restore 00:39:48.723 01:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:48.723 01:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:48.723 01:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:48.723 01:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:48.723 01:50:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:51.253 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:51.253 00:39:51.253 real 0m23.652s 00:39:51.253 user 1m7.500s 00:39:51.253 sys 0m9.608s 00:39:51.253 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:51.253 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:51.253 ************************************ 00:39:51.253 END TEST nvmf_fio_target 00:39:51.253 ************************************ 00:39:51.253 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:39:51.253 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:51.253 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:51.253 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:51.253 ************************************ 00:39:51.253 START TEST nvmf_bdevio 00:39:51.253 ************************************ 00:39:51.253 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:39:51.253 * Looking for test storage... 
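Before the bdevio run begins, the nvmftestfini teardown that closed the fio_target test above can be summarized. The following is a minimal consolidated sketch, not the SPDK common.sh implementation: the PID and interface names are the ones reported in this log, and the namespace removal is approximated with `ip netns delete` because the body of _remove_spdk_ns is not shown in this excerpt.

    #!/usr/bin/env bash
    # Sketch of the teardown traced above (assumptions noted in the lead-in).
    set -x

    nvmfpid=1806822                      # nvmf_tgt PID reported by killprocess above

    # Unload the NVMe/TCP initiator modules, tolerating "not loaded" errors
    modprobe -v -r nvme-tcp || true
    modprobe -v -r nvme-fabrics || true
    modprobe -v -r nvme-keyring || true

    # Stop the target application and wait for it to exit
    if kill -0 "$nvmfpid" 2>/dev/null; then
        kill "$nvmfpid"
        while kill -0 "$nvmfpid" 2>/dev/null; do sleep 0.2; done
    fi

    # Strip the SPDK_NVMF-tagged firewall rules added during setup
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Remove the target namespace (approximation of _remove_spdk_ns) and
    # flush the initiator-side interface, as the log does with cvl_0_1
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
    ip -4 addr flush cvl_0_1
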
00:39:51.253 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:51.253 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:51.253 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:39:51.253 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:51.253 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:51.253 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:51.253 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:51.253 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:51.253 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:39:51.253 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:39:51.253 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:39:51.253 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:39:51.253 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:39:51.253 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:39:51.253 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:39:51.253 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:51.253 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:39:51.253 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:39:51.253 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:51.253 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:51.253 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:39:51.253 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:39:51.253 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:51.253 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:39:51.253 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:39:51.253 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:39:51.253 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:39:51.253 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:51.253 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:39:51.253 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:39:51.253 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:51.253 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:51.253 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:39:51.253 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:51.253 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:51.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:51.253 --rc genhtml_branch_coverage=1 00:39:51.253 --rc genhtml_function_coverage=1 00:39:51.253 --rc genhtml_legend=1 00:39:51.253 --rc geninfo_all_blocks=1 00:39:51.253 --rc geninfo_unexecuted_blocks=1 00:39:51.253 00:39:51.253 ' 00:39:51.253 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:51.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:51.253 --rc genhtml_branch_coverage=1 00:39:51.253 --rc genhtml_function_coverage=1 00:39:51.253 --rc genhtml_legend=1 00:39:51.253 --rc geninfo_all_blocks=1 00:39:51.253 --rc geninfo_unexecuted_blocks=1 00:39:51.253 00:39:51.253 ' 00:39:51.253 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:51.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:51.254 --rc genhtml_branch_coverage=1 00:39:51.254 --rc genhtml_function_coverage=1 00:39:51.254 --rc genhtml_legend=1 00:39:51.254 --rc geninfo_all_blocks=1 00:39:51.254 --rc geninfo_unexecuted_blocks=1 00:39:51.254 00:39:51.254 ' 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:51.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:51.254 --rc genhtml_branch_coverage=1 00:39:51.254 --rc genhtml_function_coverage=1 00:39:51.254 --rc genhtml_legend=1 00:39:51.254 --rc geninfo_all_blocks=1 00:39:51.254 --rc geninfo_unexecuted_blocks=1 00:39:51.254 00:39:51.254 ' 00:39:51.254 01:50:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:51.254 01:50:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:39:51.254 01:50:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:53.154 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:53.154 01:50:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:53.154 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:53.154 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 
-- # [[ tcp == tcp ]] 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:53.154 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:53.154 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:53.155 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:53.155 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:53.155 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:53.155 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:53.155 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:53.155 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:53.155 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:53.155 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:53.155 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:53.155 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:53.155 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:53.155 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:53.155 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:53.155 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:53.155 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:53.155 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:53.155 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:53.155 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:53.155 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:53.155 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:53.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms 00:39:53.155 00:39:53.155 --- 10.0.0.2 ping statistics --- 00:39:53.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:53.155 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:39:53.155 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:53.155 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:53.155 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:39:53.155 00:39:53.155 --- 10.0.0.1 ping statistics --- 00:39:53.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:53.155 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:39:53.155 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:53.155 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:39:53.155 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:39:53.155 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:53.155 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:39:53.155 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:39:53.155 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:53.155 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:39:53.155 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:39:53.155 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:39:53.155 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:39:53.155 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:53.155 01:50:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:53.155 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=1811547 00:39:53.155 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:39:53.155 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 1811547 00:39:53.155 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 1811547 ']' 00:39:53.155 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:53.155 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:53.155 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:53.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:53.155 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:53.155 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:53.155 [2024-10-13 01:50:38.574309] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:53.155 [2024-10-13 01:50:38.575368] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:39:53.155 [2024-10-13 01:50:38.575431] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:53.155 [2024-10-13 01:50:38.641797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:53.155 [2024-10-13 01:50:38.691286] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:53.155 [2024-10-13 01:50:38.691350] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:53.155 [2024-10-13 01:50:38.691366] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:53.155 [2024-10-13 01:50:38.691379] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:53.155 [2024-10-13 01:50:38.691390] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:53.155 [2024-10-13 01:50:38.693090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:39:53.155 [2024-10-13 01:50:38.693178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:39:53.155 [2024-10-13 01:50:38.693280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:39:53.155 [2024-10-13 01:50:38.693288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:53.414 [2024-10-13 01:50:38.782402] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
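Earlier in this excerpt, nvmf_tcp_init wired up the test network before the target was started: a namespace for the target-side port, addresses on both ends, a tagged firewall rule, and ping checks in each direction. A consolidated sketch of that sequence as plain shell, using the interface names and addresses from this log (cvl_0_0/cvl_0_1, 10.0.0.1/10.0.0.2, namespace cvl_0_0_ns_spdk), would look roughly like:

    #!/usr/bin/env bash
    # Sketch of the nvmf_tcp_init steps traced above, not the SPDK common.sh code itself.
    set -ex

    TARGET_NS=cvl_0_0_ns_spdk

    # Start from clean interfaces
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1

    # Put the target-side port into its own network namespace
    ip netns add "$TARGET_NS"
    ip link set cvl_0_0 netns "$TARGET_NS"

    # Initiator keeps 10.0.0.1, target gets 10.0.0.2 inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0

    # Bring the links (and namespace loopback) up
    ip link set cvl_0_1 up
    ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
    ip netns exec "$TARGET_NS" ip link set lo up

    # Allow NVMe/TCP traffic to port 4420, tagged so teardown can strip it again
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    # Sanity-check connectivity in both directions
    ping -c 1 10.0.0.2
    ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1
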
00:39:53.414 [2024-10-13 01:50:38.782642] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:53.414 [2024-10-13 01:50:38.782912] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:53.414 [2024-10-13 01:50:38.783479] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:53.414 [2024-10-13 01:50:38.783734] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:53.414 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:53.414 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:39:53.414 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:39:53.414 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:53.414 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:53.414 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:53.414 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:53.414 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:53.414 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:53.414 [2024-10-13 01:50:38.834046] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:53.414 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:53.414 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:53.414 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:53.414 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:53.414 Malloc0 00:39:53.414 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:53.414 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:53.414 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:53.414 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:53.414 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:53.414 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:53.414 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:53.414 01:50:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:53.414 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:53.414 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:53.414 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:53.414 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:53.414 [2024-10-13 01:50:38.898260] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:53.414 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:53.414 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:39:53.414 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:39:53.414 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:39:53.414 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:39:53.414 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:39:53.414 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:39:53.414 { 00:39:53.414 "params": { 00:39:53.414 "name": "Nvme$subsystem", 00:39:53.414 "trtype": "$TEST_TRANSPORT", 00:39:53.414 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:53.414 "adrfam": "ipv4", 00:39:53.414 "trsvcid": "$NVMF_PORT", 00:39:53.414 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:53.414 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:53.414 "hdgst": ${hdgst:-false}, 00:39:53.414 "ddgst": ${ddgst:-false} 00:39:53.414 }, 00:39:53.414 "method": "bdev_nvme_attach_controller" 00:39:53.414 } 00:39:53.414 EOF 00:39:53.414 )") 00:39:53.414 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:39:53.414 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:39:53.414 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:39:53.414 01:50:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:39:53.414 "params": { 00:39:53.414 "name": "Nvme1", 00:39:53.414 "trtype": "tcp", 00:39:53.414 "traddr": "10.0.0.2", 00:39:53.414 "adrfam": "ipv4", 00:39:53.414 "trsvcid": "4420", 00:39:53.414 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:53.414 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:53.414 "hdgst": false, 00:39:53.414 "ddgst": false 00:39:53.414 }, 00:39:53.414 "method": "bdev_nvme_attach_controller" 00:39:53.414 }' 00:39:53.414 [2024-10-13 01:50:38.943821] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
00:39:53.414 [2024-10-13 01:50:38.943914] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1811668 ] 00:39:53.671 [2024-10-13 01:50:39.005190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:53.671 [2024-10-13 01:50:39.054998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:53.671 [2024-10-13 01:50:39.055048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:53.671 [2024-10-13 01:50:39.055051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:53.671 I/O targets: 00:39:53.671 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:39:53.671 00:39:53.671 00:39:53.671 CUnit - A unit testing framework for C - Version 2.1-3 00:39:53.671 http://cunit.sourceforge.net/ 00:39:53.671 00:39:53.671 00:39:53.671 Suite: bdevio tests on: Nvme1n1 00:39:53.928 Test: blockdev write read block ...passed 00:39:53.928 Test: blockdev write zeroes read block ...passed 00:39:53.928 Test: blockdev write zeroes read no split ...passed 00:39:53.928 Test: blockdev write zeroes read split ...passed 00:39:53.928 Test: blockdev write zeroes read split partial ...passed 00:39:53.928 Test: blockdev reset ...[2024-10-13 01:50:39.330147] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:39:53.928 [2024-10-13 01:50:39.330251] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f4bc30 (9): Bad file descriptor 00:39:53.928 [2024-10-13 01:50:39.335239] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
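[editor's note] The rpc_cmd calls above (bdevio.sh lines 18-22) are plain SPDK JSON-RPC methods; with the stock client they would look roughly like the sketch below. This assumes the harness's rpc_cmd wrapper forwards its arguments to scripts/rpc.py on the default /var/tmp/spdk.sock, and the config filename is hypothetical:
  # provision the target that bdevio then exercises over NVMe/TCP
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # point bdevio at the generated bdev_nvme_attach_controller config printed above
  # (the harness passes it on an anonymous fd; a file works just as well)
  test/bdev/bdevio/bdevio --json ./nvme_target.json   # hypothetical filename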
00:39:53.928 passed 00:39:53.928 Test: blockdev write read 8 blocks ...passed 00:39:53.928 Test: blockdev write read size > 128k ...passed 00:39:53.928 Test: blockdev write read invalid size ...passed 00:39:53.928 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:39:53.928 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:39:53.928 Test: blockdev write read max offset ...passed 00:39:53.928 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:39:53.928 Test: blockdev writev readv 8 blocks ...passed 00:39:54.186 Test: blockdev writev readv 30 x 1block ...passed 00:39:54.186 Test: blockdev writev readv block ...passed 00:39:54.186 Test: blockdev writev readv size > 128k ...passed 00:39:54.186 Test: blockdev writev readv size > 128k in two iovs ...passed 00:39:54.186 Test: blockdev comparev and writev ...[2024-10-13 01:50:39.630699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:54.186 [2024-10-13 01:50:39.630735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:54.186 [2024-10-13 01:50:39.630759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:54.186 [2024-10-13 01:50:39.630776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:54.186 [2024-10-13 01:50:39.631185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:54.186 [2024-10-13 01:50:39.631209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:39:54.186 [2024-10-13 01:50:39.631230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:54.186 [2024-10-13 01:50:39.631246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:39:54.186 [2024-10-13 01:50:39.631639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:54.186 [2024-10-13 01:50:39.631663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:39:54.186 [2024-10-13 01:50:39.631683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:54.186 [2024-10-13 01:50:39.631699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:39:54.186 [2024-10-13 01:50:39.632138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:54.186 [2024-10-13 01:50:39.632161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:39:54.186 [2024-10-13 01:50:39.632181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:54.186 [2024-10-13 01:50:39.632196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:39:54.186 passed 00:39:54.186 Test: blockdev nvme passthru rw ...passed 00:39:54.186 Test: blockdev nvme passthru vendor specific ...[2024-10-13 01:50:39.715757] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:54.186 [2024-10-13 01:50:39.715784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:39:54.186 [2024-10-13 01:50:39.715930] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:54.186 [2024-10-13 01:50:39.715953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:39:54.186 [2024-10-13 01:50:39.716099] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:54.186 [2024-10-13 01:50:39.716121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:39:54.186 [2024-10-13 01:50:39.716266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:54.186 [2024-10-13 01:50:39.716289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:39:54.186 passed 00:39:54.186 Test: blockdev nvme admin passthru ...passed 00:39:54.444 Test: blockdev copy ...passed 00:39:54.444 00:39:54.444 Run Summary: Type Total Ran Passed Failed Inactive 00:39:54.444 suites 1 1 n/a 0 0 00:39:54.444 tests 23 23 23 0 0 00:39:54.444 asserts 152 152 152 0 n/a 00:39:54.444 00:39:54.444 Elapsed time = 1.098 seconds 00:39:54.444 01:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:54.444 01:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:54.444 01:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:54.444 01:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:54.444 01:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:39:54.444 01:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:39:54.444 01:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:39:54.444 01:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:39:54.444 01:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:54.444 01:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:39:54.444 01:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:54.444 01:50:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:54.444 rmmod nvme_tcp 00:39:54.444 rmmod nvme_fabrics 00:39:54.444 rmmod nvme_keyring 00:39:54.444 01:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
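[editor's note] nvmftestfini's cleanup, reduced to a hedged sketch of the steps visible in this trace (the SPDK_NVMF comment added at setup is what lets the ACCEPT rule be filtered back out; the namespace deletion is presumably what _remove_spdk_ns does):
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill 1811547                                            # killprocess $nvmfpid
  iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop the tagged ACCEPT rule
  ip netns delete cvl_0_0_ns_spdk                         # assumed _remove_spdk_ns equivalent
  ip -4 addr flush cvl_0_1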
00:39:54.444 01:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:39:54.444 01:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:39:54.444 01:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 1811547 ']' 00:39:54.444 01:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 1811547 00:39:54.444 01:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 1811547 ']' 00:39:54.444 01:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 1811547 00:39:54.444 01:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:39:54.702 01:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:54.702 01:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1811547 00:39:54.702 01:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:39:54.702 01:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:39:54.702 01:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1811547' 00:39:54.702 killing process with pid 1811547 00:39:54.702 01:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 1811547 00:39:54.702 01:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 1811547 00:39:54.960 01:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:39:54.960 01:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:39:54.960 01:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:39:54.960 01:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:39:54.960 01:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:39:54.960 01:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:39:54.960 01:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:39:54.960 01:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:54.960 01:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:54.960 01:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:54.960 01:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:54.960 01:50:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:56.862 01:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:56.862 00:39:56.862 real 0m6.084s 00:39:56.862 user 
0m7.564s 00:39:56.862 sys 0m2.335s 00:39:56.862 01:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:56.862 01:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:56.862 ************************************ 00:39:56.862 END TEST nvmf_bdevio 00:39:56.862 ************************************ 00:39:56.862 01:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:39:56.862 00:39:56.862 real 3m53.480s 00:39:56.862 user 8m50.041s 00:39:56.862 sys 1m24.127s 00:39:56.862 01:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:56.862 01:50:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:56.862 ************************************ 00:39:56.862 END TEST nvmf_target_core_interrupt_mode 00:39:56.862 ************************************ 00:39:56.862 01:50:42 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:39:56.862 01:50:42 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:39:56.862 01:50:42 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:56.862 01:50:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:56.862 ************************************ 00:39:56.862 START TEST nvmf_interrupt 00:39:56.862 ************************************ 00:39:56.862 01:50:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:39:57.121 * Looking for test storage... 
00:39:57.121 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:39:57.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:57.121 --rc genhtml_branch_coverage=1 00:39:57.121 --rc genhtml_function_coverage=1 00:39:57.121 --rc genhtml_legend=1 00:39:57.121 --rc geninfo_all_blocks=1 00:39:57.121 --rc geninfo_unexecuted_blocks=1 00:39:57.121 00:39:57.121 ' 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:39:57.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:57.121 --rc genhtml_branch_coverage=1 00:39:57.121 --rc genhtml_function_coverage=1 00:39:57.121 --rc genhtml_legend=1 00:39:57.121 --rc geninfo_all_blocks=1 00:39:57.121 --rc geninfo_unexecuted_blocks=1 00:39:57.121 00:39:57.121 ' 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:39:57.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:57.121 --rc genhtml_branch_coverage=1 00:39:57.121 --rc genhtml_function_coverage=1 00:39:57.121 --rc genhtml_legend=1 00:39:57.121 --rc geninfo_all_blocks=1 00:39:57.121 --rc geninfo_unexecuted_blocks=1 00:39:57.121 00:39:57.121 ' 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:39:57.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:57.121 --rc genhtml_branch_coverage=1 00:39:57.121 --rc genhtml_function_coverage=1 00:39:57.121 --rc genhtml_legend=1 00:39:57.121 --rc geninfo_all_blocks=1 00:39:57.121 --rc geninfo_unexecuted_blocks=1 00:39:57.121 00:39:57.121 ' 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:57.121 01:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:57.122 01:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:57.122 01:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:57.122 01:50:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:39:57.122 01:50:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:57.122 01:50:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:57.122 01:50:42 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:57.122 01:50:42 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:57.122 01:50:42 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:57.122 01:50:42 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:57.122 01:50:42 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:39:57.122 01:50:42 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:57.122 01:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:39:57.122 01:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:57.122 01:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:57.122 01:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:57.122 01:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:57.122 01:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:57.122 01:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:57.122 01:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:57.122 01:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:57.122 01:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:57.122 01:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:57.122 01:50:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:39:57.122 01:50:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:39:57.122 01:50:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:39:57.122 01:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:39:57.122 01:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:57.122 01:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # prepare_net_devs 00:39:57.122 01:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@436 -- # local -g is_hw=no 00:39:57.122 01:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # remove_spdk_ns 00:39:57.122 01:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:57.122 01:50:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:39:57.122 01:50:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:57.122 01:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:39:57.122 01:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:39:57.122 01:50:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:39:57.122 01:50:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:59.024 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:59.024 01:50:44 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:59.024 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:59.024 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:59.024 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # is_hw=yes 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:39:59.024 01:50:44 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:59.024 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:59.024 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:39:59.024 00:39:59.024 --- 10.0.0.2 ping statistics --- 00:39:59.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:59.024 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:39:59.024 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:59.024 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:59.024 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:39:59.024 00:39:59.024 --- 10.0.0.1 ping statistics --- 00:39:59.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:59.025 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:39:59.025 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:59.025 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@448 -- # return 0 00:39:59.025 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:39:59.025 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:59.025 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:39:59.025 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:39:59.025 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:59.025 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:39:59.025 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:39:59.025 01:50:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:39:59.025 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:39:59.025 01:50:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:59.025 01:50:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:59.283 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # nvmfpid=1813719 00:39:59.283 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:39:59.283 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # waitforlisten 1813719 00:39:59.283 01:50:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@831 -- # '[' -z 1813719 ']' 00:39:59.283 01:50:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:59.283 01:50:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:59.283 01:50:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:59.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:59.283 01:50:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:59.283 01:50:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:59.283 [2024-10-13 01:50:44.656051] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:59.283 [2024-10-13 01:50:44.657244] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:39:59.283 [2024-10-13 01:50:44.657297] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:59.283 [2024-10-13 01:50:44.727742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:59.283 [2024-10-13 01:50:44.778612] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
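[editor's note] As the app_setup_trace notices above point out, the target was started with tracepoint mask 0xFFFF and shm id 0, so a trace snapshot can be pulled from the running target at any time. A minimal sketch, assuming the trace tool sits at build/bin/spdk_trace in this tree:
  build/bin/spdk_trace -s nvmf -i 0        # live snapshot of the nvmf target's tracepoints
  cp /dev/shm/nvmf_trace.0 /tmp/           # or keep the raw shm ring for offline analysis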
00:39:59.283 [2024-10-13 01:50:44.778672] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:59.283 [2024-10-13 01:50:44.778700] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:59.283 [2024-10-13 01:50:44.778711] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:59.283 [2024-10-13 01:50:44.778721] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:59.283 [2024-10-13 01:50:44.782497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:59.283 [2024-10-13 01:50:44.782509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:59.542 [2024-10-13 01:50:44.880799] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:59.542 [2024-10-13 01:50:44.880806] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:59.542 [2024-10-13 01:50:44.881118] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:59.542 01:50:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:59.542 01:50:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # return 0 00:39:59.542 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:39:59.542 01:50:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:59.542 01:50:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:59.542 01:50:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:59.542 01:50:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:39:59.542 01:50:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:39:59.542 01:50:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:39:59.542 01:50:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:39:59.542 5000+0 records in 00:39:59.542 5000+0 records out 00:39:59.542 10240000 bytes (10 MB, 9.8 MiB) copied, 0.013869 s, 738 MB/s 00:39:59.542 01:50:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:39:59.542 01:50:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:59.542 01:50:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:59.542 AIO0 00:39:59.542 01:50:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:59.542 01:50:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:39:59.542 01:50:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:59.542 01:50:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:59.542 [2024-10-13 01:50:45.027234] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:59.542 01:50:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:59.542 01:50:45 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:59.542 01:50:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:59.542 01:50:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:59.542 01:50:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:59.542 01:50:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:39:59.542 01:50:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:59.542 01:50:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:59.542 01:50:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:59.542 01:50:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:59.542 01:50:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:59.542 01:50:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:59.542 [2024-10-13 01:50:45.051513] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:59.542 01:50:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:59.542 01:50:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:39:59.542 01:50:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1813719 0 00:39:59.542 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1813719 0 idle 00:39:59.542 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1813719 00:39:59.542 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:59.542 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:59.542 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:59.542 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:59.542 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:59.542 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:59.542 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:59.542 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:59.542 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:59.542 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1813719 -w 256 00:39:59.542 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:39:59.800 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1813719 root 20 0 128.2g 46848 34176 S 0.0 0.1 0:00.27 reactor_0' 00:39:59.800 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1813719 root 20 0 128.2g 46848 34176 S 0.0 0.1 0:00.27 reactor_0 00:39:59.800 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:39:59.800 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:39:59.800 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:39:59.800 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:39:59.800 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:39:59.800 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:39:59.800 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:39:59.800 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:39:59.800 01:50:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:39:59.800 01:50:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1813719 1 00:39:59.800 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1813719 1 idle 00:39:59.800 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1813719 00:39:59.800 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:39:59.800 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:59.800 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:39:59.800 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:39:59.800 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:39:59.800 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:39:59.800 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:39:59.800 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:39:59.800 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:39:59.800 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1813719 -w 256 00:39:59.801 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1813774 root 20 0 128.2g 46848 34176 S 0.0 0.1 0:00.00 reactor_1' 00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1813774 root 20 0 128.2g 46848 34176 S 0.0 0.1 0:00.00 reactor_1 00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1813822 00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 
00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1813719 0 00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1813719 0 busy 00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1813719 00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1813719 -w 256 00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1813719 root 20 0 128.2g 48000 34560 R 99.9 0.1 0:00.49 reactor_0' 00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1813719 root 20 0 128.2g 48000 34560 R 99.9 0.1 0:00.49 reactor_0 00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1813719 1 00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1813719 1 busy 00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1813719 00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1813719 -w 256 00:40:00.059 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:00.318 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1813774 root 20 0 128.2g 48000 34560 R 93.8 0.1 0:00.27 reactor_1' 00:40:00.318 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1813774 root 20 0 128.2g 48000 34560 R 93.8 0.1 0:00.27 reactor_1 00:40:00.318 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:00.318 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:00.318 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.8 00:40:00.318 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:40:00.318 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:40:00.318 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:40:00.318 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:40:00.318 01:50:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:00.318 01:50:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1813822 00:40:10.288 Initializing NVMe Controllers 00:40:10.288 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:10.288 Controller IO queue size 256, less than required. 00:40:10.288 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:10.288 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:40:10.288 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:40:10.288 Initialization complete. Launching workers. 
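For reference, the spdk_nvme_perf invocation whose results follow can be read flag by flag as below. The flag meanings are the usual spdk_nvme_perf options and are noted here purely as an annotation of the command already shown in the trace, not as captured output:

# -q 256             queue depth per queue
# -o 4096            4 KiB I/O size
# -w randrw -M 30    random mixed workload, roughly 30% reads / 70% writes
# -t 10              run time in seconds
# -c 0xC             initiator core mask (cores 2 and 3, matching the lcore 2/3 lines above)
# -r '...'           NVMe-oF TCP transport ID of the subsystem exported by the target
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
  -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'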
00:40:10.288 ======================================================== 00:40:10.289 Latency(us) 00:40:10.289 Device Information : IOPS MiB/s Average min max 00:40:10.289 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 13541.20 52.90 18918.90 4376.79 59557.63 00:40:10.289 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 14112.20 55.13 18152.93 4547.22 22134.46 00:40:10.289 ======================================================== 00:40:10.289 Total : 27653.39 108.02 18528.01 4376.79 59557.63 00:40:10.289 00:40:10.289 01:50:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:40:10.289 01:50:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1813719 0 00:40:10.289 01:50:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1813719 0 idle 00:40:10.289 01:50:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1813719 00:40:10.289 01:50:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:10.289 01:50:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:10.289 01:50:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:10.289 01:50:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:10.289 01:50:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:10.289 01:50:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:10.289 01:50:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:10.289 01:50:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:10.289 01:50:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:10.289 01:50:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1813719 -w 256 00:40:10.289 01:50:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:10.289 01:50:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1813719 root 20 0 128.2g 48000 34560 S 0.0 0.1 0:20.22 reactor_0' 00:40:10.289 01:50:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1813719 root 20 0 128.2g 48000 34560 S 0.0 0.1 0:20.22 reactor_0 00:40:10.289 01:50:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:10.289 01:50:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:10.289 01:50:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:10.289 01:50:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:10.289 01:50:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:10.289 01:50:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:10.289 01:50:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:10.289 01:50:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:10.289 01:50:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:40:10.289 01:50:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1813719 1 00:40:10.289 01:50:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1813719 1 idle 00:40:10.289 01:50:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1813719 00:40:10.289 01:50:55 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:40:10.289 01:50:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:10.289 01:50:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:10.289 01:50:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:10.289 01:50:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:10.289 01:50:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:10.289 01:50:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:10.289 01:50:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:10.289 01:50:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:10.289 01:50:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1813719 -w 256 00:40:10.289 01:50:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:10.547 01:50:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1813774 root 20 0 128.2g 48000 34560 S 0.0 0.1 0:09.98 reactor_1' 00:40:10.547 01:50:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1813774 root 20 0 128.2g 48000 34560 S 0.0 0.1 0:09.98 reactor_1 00:40:10.547 01:50:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:10.547 01:50:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:10.547 01:50:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:10.547 01:50:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:10.547 01:50:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:10.547 01:50:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:10.547 01:50:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:10.547 01:50:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:10.547 01:50:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:40:10.547 01:50:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:40:10.547 01:50:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1198 -- # local i=0 00:40:10.547 01:50:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:40:10.547 01:50:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:40:10.547 01:50:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1205 -- # sleep 2 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # return 0 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1813719 0 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1813719 0 idle 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1813719 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1813719 -w 256 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1813719 root 20 0 128.2g 60288 34560 S 0.0 0.1 0:20.32 reactor_0' 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1813719 root 20 0 128.2g 60288 34560 S 0.0 0.1 0:20.32 reactor_0 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1813719 1 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1813719 1 idle 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1813719 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1813719 -w 256 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1813774 root 20 0 128.2g 60288 34560 S 0.0 0.1 0:10.01 reactor_1' 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1813774 root 20 0 128.2g 60288 34560 S 0.0 0.1 0:10.01 reactor_1 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:13.131 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1219 -- # local i=0 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # return 0 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@514 -- # nvmfcleanup 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:13.131 rmmod nvme_tcp 00:40:13.131 rmmod nvme_fabrics 00:40:13.131 rmmod nvme_keyring 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@515 -- # '[' -n 
1813719 ']' 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # killprocess 1813719 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@950 -- # '[' -z 1813719 ']' 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # kill -0 1813719 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # uname 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1813719 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1813719' 00:40:13.131 killing process with pid 1813719 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@969 -- # kill 1813719 00:40:13.131 01:50:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@974 -- # wait 1813719 00:40:13.390 01:50:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:40:13.390 01:50:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:40:13.390 01:50:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:40:13.390 01:50:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:40:13.390 01:50:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-save 00:40:13.390 01:50:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:40:13.390 01:50:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-restore 00:40:13.390 01:50:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:13.390 01:50:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:13.390 01:50:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:13.390 01:50:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:13.390 01:50:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:15.921 01:51:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:15.921 00:40:15.921 real 0m18.517s 00:40:15.921 user 0m37.192s 00:40:15.921 sys 0m6.472s 00:40:15.921 01:51:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:15.921 01:51:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:15.921 ************************************ 00:40:15.921 END TEST nvmf_interrupt 00:40:15.921 ************************************ 00:40:15.921 00:40:15.921 real 32m52.937s 00:40:15.921 user 87m28.598s 00:40:15.921 sys 7m55.579s 00:40:15.921 01:51:00 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:15.921 01:51:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:15.921 ************************************ 00:40:15.921 END TEST nvmf_tcp 00:40:15.921 ************************************ 00:40:15.921 01:51:00 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:40:15.921 01:51:00 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:40:15.921 01:51:00 -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:40:15.921 01:51:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:15.921 01:51:00 -- common/autotest_common.sh@10 -- # set +x 00:40:15.921 ************************************ 00:40:15.921 START TEST spdkcli_nvmf_tcp 00:40:15.921 ************************************ 00:40:15.921 01:51:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:40:15.921 * Looking for test storage... 00:40:15.921 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:40:15.921 01:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:40:15.921 01:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:40:15.921 01:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:40:15.921 01:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:40:15.921 01:51:01 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:15.921 01:51:01 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:15.921 01:51:01 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:15.921 01:51:01 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:40:15.921 01:51:01 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:40:15.921 01:51:01 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:40:15.921 01:51:01 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:40:15.921 01:51:01 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:40:15.921 01:51:01 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:40:15.921 01:51:01 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:40:15.921 01:51:01 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:15.921 01:51:01 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:40:15.921 01:51:01 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:40:15.921 01:51:01 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:15.921 01:51:01 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:15.921 01:51:01 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:40:15.921 01:51:01 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:40:15.921 01:51:01 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:15.921 01:51:01 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:40:15.921 01:51:01 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:40:15.921 01:51:01 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:40:15.921 01:51:01 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:40:15.921 01:51:01 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:15.921 01:51:01 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:40:15.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:15.922 --rc genhtml_branch_coverage=1 00:40:15.922 --rc genhtml_function_coverage=1 00:40:15.922 --rc genhtml_legend=1 00:40:15.922 --rc geninfo_all_blocks=1 00:40:15.922 --rc geninfo_unexecuted_blocks=1 00:40:15.922 00:40:15.922 ' 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:40:15.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:15.922 --rc genhtml_branch_coverage=1 00:40:15.922 --rc genhtml_function_coverage=1 00:40:15.922 --rc genhtml_legend=1 00:40:15.922 --rc geninfo_all_blocks=1 00:40:15.922 --rc geninfo_unexecuted_blocks=1 00:40:15.922 00:40:15.922 ' 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:40:15.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:15.922 --rc genhtml_branch_coverage=1 00:40:15.922 --rc genhtml_function_coverage=1 00:40:15.922 --rc genhtml_legend=1 00:40:15.922 --rc geninfo_all_blocks=1 00:40:15.922 --rc geninfo_unexecuted_blocks=1 00:40:15.922 00:40:15.922 ' 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:40:15.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:15.922 --rc genhtml_branch_coverage=1 00:40:15.922 --rc genhtml_function_coverage=1 00:40:15.922 --rc genhtml_legend=1 00:40:15.922 --rc geninfo_all_blocks=1 00:40:15.922 --rc geninfo_unexecuted_blocks=1 00:40:15.922 00:40:15.922 ' 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:40:15.922 
01:51:01 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:40:15.922 01:51:01 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:15.922 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1815886 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1815886 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 1815886 ']' 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:15.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:15.922 [2024-10-13 01:51:01.196167] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
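The spdkcli test drives a dedicated nvmf_tgt instance: run_nvmf_tgt above launches build/bin/nvmf_tgt with -m 0x3 (two reactors, cores 0 and 1) and -p 0 (core 0 as the main core), records its pid (1815886), and waitforlisten waits until the process is listening on the UNIX domain socket /var/tmp/spdk.sock before any spdkcli commands are issued. A rough equivalent of that bring-up, sketched only for orientation (the real waitforlisten helper is more thorough and also checks that the process is still alive):

NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt

# -m 0x3: run two reactors on cores 0 and 1; -p 0: use core 0 as the main core.
"$NVMF_TGT" -m 0x3 -p 0 &
tgt_pid=$!

# Wait for the default RPC socket to appear before configuring the target.
until [ -S /var/tmp/spdk.sock ]; do
    sleep 0.5
done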
00:40:15.922 [2024-10-13 01:51:01.196268] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1815886 ] 00:40:15.922 [2024-10-13 01:51:01.256327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:15.922 [2024-10-13 01:51:01.306601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:15.922 [2024-10-13 01:51:01.306605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:15.922 01:51:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:40:15.922 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:40:15.922 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:40:15.922 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:40:15.922 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:40:15.922 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:40:15.922 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:40:15.922 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:40:15.922 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:40:15.922 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:40:15.922 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:40:15.922 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:15.922 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:40:15.922 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:40:15.922 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:15.922 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:40:15.922 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:40:15.922 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:40:15.923 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:40:15.923 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:15.923 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:40:15.923 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:40:15.923 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:40:15.923 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:40:15.923 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:15.923 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:40:15.923 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:40:15.923 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:40:15.923 ' 00:40:19.204 [2024-10-13 01:51:04.117576] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:20.138 [2024-10-13 01:51:05.398126] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:40:22.665 [2024-10-13 01:51:07.773572] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:40:24.563 [2024-10-13 01:51:09.816061] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:40:25.936 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:40:25.936 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:40:25.936 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:40:25.936 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:40:25.936 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:40:25.936 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:40:25.936 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:40:25.936 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:40:25.936 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:40:25.936 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:40:25.936 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:25.936 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:25.936 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:40:25.936 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:25.936 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:25.936 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:40:25.936 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:25.936 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:40:25.936 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:40:25.936 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:25.936 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:40:25.936 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:40:25.936 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:40:25.936 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:40:25.936 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:25.936 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:40:25.936 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:40:25.936 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:40:25.936 01:51:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:40:25.936 01:51:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:25.936 01:51:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:25.936 01:51:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:40:25.936 01:51:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:25.936 01:51:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:25.936 01:51:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:40:25.936 01:51:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:40:26.502 01:51:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:40:26.502 01:51:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:40:26.502 01:51:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:40:26.502 01:51:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:26.502 01:51:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:26.502 
01:51:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:40:26.502 01:51:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:26.502 01:51:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:26.502 01:51:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:40:26.502 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:40:26.502 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:40:26.502 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:40:26.502 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:40:26.502 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:40:26.502 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:40:26.502 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:40:26.502 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:40:26.502 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:40:26.502 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:40:26.502 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:40:26.502 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:40:26.502 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:40:26.502 ' 00:40:31.764 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:40:31.764 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:40:31.764 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:40:31.764 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:40:31.764 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:40:31.764 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:40:31.764 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:40:31.764 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:40:31.764 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:40:31.764 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:40:31.764 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:40:31.764 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:40:31.764 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:40:31.764 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:40:32.021 01:51:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:40:32.021 01:51:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:32.021 01:51:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:32.021 
01:51:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1815886 00:40:32.021 01:51:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1815886 ']' 00:40:32.021 01:51:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1815886 00:40:32.021 01:51:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:40:32.021 01:51:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:32.021 01:51:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1815886 00:40:32.021 01:51:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:32.021 01:51:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:32.021 01:51:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1815886' 00:40:32.021 killing process with pid 1815886 00:40:32.021 01:51:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 1815886 00:40:32.021 01:51:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 1815886 00:40:32.279 01:51:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:40:32.279 01:51:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:40:32.279 01:51:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1815886 ']' 00:40:32.279 01:51:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1815886 00:40:32.279 01:51:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1815886 ']' 00:40:32.279 01:51:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1815886 00:40:32.279 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1815886) - No such process 00:40:32.279 01:51:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 1815886 is not found' 00:40:32.279 Process with pid 1815886 is not found 00:40:32.279 01:51:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:40:32.279 01:51:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:40:32.279 01:51:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:40:32.279 00:40:32.279 real 0m16.692s 00:40:32.279 user 0m35.772s 00:40:32.279 sys 0m0.776s 00:40:32.279 01:51:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:32.279 01:51:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:32.279 ************************************ 00:40:32.279 END TEST spdkcli_nvmf_tcp 00:40:32.279 ************************************ 00:40:32.279 01:51:17 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:40:32.279 01:51:17 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:40:32.279 01:51:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:32.279 01:51:17 -- common/autotest_common.sh@10 -- # set +x 00:40:32.279 ************************************ 00:40:32.279 START TEST nvmf_identify_passthru 00:40:32.279 ************************************ 00:40:32.279 01:51:17 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:40:32.279 * Looking for test 
storage... 00:40:32.279 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:32.279 01:51:17 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:40:32.279 01:51:17 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:40:32.279 01:51:17 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:40:32.555 01:51:17 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:40:32.555 01:51:17 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:32.555 01:51:17 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:32.555 01:51:17 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:32.555 01:51:17 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:40:32.555 01:51:17 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:40:32.555 01:51:17 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:40:32.555 01:51:17 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:40:32.555 01:51:17 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:40:32.555 01:51:17 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:40:32.555 01:51:17 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:40:32.555 01:51:17 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:32.555 01:51:17 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:40:32.555 01:51:17 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:40:32.555 01:51:17 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:32.555 01:51:17 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:32.555 01:51:17 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:40:32.555 01:51:17 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:40:32.555 01:51:17 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:32.555 01:51:17 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:40:32.555 01:51:17 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:40:32.555 01:51:17 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:40:32.555 01:51:17 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:40:32.555 01:51:17 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:32.555 01:51:17 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:40:32.555 01:51:17 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:40:32.555 01:51:17 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:32.555 01:51:17 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:32.555 01:51:17 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:40:32.555 01:51:17 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:32.555 01:51:17 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:40:32.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:32.555 --rc genhtml_branch_coverage=1 00:40:32.555 --rc genhtml_function_coverage=1 00:40:32.555 --rc genhtml_legend=1 00:40:32.555 --rc geninfo_all_blocks=1 00:40:32.555 --rc geninfo_unexecuted_blocks=1 00:40:32.555 00:40:32.555 ' 00:40:32.555 01:51:17 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:40:32.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:32.555 --rc genhtml_branch_coverage=1 00:40:32.555 --rc genhtml_function_coverage=1 00:40:32.555 --rc genhtml_legend=1 00:40:32.555 --rc geninfo_all_blocks=1 00:40:32.555 --rc geninfo_unexecuted_blocks=1 00:40:32.555 00:40:32.555 ' 00:40:32.555 01:51:17 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:40:32.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:32.555 --rc genhtml_branch_coverage=1 00:40:32.555 --rc genhtml_function_coverage=1 00:40:32.555 --rc genhtml_legend=1 00:40:32.555 --rc geninfo_all_blocks=1 00:40:32.555 --rc geninfo_unexecuted_blocks=1 00:40:32.555 00:40:32.555 ' 00:40:32.555 01:51:17 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:40:32.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:32.555 --rc genhtml_branch_coverage=1 00:40:32.555 --rc genhtml_function_coverage=1 00:40:32.555 --rc genhtml_legend=1 00:40:32.555 --rc geninfo_all_blocks=1 00:40:32.555 --rc geninfo_unexecuted_blocks=1 00:40:32.555 00:40:32.555 ' 00:40:32.555 01:51:17 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:32.555 01:51:17 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:40:32.555 01:51:17 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:32.555 01:51:17 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:32.556 01:51:17 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:32.556 01:51:17 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:40:32.556 01:51:17 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:32.556 01:51:17 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:32.556 01:51:17 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:32.556 01:51:17 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:32.556 01:51:17 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:32.556 01:51:17 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:32.556 01:51:17 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:32.556 01:51:17 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:32.556 01:51:17 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:32.556 01:51:17 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:32.556 01:51:17 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:32.556 01:51:17 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:32.556 01:51:17 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:32.556 01:51:17 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:40:32.556 01:51:17 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:32.556 01:51:17 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:32.556 01:51:17 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:32.556 01:51:17 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:32.556 01:51:17 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:32.556 01:51:17 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:32.556 01:51:17 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:40:32.556 01:51:17 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:32.556 01:51:17 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:40:32.556 01:51:17 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:32.556 01:51:17 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:32.556 01:51:17 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:32.556 01:51:17 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:32.556 01:51:17 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:32.556 01:51:17 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:32.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:32.556 01:51:17 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:32.556 01:51:17 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:32.556 01:51:17 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:32.556 01:51:17 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:32.556 01:51:17 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:40:32.556 01:51:17 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:32.556 01:51:17 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:32.556 01:51:17 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:32.556 01:51:17 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:32.556 01:51:17 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:32.556 01:51:17 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:32.556 01:51:17 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:40:32.556 01:51:17 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:32.556 01:51:17 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:40:32.556 01:51:17 nvmf_identify_passthru -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:40:32.556 01:51:17 nvmf_identify_passthru -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:32.556 01:51:17 nvmf_identify_passthru -- nvmf/common.sh@474 -- # prepare_net_devs 00:40:32.556 01:51:17 nvmf_identify_passthru -- nvmf/common.sh@436 -- # local -g is_hw=no 00:40:32.556 01:51:17 nvmf_identify_passthru -- nvmf/common.sh@438 -- # remove_spdk_ns 00:40:32.556 01:51:17 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:32.556 01:51:17 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:32.556 01:51:17 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:32.556 01:51:17 nvmf_identify_passthru -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:40:32.556 01:51:17 nvmf_identify_passthru -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:40:32.556 01:51:17 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:40:32.556 01:51:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:34.461 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:34.461 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:40:34.461 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:34.461 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:34.461 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:34.461 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:34.461 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:34.461 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:40:34.461 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:34.461 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:40:34.461 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:40:34.461 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:40:34.461 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:40:34.461 01:51:19 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:40:34.461 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:40:34.461 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:34.461 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:34.461 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:34.461 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:34.461 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:34.461 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:34.461 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:34.461 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:34.461 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:34.461 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:34.461 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:34.461 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:34.461 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:34.461 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:34.461 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:34.461 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:34.461 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:34.461 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:34.462 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:34.462 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:34.462 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:34.462 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@440 -- # is_hw=yes 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:34.462 01:51:19 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:34.462 01:51:19 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:34.462 01:51:20 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:34.722 01:51:20 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:34.722 01:51:20 nvmf_identify_passthru -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:34.722 01:51:20 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:34.722 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:34.722 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:40:34.722 00:40:34.722 --- 10.0.0.2 ping statistics --- 00:40:34.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:34.722 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:40:34.722 01:51:20 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:34.722 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:34.722 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:40:34.722 00:40:34.722 --- 10.0.0.1 ping statistics --- 00:40:34.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:34.722 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:40:34.722 01:51:20 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:34.722 01:51:20 nvmf_identify_passthru -- nvmf/common.sh@448 -- # return 0 00:40:34.722 01:51:20 nvmf_identify_passthru -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:40:34.722 01:51:20 nvmf_identify_passthru -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:34.722 01:51:20 nvmf_identify_passthru -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:40:34.722 01:51:20 nvmf_identify_passthru -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:40:34.722 01:51:20 nvmf_identify_passthru -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:34.722 01:51:20 nvmf_identify_passthru -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:40:34.722 01:51:20 nvmf_identify_passthru -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:40:34.722 01:51:20 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:40:34.722 01:51:20 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:34.722 01:51:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:34.722 01:51:20 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:40:34.722 01:51:20 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:40:34.722 01:51:20 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:40:34.722 01:51:20 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:40:34.722 01:51:20 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:40:34.722 01:51:20 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:40:34.722 01:51:20 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:40:34.722 01:51:20 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:40:34.722 01:51:20 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:40:34.722 01:51:20 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:40:34.722 01:51:20 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:40:34.722 01:51:20 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:88:00.0 00:40:34.722 01:51:20 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:88:00.0 00:40:34.722 01:51:20 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:40:34.722 01:51:20 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:40:34.722 01:51:20 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:40:34.722 01:51:20 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:40:34.722 01:51:20 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:40:38.910 01:51:24 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=PHLJ916004901P0FGN 00:40:38.910 01:51:24 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:40:38.910 01:51:24 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:40:38.910 01:51:24 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:40:43.095 01:51:28 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:40:43.095 01:51:28 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:40:43.095 01:51:28 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:43.095 01:51:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:43.095 01:51:28 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:40:43.095 01:51:28 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:43.095 01:51:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:43.095 01:51:28 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1821059 00:40:43.095 01:51:28 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:40:43.095 01:51:28 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:43.095 01:51:28 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1821059 00:40:43.095 01:51:28 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 1821059 ']' 00:40:43.095 01:51:28 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:43.095 01:51:28 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:43.095 01:51:28 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:43.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:43.095 01:51:28 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:43.095 01:51:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:43.095 [2024-10-13 01:51:28.590821] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:40:43.095 [2024-10-13 01:51:28.590906] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:43.095 [2024-10-13 01:51:28.664628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:43.353 [2024-10-13 01:51:28.717870] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:43.353 [2024-10-13 01:51:28.717935] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
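The entries above show the SPDK NVMe-oF target being launched inside the test network namespace (cvl_0_0_ns_spdk) with --wait-for-rpc, after which the script enables the passthru identify handler and finishes framework init over RPC. As a minimal sketch of that startup sequence, using only the paths, namespace name, and flags that appear in this trace (the readiness probe and the spdk_trace binary path are illustrative assumptions, not taken from the log):

#!/usr/bin/env bash
# Sketch: start nvmf_tgt in the test namespace and bring it to a configured state.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NS=cvl_0_0_ns_spdk

# Same invocation as in the trace: defer subsystem init until an RPC says go.
ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
nvmfpid=$!

# Assumption: poll the RPC socket as a readiness check (the test suite uses its
# own waitforlisten helper for this step).
until "$SPDK/scripts/rpc.py" -t 5 rpc_get_methods >/dev/null 2>&1; do sleep 1; done

# Mirrors the RPCs issued next in this trace.
"$SPDK/scripts/rpc.py" nvmf_set_config --passthru-identify-ctrlr
"$SPDK/scripts/rpc.py" framework_start_init

# Optional, as suggested by the NOTICE above (binary path assumed):
# "$SPDK/build/bin/spdk_trace" -s nvmf -i 0

The later entries in this trace build on that state: bdev_nvme_attach_controller binds the local NVMe device at 0000:88:00.0 as Nvme0, nvmf_create_subsystem / nvmf_subsystem_add_ns / nvmf_subsystem_add_listener expose it as nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, and the serial and model numbers reported by spdk_nvme_identify over TCP are compared against the direct PCIe identify results.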
00:40:43.354 [2024-10-13 01:51:28.717963] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:43.354 [2024-10-13 01:51:28.717977] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:43.354 [2024-10-13 01:51:28.717989] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:43.354 [2024-10-13 01:51:28.719616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:43.354 [2024-10-13 01:51:28.719675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:43.354 [2024-10-13 01:51:28.719728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:43.354 [2024-10-13 01:51:28.719732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:43.354 01:51:28 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:43.354 01:51:28 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:40:43.354 01:51:28 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:40:43.354 01:51:28 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:43.354 01:51:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:43.354 INFO: Log level set to 20 00:40:43.354 INFO: Requests: 00:40:43.354 { 00:40:43.354 "jsonrpc": "2.0", 00:40:43.354 "method": "nvmf_set_config", 00:40:43.354 "id": 1, 00:40:43.354 "params": { 00:40:43.354 "admin_cmd_passthru": { 00:40:43.354 "identify_ctrlr": true 00:40:43.354 } 00:40:43.354 } 00:40:43.354 } 00:40:43.354 00:40:43.354 INFO: response: 00:40:43.354 { 00:40:43.354 "jsonrpc": "2.0", 00:40:43.354 "id": 1, 00:40:43.354 "result": true 00:40:43.354 } 00:40:43.354 00:40:43.354 01:51:28 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:43.354 01:51:28 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:40:43.354 01:51:28 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:43.354 01:51:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:43.354 INFO: Setting log level to 20 00:40:43.354 INFO: Setting log level to 20 00:40:43.354 INFO: Log level set to 20 00:40:43.354 INFO: Log level set to 20 00:40:43.354 INFO: Requests: 00:40:43.354 { 00:40:43.354 "jsonrpc": "2.0", 00:40:43.354 "method": "framework_start_init", 00:40:43.354 "id": 1 00:40:43.354 } 00:40:43.354 00:40:43.354 INFO: Requests: 00:40:43.354 { 00:40:43.354 "jsonrpc": "2.0", 00:40:43.354 "method": "framework_start_init", 00:40:43.354 "id": 1 00:40:43.354 } 00:40:43.354 00:40:43.354 [2024-10-13 01:51:28.918005] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:40:43.354 INFO: response: 00:40:43.354 { 00:40:43.354 "jsonrpc": "2.0", 00:40:43.354 "id": 1, 00:40:43.354 "result": true 00:40:43.354 } 00:40:43.354 00:40:43.354 INFO: response: 00:40:43.354 { 00:40:43.354 "jsonrpc": "2.0", 00:40:43.354 "id": 1, 00:40:43.354 "result": true 00:40:43.354 } 00:40:43.354 00:40:43.354 01:51:28 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:43.354 01:51:28 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:43.354 01:51:28 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:43.354 01:51:28 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:40:43.354 INFO: Setting log level to 40 00:40:43.354 INFO: Setting log level to 40 00:40:43.354 INFO: Setting log level to 40 00:40:43.354 [2024-10-13 01:51:28.928208] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:43.612 01:51:28 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:43.612 01:51:28 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:40:43.612 01:51:28 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:43.612 01:51:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:43.612 01:51:28 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:40:43.612 01:51:28 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:43.612 01:51:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:46.894 Nvme0n1 00:40:46.894 01:51:31 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:46.894 01:51:31 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:40:46.894 01:51:31 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:46.894 01:51:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:46.894 01:51:31 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:46.894 01:51:31 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:40:46.894 01:51:31 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:46.894 01:51:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:46.894 01:51:31 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:46.894 01:51:31 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:46.894 01:51:31 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:46.894 01:51:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:46.894 [2024-10-13 01:51:31.827611] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:46.894 01:51:31 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:46.894 01:51:31 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:40:46.894 01:51:31 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:46.894 01:51:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:46.894 [ 00:40:46.894 { 00:40:46.894 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:40:46.894 "subtype": "Discovery", 00:40:46.894 "listen_addresses": [], 00:40:46.894 "allow_any_host": true, 00:40:46.894 "hosts": [] 00:40:46.894 }, 00:40:46.894 { 00:40:46.894 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:40:46.894 "subtype": "NVMe", 00:40:46.894 "listen_addresses": [ 00:40:46.894 { 00:40:46.894 "trtype": "TCP", 00:40:46.894 "adrfam": "IPv4", 00:40:46.894 "traddr": "10.0.0.2", 00:40:46.894 "trsvcid": "4420" 00:40:46.894 } 00:40:46.894 ], 00:40:46.894 "allow_any_host": true, 00:40:46.894 "hosts": [], 00:40:46.894 "serial_number": 
"SPDK00000000000001", 00:40:46.894 "model_number": "SPDK bdev Controller", 00:40:46.894 "max_namespaces": 1, 00:40:46.894 "min_cntlid": 1, 00:40:46.894 "max_cntlid": 65519, 00:40:46.894 "namespaces": [ 00:40:46.894 { 00:40:46.894 "nsid": 1, 00:40:46.894 "bdev_name": "Nvme0n1", 00:40:46.894 "name": "Nvme0n1", 00:40:46.894 "nguid": "672FC01E6BDB4E96BC2662F627A20E20", 00:40:46.894 "uuid": "672fc01e-6bdb-4e96-bc26-62f627a20e20" 00:40:46.894 } 00:40:46.894 ] 00:40:46.894 } 00:40:46.894 ] 00:40:46.894 01:51:31 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:46.894 01:51:31 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:46.894 01:51:31 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:40:46.894 01:51:31 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:40:46.894 01:51:32 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:40:46.894 01:51:32 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:46.894 01:51:32 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:40:46.894 01:51:32 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:40:46.894 01:51:32 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:40:46.894 01:51:32 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:40:46.894 01:51:32 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:40:46.894 01:51:32 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:46.894 01:51:32 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:46.894 01:51:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:46.894 01:51:32 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:46.894 01:51:32 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:40:46.894 01:51:32 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:40:46.894 01:51:32 nvmf_identify_passthru -- nvmf/common.sh@514 -- # nvmfcleanup 00:40:46.894 01:51:32 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:40:46.894 01:51:32 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:46.894 01:51:32 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:40:46.894 01:51:32 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:46.894 01:51:32 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:46.894 rmmod nvme_tcp 00:40:46.894 rmmod nvme_fabrics 00:40:46.894 rmmod nvme_keyring 00:40:46.894 01:51:32 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:46.894 01:51:32 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:40:46.894 01:51:32 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:40:46.894 01:51:32 nvmf_identify_passthru -- nvmf/common.sh@515 -- # 
'[' -n 1821059 ']' 00:40:46.894 01:51:32 nvmf_identify_passthru -- nvmf/common.sh@516 -- # killprocess 1821059 00:40:46.894 01:51:32 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 1821059 ']' 00:40:46.894 01:51:32 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 1821059 00:40:46.894 01:51:32 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:40:46.894 01:51:32 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:40:46.894 01:51:32 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1821059 00:40:46.894 01:51:32 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:46.894 01:51:32 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:46.894 01:51:32 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1821059' 00:40:46.894 killing process with pid 1821059 00:40:46.894 01:51:32 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 1821059 00:40:46.894 01:51:32 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 1821059 00:40:48.268 01:51:33 nvmf_identify_passthru -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:40:48.268 01:51:33 nvmf_identify_passthru -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:40:48.268 01:51:33 nvmf_identify_passthru -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:40:48.268 01:51:33 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:40:48.268 01:51:33 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-save 00:40:48.268 01:51:33 nvmf_identify_passthru -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:40:48.268 01:51:33 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-restore 00:40:48.268 01:51:33 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:48.268 01:51:33 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:48.268 01:51:33 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:48.268 01:51:33 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:48.268 01:51:33 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:50.856 01:51:35 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:50.856 00:40:50.856 real 0m18.141s 00:40:50.856 user 0m27.071s 00:40:50.856 sys 0m2.343s 00:40:50.856 01:51:35 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:50.856 01:51:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:50.856 ************************************ 00:40:50.856 END TEST nvmf_identify_passthru 00:40:50.856 ************************************ 00:40:50.856 01:51:35 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:40:50.856 01:51:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:40:50.856 01:51:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:50.856 01:51:35 -- common/autotest_common.sh@10 -- # set +x 00:40:50.856 ************************************ 00:40:50.856 START TEST nvmf_dif 00:40:50.856 ************************************ 00:40:50.856 01:51:35 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:40:50.856 * Looking for test 
storage... 00:40:50.856 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:50.856 01:51:35 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:40:50.856 01:51:35 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:40:50.856 01:51:35 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:40:50.856 01:51:36 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:40:50.856 01:51:36 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:50.856 01:51:36 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:50.856 01:51:36 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:50.856 01:51:36 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:40:50.856 01:51:36 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:40:50.856 01:51:36 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:40:50.856 01:51:36 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:40:50.856 01:51:36 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:40:50.856 01:51:36 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:40:50.856 01:51:36 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:40:50.856 01:51:36 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:50.856 01:51:36 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:40:50.856 01:51:36 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:40:50.856 01:51:36 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:50.856 01:51:36 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:50.856 01:51:36 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:40:50.856 01:51:36 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:40:50.856 01:51:36 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:50.856 01:51:36 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:40:50.856 01:51:36 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:40:50.856 01:51:36 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:40:50.856 01:51:36 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:40:50.856 01:51:36 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:50.856 01:51:36 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:40:50.857 01:51:36 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:40:50.857 01:51:36 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:50.857 01:51:36 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:50.857 01:51:36 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:40:50.857 01:51:36 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:50.857 01:51:36 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:40:50.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:50.857 --rc genhtml_branch_coverage=1 00:40:50.857 --rc genhtml_function_coverage=1 00:40:50.857 --rc genhtml_legend=1 00:40:50.857 --rc geninfo_all_blocks=1 00:40:50.857 --rc geninfo_unexecuted_blocks=1 00:40:50.857 00:40:50.857 ' 00:40:50.857 01:51:36 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:40:50.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:50.857 --rc genhtml_branch_coverage=1 00:40:50.857 --rc genhtml_function_coverage=1 00:40:50.857 --rc genhtml_legend=1 00:40:50.857 --rc geninfo_all_blocks=1 00:40:50.857 --rc geninfo_unexecuted_blocks=1 00:40:50.857 00:40:50.857 ' 00:40:50.857 01:51:36 nvmf_dif -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:40:50.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:50.857 --rc genhtml_branch_coverage=1 00:40:50.857 --rc genhtml_function_coverage=1 00:40:50.857 --rc genhtml_legend=1 00:40:50.857 --rc geninfo_all_blocks=1 00:40:50.857 --rc geninfo_unexecuted_blocks=1 00:40:50.857 00:40:50.857 ' 00:40:50.857 01:51:36 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:40:50.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:50.857 --rc genhtml_branch_coverage=1 00:40:50.857 --rc genhtml_function_coverage=1 00:40:50.857 --rc genhtml_legend=1 00:40:50.857 --rc geninfo_all_blocks=1 00:40:50.857 --rc geninfo_unexecuted_blocks=1 00:40:50.857 00:40:50.857 ' 00:40:50.857 01:51:36 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:50.857 01:51:36 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:40:50.857 01:51:36 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:50.857 01:51:36 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:50.857 01:51:36 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:50.857 01:51:36 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:50.857 01:51:36 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:50.857 01:51:36 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:50.857 01:51:36 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:50.857 01:51:36 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:50.857 01:51:36 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:50.857 01:51:36 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:50.857 01:51:36 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:50.857 01:51:36 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:50.857 01:51:36 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:50.857 01:51:36 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:50.857 01:51:36 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:50.857 01:51:36 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:50.857 01:51:36 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:50.857 01:51:36 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:40:50.857 01:51:36 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:50.857 01:51:36 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:50.857 01:51:36 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:50.857 01:51:36 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:50.857 01:51:36 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:50.857 01:51:36 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:50.857 01:51:36 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:40:50.857 01:51:36 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:50.857 01:51:36 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:40:50.857 01:51:36 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:50.857 01:51:36 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:50.857 01:51:36 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:50.857 01:51:36 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:50.857 01:51:36 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:50.857 01:51:36 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:50.857 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:50.857 01:51:36 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:50.857 01:51:36 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:50.857 01:51:36 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:50.857 01:51:36 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:40:50.857 01:51:36 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:40:50.857 01:51:36 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:40:50.857 01:51:36 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:40:50.857 01:51:36 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:40:50.857 01:51:36 nvmf_dif -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:40:50.857 01:51:36 nvmf_dif -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:50.857 01:51:36 nvmf_dif -- nvmf/common.sh@474 -- # prepare_net_devs 00:40:50.857 01:51:36 nvmf_dif -- nvmf/common.sh@436 -- # local -g is_hw=no 00:40:50.857 01:51:36 nvmf_dif -- nvmf/common.sh@438 -- # remove_spdk_ns 00:40:50.857 01:51:36 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:50.857 01:51:36 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:50.857 01:51:36 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:50.857 01:51:36 nvmf_dif -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:40:50.857 01:51:36 nvmf_dif -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:40:50.857 01:51:36 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:40:50.857 01:51:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:52.758 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:52.758 
01:51:38 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:52.758 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:52.758 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:52.758 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@440 -- # is_hw=yes 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:52.758 01:51:38 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:52.759 01:51:38 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:52.759 01:51:38 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:52.759 01:51:38 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:52.759 01:51:38 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:52.759 01:51:38 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:52.759 01:51:38 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:52.759 01:51:38 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:52.759 01:51:38 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:52.759 01:51:38 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:52.759 01:51:38 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:52.759 01:51:38 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:52.759 01:51:38 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:52.759 01:51:38 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:52.759 01:51:38 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:52.759 01:51:38 nvmf_dif -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:52.759 01:51:38 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:52.759 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:52.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:40:52.759 00:40:52.759 --- 10.0.0.2 ping statistics --- 00:40:52.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:52.759 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:40:52.759 01:51:38 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:52.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
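Editor's note: the namespace plumbing above is easier to follow pulled out of the trace. The commands this run executed (interface names cvl_0_0/cvl_0_1 are specific to this node) boil down to:

  # condensed view of the nvmf_tcp_init steps traced above
  ip netns add cvl_0_0_ns_spdk                    # target side gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator/host side address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP into the host side
  ping -c 1 10.0.0.2                              # host -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

With this in place the target listens on 10.0.0.2:4420 inside cvl_0_0_ns_spdk while fio on the host reaches it over cvl_0_1; the ping exchange whose output continues below verifies both directions.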
00:40:52.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:40:52.759 00:40:52.759 --- 10.0.0.1 ping statistics --- 00:40:52.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:52.759 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:40:52.759 01:51:38 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:52.759 01:51:38 nvmf_dif -- nvmf/common.sh@448 -- # return 0 00:40:52.759 01:51:38 nvmf_dif -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:40:52.759 01:51:38 nvmf_dif -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:40:53.693 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:40:53.693 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:40:53.693 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:40:53.693 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:40:53.693 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:40:53.952 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:40:53.952 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:40:53.952 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:40:53.952 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:40:53.952 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:40:53.952 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:40:53.952 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:40:53.952 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:40:53.952 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:40:53.952 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:40:53.952 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:40:53.952 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:40:53.952 01:51:39 nvmf_dif -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:53.952 01:51:39 nvmf_dif -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:40:53.952 01:51:39 nvmf_dif -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:40:53.952 01:51:39 nvmf_dif -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:53.952 01:51:39 nvmf_dif -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:40:53.952 01:51:39 nvmf_dif -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:40:53.952 01:51:39 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:40:53.952 01:51:39 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:40:53.952 01:51:39 nvmf_dif -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:40:53.952 01:51:39 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:40:53.952 01:51:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:53.952 01:51:39 nvmf_dif -- nvmf/common.sh@507 -- # nvmfpid=1824215 00:40:53.952 01:51:39 nvmf_dif -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:40:53.952 01:51:39 nvmf_dif -- nvmf/common.sh@508 -- # waitforlisten 1824215 00:40:53.952 01:51:39 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 1824215 ']' 00:40:53.952 01:51:39 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:53.952 01:51:39 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:53.952 01:51:39 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:40:53.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:53.952 01:51:39 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:53.952 01:51:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:54.211 [2024-10-13 01:51:39.541584] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:40:54.211 [2024-10-13 01:51:39.541663] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:54.211 [2024-10-13 01:51:39.604960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:54.211 [2024-10-13 01:51:39.651156] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:54.211 [2024-10-13 01:51:39.651217] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:54.211 [2024-10-13 01:51:39.651241] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:54.211 [2024-10-13 01:51:39.651251] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:54.211 [2024-10-13 01:51:39.651260] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:54.211 [2024-10-13 01:51:39.651860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:54.211 01:51:39 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:54.211 01:51:39 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:40:54.211 01:51:39 nvmf_dif -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:40:54.211 01:51:39 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:54.211 01:51:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:54.470 01:51:39 nvmf_dif -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:54.470 01:51:39 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:40:54.470 01:51:39 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:40:54.470 01:51:39 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.470 01:51:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:54.470 [2024-10-13 01:51:39.813641] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:54.470 01:51:39 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.470 01:51:39 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:40:54.470 01:51:39 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:40:54.470 01:51:39 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:54.470 01:51:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:40:54.470 ************************************ 00:40:54.470 START TEST fio_dif_1_default 00:40:54.470 ************************************ 00:40:54.470 01:51:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:40:54.470 01:51:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:40:54.470 01:51:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:40:54.470 01:51:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:40:54.470 01:51:39 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:40:54.470 01:51:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:40:54.470 01:51:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:40:54.470 01:51:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.470 01:51:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:54.470 bdev_null0 00:40:54.470 01:51:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.470 01:51:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:40:54.470 01:51:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.470 01:51:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:54.470 01:51:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.470 01:51:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:40:54.470 01:51:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.470 01:51:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:54.470 01:51:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.470 01:51:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:54.470 01:51:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:40:54.470 01:51:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:40:54.470 [2024-10-13 01:51:39.874016] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:54.470 01:51:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:40:54.470 01:51:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:40:54.470 01:51:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:40:54.470 01:51:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:54.470 01:51:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:40:54.470 01:51:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:54.470 01:51:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # config=() 00:40:54.470 01:51:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:40:54.470 01:51:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:40:54.470 01:51:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # local subsystem config 00:40:54.470 01:51:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:54.470 01:51:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:40:54.470 01:51:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:40:54.470 01:51:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # for subsystem in 
"${@:-1}" 00:40:54.470 01:51:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:54.470 01:51:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:40:54.470 01:51:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:40:54.470 { 00:40:54.470 "params": { 00:40:54.470 "name": "Nvme$subsystem", 00:40:54.471 "trtype": "$TEST_TRANSPORT", 00:40:54.471 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:54.471 "adrfam": "ipv4", 00:40:54.471 "trsvcid": "$NVMF_PORT", 00:40:54.471 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:54.471 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:54.471 "hdgst": ${hdgst:-false}, 00:40:54.471 "ddgst": ${ddgst:-false} 00:40:54.471 }, 00:40:54.471 "method": "bdev_nvme_attach_controller" 00:40:54.471 } 00:40:54.471 EOF 00:40:54.471 )") 00:40:54.471 01:51:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:40:54.471 01:51:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:40:54.471 01:51:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:54.471 01:51:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # cat 00:40:54.471 01:51:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:54.471 01:51:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:40:54.471 01:51:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:40:54.471 01:51:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:40:54.471 01:51:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:54.471 01:51:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # jq . 
00:40:54.471 01:51:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@583 -- # IFS=, 00:40:54.471 01:51:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:40:54.471 "params": { 00:40:54.471 "name": "Nvme0", 00:40:54.471 "trtype": "tcp", 00:40:54.471 "traddr": "10.0.0.2", 00:40:54.471 "adrfam": "ipv4", 00:40:54.471 "trsvcid": "4420", 00:40:54.471 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:54.471 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:54.471 "hdgst": false, 00:40:54.471 "ddgst": false 00:40:54.471 }, 00:40:54.471 "method": "bdev_nvme_attach_controller" 00:40:54.471 }' 00:40:54.471 01:51:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:40:54.471 01:51:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:40:54.471 01:51:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:54.471 01:51:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:40:54.471 01:51:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:40:54.471 01:51:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:54.471 01:51:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:40:54.471 01:51:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:40:54.471 01:51:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:54.471 01:51:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:40:54.729 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:40:54.729 fio-3.35 00:40:54.729 Starting 1 thread 00:41:06.924 00:41:06.924 filename0: (groupid=0, jobs=1): err= 0: pid=1824443: Sun Oct 13 01:51:50 2024 00:41:06.924 read: IOPS=191, BW=767KiB/s (785kB/s)(7680KiB/10014msec) 00:41:06.924 slat (nsec): min=5629, max=87153, avg=8782.58, stdev=3123.09 00:41:06.924 clat (usec): min=499, max=44600, avg=20834.33, stdev=20425.99 00:41:06.924 lat (usec): min=506, max=44629, avg=20843.12, stdev=20425.78 00:41:06.924 clat percentiles (usec): 00:41:06.924 | 1.00th=[ 545], 5.00th=[ 562], 10.00th=[ 570], 20.00th=[ 578], 00:41:06.924 | 30.00th=[ 586], 40.00th=[ 594], 50.00th=[ 676], 60.00th=[41157], 00:41:06.924 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:41:06.924 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:41:06.924 | 99.99th=[44827] 00:41:06.924 bw ( KiB/s): min= 672, max= 832, per=99.88%, avg=766.40, stdev=38.11, samples=20 00:41:06.924 iops : min= 168, max= 208, avg=191.60, stdev= 9.53, samples=20 00:41:06.924 lat (usec) : 500=0.05%, 750=50.16%, 1000=0.21% 00:41:06.924 lat (msec) : 50=49.58% 00:41:06.924 cpu : usr=91.05%, sys=8.61%, ctx=15, majf=0, minf=274 00:41:06.924 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:06.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.924 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:06.924 issued rwts: total=1920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:06.924 latency : target=0, window=0, percentile=100.00%, depth=4 
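Editor's note: before the run-status summary below, it helps to spell out how this I/O was driven. fio runs with SPDK's bdev plugin preloaded and is fed a JSON config (the printf'ed blob above) that attaches an NVMe-oF/TCP controller to the subsystem created earlier. A hand-typed equivalent, assuming the standard SPDK JSON config wrapper, the Nvme0n1 bdev name that bdev_nvme_attach_controller yields for namespace 1, and fio's thread mode, which the plugin expects (the real test assembles the JSON on the fly and passes it via /dev/fd/62):

  # contents of a hypothetical /tmp/nvme0.json
  {
    "subsystems": [ { "subsystem": "bdev", "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                  "adrfam": "ipv4", "trsvcid": "4420",
                  "subnqn": "nqn.2016-06.io.spdk:cnode0",
                  "hostnqn": "nqn.2016-06.io.spdk:host0",
                  "hdgst": false, "ddgst": false } } ] } ]
  }

  # run fio against the attached namespace through the bdev engine
  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --name=filename0 --ioengine=spdk_bdev --spdk_json_conf=/tmp/nvme0.json \
    --thread=1 --filename=Nvme0n1 --rw=randread --bs=4k --iodepth=4

The job parameters mirror the filename0 header above (randread, 4 KiB blocks, queue depth 4, one thread); the per-job statistics already shown and the run-status line that follows are the output of exactly this kind of invocation.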
00:41:06.924 00:41:06.924 Run status group 0 (all jobs): 00:41:06.924 READ: bw=767KiB/s (785kB/s), 767KiB/s-767KiB/s (785kB/s-785kB/s), io=7680KiB (7864kB), run=10014-10014msec 00:41:06.924 01:51:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:41:06.924 01:51:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:41:06.924 01:51:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:41:06.924 01:51:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:06.924 01:51:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:41:06.924 01:51:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:06.924 01:51:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:06.924 01:51:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:06.924 01:51:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:06.924 01:51:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:06.924 01:51:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:06.924 01:51:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:06.924 01:51:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:06.924 00:41:06.924 real 0m11.220s 00:41:06.924 user 0m10.362s 00:41:06.924 sys 0m1.138s 00:41:06.924 01:51:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:06.924 01:51:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:06.924 ************************************ 00:41:06.924 END TEST fio_dif_1_default 00:41:06.924 ************************************ 00:41:06.924 01:51:51 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:41:06.924 01:51:51 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:06.924 01:51:51 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:06.924 01:51:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:06.924 ************************************ 00:41:06.924 START TEST fio_dif_1_multi_subsystems 00:41:06.924 ************************************ 00:41:06.924 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:41:06.924 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:41:06.924 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:41:06.924 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:41:06.924 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:41:06.924 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:41:06.924 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:41:06.924 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:06.924 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:06.924 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:06.924 bdev_null0 00:41:06.924 01:51:51 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:06.924 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:06.924 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:06.924 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:06.924 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:06.924 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:06.924 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:06.924 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:06.924 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:06.924 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:06.924 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:06.924 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:06.924 [2024-10-13 01:51:51.136680] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:06.924 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:06.924 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:41:06.924 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:41:06.924 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:41:06.924 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:41:06.924 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:06.924 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:06.924 bdev_null1 00:41:06.924 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:06.924 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # config=() 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # local subsystem config 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:06.925 { 00:41:06.925 "params": { 00:41:06.925 "name": "Nvme$subsystem", 00:41:06.925 "trtype": "$TEST_TRANSPORT", 00:41:06.925 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:06.925 "adrfam": "ipv4", 00:41:06.925 "trsvcid": "$NVMF_PORT", 00:41:06.925 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:06.925 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:06.925 "hdgst": ${hdgst:-false}, 00:41:06.925 "ddgst": ${ddgst:-false} 00:41:06.925 }, 00:41:06.925 "method": "bdev_nvme_attach_controller" 00:41:06.925 } 00:41:06.925 EOF 00:41:06.925 )") 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( 
file <= files )) 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:06.925 { 00:41:06.925 "params": { 00:41:06.925 "name": "Nvme$subsystem", 00:41:06.925 "trtype": "$TEST_TRANSPORT", 00:41:06.925 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:06.925 "adrfam": "ipv4", 00:41:06.925 "trsvcid": "$NVMF_PORT", 00:41:06.925 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:06.925 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:06.925 "hdgst": ${hdgst:-false}, 00:41:06.925 "ddgst": ${ddgst:-false} 00:41:06.925 }, 00:41:06.925 "method": "bdev_nvme_attach_controller" 00:41:06.925 } 00:41:06.925 EOF 00:41:06.925 )") 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # jq . 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@583 -- # IFS=, 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:41:06.925 "params": { 00:41:06.925 "name": "Nvme0", 00:41:06.925 "trtype": "tcp", 00:41:06.925 "traddr": "10.0.0.2", 00:41:06.925 "adrfam": "ipv4", 00:41:06.925 "trsvcid": "4420", 00:41:06.925 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:06.925 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:06.925 "hdgst": false, 00:41:06.925 "ddgst": false 00:41:06.925 }, 00:41:06.925 "method": "bdev_nvme_attach_controller" 00:41:06.925 },{ 00:41:06.925 "params": { 00:41:06.925 "name": "Nvme1", 00:41:06.925 "trtype": "tcp", 00:41:06.925 "traddr": "10.0.0.2", 00:41:06.925 "adrfam": "ipv4", 00:41:06.925 "trsvcid": "4420", 00:41:06.925 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:06.925 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:06.925 "hdgst": false, 00:41:06.925 "ddgst": false 00:41:06.925 }, 00:41:06.925 "method": "bdev_nvme_attach_controller" 00:41:06.925 }' 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 
-- # asan_lib= 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:06.925 01:51:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:06.925 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:06.925 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:06.925 fio-3.35 00:41:06.925 Starting 2 threads 00:41:16.890 00:41:16.890 filename0: (groupid=0, jobs=1): err= 0: pid=1825921: Sun Oct 13 01:52:02 2024 00:41:16.890 read: IOPS=193, BW=775KiB/s (794kB/s)(7776KiB/10032msec) 00:41:16.890 slat (nsec): min=7201, max=95888, avg=10488.24, stdev=5314.76 00:41:16.890 clat (usec): min=554, max=46345, avg=20608.03, stdev=20378.73 00:41:16.890 lat (usec): min=562, max=46381, avg=20618.51, stdev=20377.80 00:41:16.890 clat percentiles (usec): 00:41:16.890 | 1.00th=[ 570], 5.00th=[ 578], 10.00th=[ 594], 20.00th=[ 603], 00:41:16.890 | 30.00th=[ 619], 40.00th=[ 652], 50.00th=[ 1012], 60.00th=[41157], 00:41:16.890 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:41:16.890 | 99.00th=[42206], 99.50th=[42730], 99.90th=[46400], 99.95th=[46400], 00:41:16.890 | 99.99th=[46400] 00:41:16.890 bw ( KiB/s): min= 673, max= 896, per=50.38%, avg=776.05, stdev=46.17, samples=20 00:41:16.890 iops : min= 168, max= 224, avg=194.00, stdev=11.57, samples=20 00:41:16.890 lat (usec) : 750=45.37%, 1000=4.32% 00:41:16.890 lat (msec) : 2=1.34%, 50=48.97% 00:41:16.890 cpu : usr=96.77%, sys=2.93%, ctx=18, majf=0, minf=189 00:41:16.890 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:16.890 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:16.890 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:16.890 issued rwts: total=1944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:16.890 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:16.890 filename1: (groupid=0, jobs=1): err= 0: pid=1825922: Sun Oct 13 01:52:02 2024 00:41:16.890 read: IOPS=191, BW=765KiB/s (784kB/s)(7680KiB/10034msec) 00:41:16.890 slat (nsec): min=4803, max=50192, avg=12640.62, stdev=5819.61 00:41:16.890 clat (usec): min=584, max=47446, avg=20863.33, stdev=20357.17 00:41:16.890 lat (usec): min=598, max=47461, avg=20875.97, stdev=20356.01 00:41:16.890 clat percentiles (usec): 00:41:16.890 | 1.00th=[ 603], 5.00th=[ 611], 10.00th=[ 627], 20.00th=[ 635], 00:41:16.890 | 30.00th=[ 652], 40.00th=[ 676], 50.00th=[ 1057], 60.00th=[41157], 00:41:16.890 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:41:16.890 | 99.00th=[42206], 99.50th=[42206], 99.90th=[47449], 99.95th=[47449], 00:41:16.890 | 99.99th=[47449] 00:41:16.890 bw ( KiB/s): min= 672, max= 832, per=49.73%, avg=766.40, stdev=35.17, samples=20 00:41:16.890 iops : min= 168, max= 208, avg=191.60, stdev= 8.79, samples=20 00:41:16.890 lat (usec) : 750=43.23%, 1000=6.04% 00:41:16.890 lat (msec) : 2=1.15%, 50=49.58% 00:41:16.890 cpu : usr=97.11%, sys=2.38%, ctx=81, majf=0, minf=112 00:41:16.890 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:16.890 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:16.890 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:16.890 issued rwts: total=1920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:16.890 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:16.890 00:41:16.890 Run status group 0 (all jobs): 00:41:16.890 READ: bw=1540KiB/s (1577kB/s), 765KiB/s-775KiB/s (784kB/s-794kB/s), io=15.1MiB (15.8MB), run=10032-10034msec 00:41:17.149 01:52:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:41:17.149 01:52:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:41:17.149 01:52:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:41:17.149 01:52:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:17.149 01:52:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:41:17.149 01:52:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:17.149 01:52:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:17.149 01:52:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:17.149 01:52:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:17.149 01:52:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:17.149 01:52:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:17.149 01:52:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:17.149 01:52:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:17.149 01:52:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:41:17.149 01:52:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:17.149 01:52:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:41:17.149 01:52:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:17.149 01:52:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:17.149 01:52:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:17.149 01:52:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:17.149 01:52:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:17.149 01:52:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:17.149 01:52:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:17.149 01:52:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:17.149 00:41:17.149 real 0m11.420s 00:41:17.149 user 0m20.837s 00:41:17.149 sys 0m0.830s 00:41:17.149 01:52:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:17.149 01:52:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:17.149 ************************************ 00:41:17.149 END TEST fio_dif_1_multi_subsystems 00:41:17.149 ************************************ 00:41:17.149 01:52:02 nvmf_dif -- 
target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:41:17.149 01:52:02 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:17.149 01:52:02 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:17.149 01:52:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:17.149 ************************************ 00:41:17.149 START TEST fio_dif_rand_params 00:41:17.149 ************************************ 00:41:17.149 01:52:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:41:17.149 01:52:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:41:17.149 01:52:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:41:17.149 01:52:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:41:17.149 01:52:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:41:17.149 01:52:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:41:17.149 01:52:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:41:17.149 01:52:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:41:17.149 01:52:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:41:17.149 01:52:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:17.149 01:52:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:17.149 01:52:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:17.149 01:52:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:17.149 01:52:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:41:17.149 01:52:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:17.149 01:52:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:17.149 bdev_null0 00:41:17.149 01:52:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:17.149 01:52:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:17.149 01:52:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:17.149 01:52:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:17.149 01:52:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:17.149 01:52:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:17.149 01:52:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:17.149 01:52:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:17.149 01:52:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:17.149 01:52:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:17.149 01:52:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:17.149 01:52:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:17.149 [2024-10-13 01:52:02.599974] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 
port 4420 *** 00:41:17.149 01:52:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:17.149 01:52:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:41:17.149 01:52:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:41:17.149 01:52:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:17.149 01:52:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:41:17.149 01:52:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:41:17.149 01:52:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:17.149 01:52:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:17.149 01:52:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:17.149 { 00:41:17.149 "params": { 00:41:17.149 "name": "Nvme$subsystem", 00:41:17.149 "trtype": "$TEST_TRANSPORT", 00:41:17.149 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:17.149 "adrfam": "ipv4", 00:41:17.149 "trsvcid": "$NVMF_PORT", 00:41:17.149 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:17.149 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:17.149 "hdgst": ${hdgst:-false}, 00:41:17.149 "ddgst": ${ddgst:-false} 00:41:17.149 }, 00:41:17.149 "method": "bdev_nvme_attach_controller" 00:41:17.149 } 00:41:17.149 EOF 00:41:17.149 )") 00:41:17.149 01:52:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:17.149 01:52:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:17.150 01:52:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:17.150 01:52:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:17.150 01:52:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:17.150 01:52:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:17.150 01:52:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:17.150 01:52:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:17.150 01:52:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:41:17.150 01:52:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:41:17.150 01:52:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:17.150 01:52:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:17.150 01:52:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:17.150 01:52:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:17.150 01:52:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:17.150 01:52:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:41:17.150 01:52:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:17.150 01:52:02 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@582 -- # jq . 00:41:17.150 01:52:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:41:17.150 01:52:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:41:17.150 "params": { 00:41:17.150 "name": "Nvme0", 00:41:17.150 "trtype": "tcp", 00:41:17.150 "traddr": "10.0.0.2", 00:41:17.150 "adrfam": "ipv4", 00:41:17.150 "trsvcid": "4420", 00:41:17.150 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:17.150 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:17.150 "hdgst": false, 00:41:17.150 "ddgst": false 00:41:17.150 }, 00:41:17.150 "method": "bdev_nvme_attach_controller" 00:41:17.150 }' 00:41:17.150 01:52:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:17.150 01:52:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:17.150 01:52:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:17.150 01:52:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:17.150 01:52:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:41:17.150 01:52:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:17.150 01:52:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:17.150 01:52:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:17.150 01:52:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:17.150 01:52:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:17.408 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:41:17.408 ... 
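Editor's note: the fio_dif_rand_params preamble above rebuilds the target with a DIF-type-3 null bdev behind the TCP transport that was created earlier with --dif-insert-or-strip. The same state can be reproduced by hand against the running nvmf_tgt; a sketch, assuming rpc_cmd in this run resolves to scripts/rpc.py talking to the default /var/tmp/spdk.sock:

  # the RPCs behind the rpc_cmd calls traced above, issued directly
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # uses /var/tmp/spdk.sock by default
  $RPC nvmf_create_transport -t tcp -o --dif-insert-or-strip             # done once, earlier in the run
  $RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3      # 64 MB null bdev, 512 B blocks + 16 B metadata
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

With the DIF-type-3 namespace exported, the three 128 KiB/QD3 fio threads whose startup continues below exercise the target's DIF insert/strip path over TCP.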
00:41:17.408 fio-3.35 00:41:17.408 Starting 3 threads 00:41:23.966 00:41:23.966 filename0: (groupid=0, jobs=1): err= 0: pid=1827238: Sun Oct 13 01:52:08 2024 00:41:23.966 read: IOPS=172, BW=21.5MiB/s (22.6MB/s)(109MiB/5048msec) 00:41:23.966 slat (nsec): min=4897, max=58999, avg=16443.19, stdev=4802.78 00:41:23.966 clat (usec): min=6673, max=92993, avg=17354.02, stdev=13310.06 00:41:23.966 lat (usec): min=6685, max=93006, avg=17370.46, stdev=13310.00 00:41:23.966 clat percentiles (usec): 00:41:23.966 | 1.00th=[ 7570], 5.00th=[ 9241], 10.00th=[10683], 20.00th=[11731], 00:41:23.966 | 30.00th=[12256], 40.00th=[12649], 50.00th=[13173], 60.00th=[13566], 00:41:23.966 | 70.00th=[14091], 80.00th=[15270], 90.00th=[49021], 95.00th=[52691], 00:41:23.966 | 99.00th=[53740], 99.50th=[90702], 99.90th=[92799], 99.95th=[92799], 00:41:23.966 | 99.99th=[92799] 00:41:23.966 bw ( KiB/s): min=13056, max=29184, per=27.39%, avg=22169.60, stdev=4531.83, samples=10 00:41:23.966 iops : min= 102, max= 228, avg=173.20, stdev=35.40, samples=10 00:41:23.966 lat (msec) : 10=7.59%, 20=81.36%, 50=2.53%, 100=8.52% 00:41:23.966 cpu : usr=94.83%, sys=4.74%, ctx=11, majf=0, minf=105 00:41:23.966 IO depths : 1=1.2%, 2=98.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:23.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:23.966 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:23.966 issued rwts: total=869,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:23.966 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:23.966 filename0: (groupid=0, jobs=1): err= 0: pid=1827239: Sun Oct 13 01:52:08 2024 00:41:23.966 read: IOPS=240, BW=30.0MiB/s (31.5MB/s)(150MiB/5005msec) 00:41:23.966 slat (nsec): min=8391, max=88646, avg=20755.13, stdev=5993.37 00:41:23.966 clat (usec): min=4922, max=57107, avg=12453.19, stdev=6562.31 00:41:23.966 lat (usec): min=4936, max=57149, avg=12473.95, stdev=6561.79 00:41:23.966 clat percentiles (usec): 00:41:23.966 | 1.00th=[ 5604], 5.00th=[ 8029], 10.00th=[ 8586], 20.00th=[ 9241], 00:41:23.966 | 30.00th=[10159], 40.00th=[11600], 50.00th=[12125], 60.00th=[12518], 00:41:23.966 | 70.00th=[12911], 80.00th=[13304], 90.00th=[13829], 95.00th=[14615], 00:41:23.966 | 99.00th=[52167], 99.50th=[52691], 99.90th=[56886], 99.95th=[56886], 00:41:23.966 | 99.99th=[56886] 00:41:23.966 bw ( KiB/s): min=17920, max=36864, per=37.99%, avg=30745.60, stdev=5384.05, samples=10 00:41:23.966 iops : min= 140, max= 288, avg=240.20, stdev=42.06, samples=10 00:41:23.966 lat (msec) : 10=29.26%, 20=68.25%, 50=0.67%, 100=1.83% 00:41:23.966 cpu : usr=93.80%, sys=5.02%, ctx=121, majf=0, minf=171 00:41:23.966 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:23.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:23.966 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:23.966 issued rwts: total=1203,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:23.966 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:23.966 filename0: (groupid=0, jobs=1): err= 0: pid=1827240: Sun Oct 13 01:52:08 2024 00:41:23.966 read: IOPS=221, BW=27.7MiB/s (29.1MB/s)(140MiB/5046msec) 00:41:23.966 slat (nsec): min=8366, max=74976, avg=16577.08, stdev=4426.95 00:41:23.966 clat (usec): min=4279, max=92196, avg=13455.05, stdev=6767.57 00:41:23.966 lat (usec): min=4292, max=92230, avg=13471.63, stdev=6767.97 00:41:23.966 clat percentiles (usec): 00:41:23.966 | 1.00th=[ 4948], 5.00th=[ 5407], 10.00th=[ 8717], 
20.00th=[ 9765], 00:41:23.966 | 30.00th=[10814], 40.00th=[12518], 50.00th=[13435], 60.00th=[13829], 00:41:23.966 | 70.00th=[14353], 80.00th=[15139], 90.00th=[17171], 95.00th=[18220], 00:41:23.966 | 99.00th=[52167], 99.50th=[54789], 99.90th=[63177], 99.95th=[91751], 00:41:23.966 | 99.99th=[91751] 00:41:23.966 bw ( KiB/s): min=24576, max=33792, per=35.36%, avg=28620.80, stdev=2361.13, samples=10 00:41:23.966 iops : min= 192, max= 264, avg=223.60, stdev=18.45, samples=10 00:41:23.966 lat (msec) : 10=22.50%, 20=75.45%, 50=0.54%, 100=1.52% 00:41:23.966 cpu : usr=94.43%, sys=4.60%, ctx=34, majf=0, minf=77 00:41:23.966 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:23.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:23.966 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:23.966 issued rwts: total=1120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:23.966 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:23.966 00:41:23.966 Run status group 0 (all jobs): 00:41:23.966 READ: bw=79.0MiB/s (82.9MB/s), 21.5MiB/s-30.0MiB/s (22.6MB/s-31.5MB/s), io=399MiB (418MB), run=5005-5048msec 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- 
# rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:23.966 bdev_null0 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:23.966 [2024-10-13 01:52:08.718135] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:23.966 bdev_null1 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:23.966 bdev_null2 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:41:23.966 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:23.967 { 00:41:23.967 "params": { 00:41:23.967 "name": "Nvme$subsystem", 00:41:23.967 "trtype": "$TEST_TRANSPORT", 00:41:23.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:23.967 "adrfam": "ipv4", 
00:41:23.967 "trsvcid": "$NVMF_PORT", 00:41:23.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:23.967 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:23.967 "hdgst": ${hdgst:-false}, 00:41:23.967 "ddgst": ${ddgst:-false} 00:41:23.967 }, 00:41:23.967 "method": "bdev_nvme_attach_controller" 00:41:23.967 } 00:41:23.967 EOF 00:41:23.967 )") 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:23.967 { 00:41:23.967 "params": { 00:41:23.967 "name": "Nvme$subsystem", 00:41:23.967 "trtype": "$TEST_TRANSPORT", 00:41:23.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:23.967 "adrfam": "ipv4", 00:41:23.967 "trsvcid": "$NVMF_PORT", 00:41:23.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:23.967 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:23.967 "hdgst": ${hdgst:-false}, 00:41:23.967 "ddgst": ${ddgst:-false} 00:41:23.967 }, 00:41:23.967 "method": "bdev_nvme_attach_controller" 00:41:23.967 } 00:41:23.967 EOF 00:41:23.967 )") 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params 
-- target/dif.sh@72 -- # (( file <= files )) 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:23.967 { 00:41:23.967 "params": { 00:41:23.967 "name": "Nvme$subsystem", 00:41:23.967 "trtype": "$TEST_TRANSPORT", 00:41:23.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:23.967 "adrfam": "ipv4", 00:41:23.967 "trsvcid": "$NVMF_PORT", 00:41:23.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:23.967 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:23.967 "hdgst": ${hdgst:-false}, 00:41:23.967 "ddgst": ${ddgst:-false} 00:41:23.967 }, 00:41:23.967 "method": "bdev_nvme_attach_controller" 00:41:23.967 } 00:41:23.967 EOF 00:41:23.967 )") 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:41:23.967 "params": { 00:41:23.967 "name": "Nvme0", 00:41:23.967 "trtype": "tcp", 00:41:23.967 "traddr": "10.0.0.2", 00:41:23.967 "adrfam": "ipv4", 00:41:23.967 "trsvcid": "4420", 00:41:23.967 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:23.967 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:23.967 "hdgst": false, 00:41:23.967 "ddgst": false 00:41:23.967 }, 00:41:23.967 "method": "bdev_nvme_attach_controller" 00:41:23.967 },{ 00:41:23.967 "params": { 00:41:23.967 "name": "Nvme1", 00:41:23.967 "trtype": "tcp", 00:41:23.967 "traddr": "10.0.0.2", 00:41:23.967 "adrfam": "ipv4", 00:41:23.967 "trsvcid": "4420", 00:41:23.967 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:23.967 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:23.967 "hdgst": false, 00:41:23.967 "ddgst": false 00:41:23.967 }, 00:41:23.967 "method": "bdev_nvme_attach_controller" 00:41:23.967 },{ 00:41:23.967 "params": { 00:41:23.967 "name": "Nvme2", 00:41:23.967 "trtype": "tcp", 00:41:23.967 "traddr": "10.0.0.2", 00:41:23.967 "adrfam": "ipv4", 00:41:23.967 "trsvcid": "4420", 00:41:23.967 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:41:23.967 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:41:23.967 "hdgst": false, 00:41:23.967 "ddgst": false 00:41:23.967 }, 00:41:23.967 "method": "bdev_nvme_attach_controller" 00:41:23.967 }' 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:23.967 01:52:08 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:23.967 01:52:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:23.967 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:23.967 ... 00:41:23.967 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:23.967 ... 00:41:23.967 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:23.967 ... 00:41:23.967 fio-3.35 00:41:23.967 Starting 24 threads 00:41:36.175 00:41:36.175 filename0: (groupid=0, jobs=1): err= 0: pid=1828097: Sun Oct 13 01:52:20 2024 00:41:36.175 read: IOPS=67, BW=271KiB/s (278kB/s)(2752KiB/10140msec) 00:41:36.175 slat (usec): min=7, max=118, avg=55.18, stdev=22.36 00:41:36.175 clat (msec): min=97, max=370, avg=235.37, stdev=51.26 00:41:36.175 lat (msec): min=97, max=371, avg=235.42, stdev=51.27 00:41:36.175 clat percentiles (msec): 00:41:36.175 | 1.00th=[ 99], 5.00th=[ 163], 10.00th=[ 171], 20.00th=[ 184], 00:41:36.175 | 30.00th=[ 201], 40.00th=[ 213], 50.00th=[ 259], 60.00th=[ 266], 00:41:36.175 | 70.00th=[ 275], 80.00th=[ 279], 90.00th=[ 284], 95.00th=[ 305], 00:41:36.175 | 99.00th=[ 351], 99.50th=[ 363], 99.90th=[ 372], 99.95th=[ 372], 00:41:36.175 | 99.99th=[ 372] 00:41:36.175 bw ( KiB/s): min= 144, max= 384, per=4.34%, avg=268.80, stdev=53.85, samples=20 00:41:36.175 iops : min= 36, max= 96, avg=67.20, stdev=13.46, samples=20 00:41:36.175 lat (msec) : 100=2.33%, 250=45.64%, 500=52.03% 00:41:36.175 cpu : usr=98.30%, sys=1.19%, ctx=16, majf=0, minf=36 00:41:36.175 IO depths : 1=4.1%, 2=10.3%, 4=25.0%, 8=52.2%, 16=8.4%, 32=0.0%, >=64=0.0% 00:41:36.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.175 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.175 issued rwts: total=688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:36.175 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:36.175 filename0: (groupid=0, jobs=1): err= 0: pid=1828098: Sun Oct 13 01:52:20 2024 00:41:36.175 read: IOPS=59, BW=240KiB/s (246kB/s)(2424KiB/10105msec) 00:41:36.175 slat (nsec): min=8641, max=86832, avg=30162.64, stdev=15405.55 00:41:36.175 clat (msec): min=150, max=380, avg=266.21, stdev=41.32 00:41:36.175 lat (msec): min=150, max=380, avg=266.24, stdev=41.32 00:41:36.175 clat percentiles (msec): 00:41:36.175 | 1.00th=[ 165], 5.00th=[ 184], 10.00th=[ 194], 20.00th=[ 234], 00:41:36.175 | 30.00th=[ 262], 40.00th=[ 271], 50.00th=[ 275], 60.00th=[ 279], 00:41:36.175 | 70.00th=[ 279], 80.00th=[ 296], 90.00th=[ 305], 95.00th=[ 317], 00:41:36.175 | 99.00th=[ 359], 99.50th=[ 368], 99.90th=[ 380], 99.95th=[ 380], 00:41:36.175 | 99.99th=[ 380] 00:41:36.175 bw ( KiB/s): min= 128, max= 256, per=3.81%, avg=236.00, stdev=44.62, samples=20 00:41:36.175 iops : min= 32, max= 64, avg=59.00, stdev=11.15, samples=20 00:41:36.175 lat (msec) : 250=22.11%, 500=77.89% 00:41:36.175 cpu : usr=98.40%, sys=1.14%, ctx=28, majf=0, minf=40 00:41:36.175 IO depths : 1=3.5%, 2=9.6%, 4=24.6%, 8=53.5%, 16=8.9%, 32=0.0%, >=64=0.0% 00:41:36.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:41:36.175 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.175 issued rwts: total=606,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:36.175 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:36.175 filename0: (groupid=0, jobs=1): err= 0: pid=1828099: Sun Oct 13 01:52:20 2024 00:41:36.175 read: IOPS=60, BW=240KiB/s (246kB/s)(2432KiB/10122msec) 00:41:36.175 slat (usec): min=10, max=111, avg=67.77, stdev=20.28 00:41:36.175 clat (msec): min=131, max=405, avg=265.82, stdev=46.75 00:41:36.175 lat (msec): min=131, max=405, avg=265.89, stdev=46.76 00:41:36.175 clat percentiles (msec): 00:41:36.175 | 1.00th=[ 163], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 234], 00:41:36.175 | 30.00th=[ 262], 40.00th=[ 266], 50.00th=[ 275], 60.00th=[ 279], 00:41:36.175 | 70.00th=[ 284], 80.00th=[ 296], 90.00th=[ 305], 95.00th=[ 351], 00:41:36.175 | 99.00th=[ 393], 99.50th=[ 405], 99.90th=[ 405], 99.95th=[ 405], 00:41:36.175 | 99.99th=[ 405] 00:41:36.175 bw ( KiB/s): min= 128, max= 368, per=3.82%, avg=236.80, stdev=57.71, samples=20 00:41:36.175 iops : min= 32, max= 92, avg=59.20, stdev=14.43, samples=20 00:41:36.175 lat (msec) : 250=22.70%, 500=77.30% 00:41:36.175 cpu : usr=97.88%, sys=1.49%, ctx=60, majf=0, minf=22 00:41:36.175 IO depths : 1=3.3%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.2%, 32=0.0%, >=64=0.0% 00:41:36.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.175 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.175 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:36.175 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:36.175 filename0: (groupid=0, jobs=1): err= 0: pid=1828100: Sun Oct 13 01:52:20 2024 00:41:36.175 read: IOPS=60, BW=241KiB/s (246kB/s)(2432KiB/10109msec) 00:41:36.175 slat (nsec): min=4083, max=99321, avg=43416.25, stdev=30912.02 00:41:36.175 clat (msec): min=167, max=306, avg=265.61, stdev=34.24 00:41:36.175 lat (msec): min=167, max=306, avg=265.65, stdev=34.24 00:41:36.175 clat percentiles (msec): 00:41:36.175 | 1.00th=[ 167], 5.00th=[ 184], 10.00th=[ 203], 20.00th=[ 255], 00:41:36.175 | 30.00th=[ 264], 40.00th=[ 266], 50.00th=[ 275], 60.00th=[ 279], 00:41:36.175 | 70.00th=[ 284], 80.00th=[ 296], 90.00th=[ 300], 95.00th=[ 305], 00:41:36.175 | 99.00th=[ 309], 99.50th=[ 309], 99.90th=[ 309], 99.95th=[ 309], 00:41:36.175 | 99.99th=[ 309] 00:41:36.175 bw ( KiB/s): min= 128, max= 384, per=3.82%, avg=236.80, stdev=62.64, samples=20 00:41:36.175 iops : min= 32, max= 96, avg=59.20, stdev=15.66, samples=20 00:41:36.175 lat (msec) : 250=18.42%, 500=81.58% 00:41:36.175 cpu : usr=98.00%, sys=1.48%, ctx=39, majf=0, minf=22 00:41:36.175 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:36.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.175 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.175 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:36.175 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:36.175 filename0: (groupid=0, jobs=1): err= 0: pid=1828101: Sun Oct 13 01:52:20 2024 00:41:36.175 read: IOPS=83, BW=335KiB/s (343kB/s)(3392KiB/10134msec) 00:41:36.175 slat (nsec): min=4051, max=95695, avg=15150.01, stdev=14172.89 00:41:36.175 clat (msec): min=99, max=302, avg=189.69, stdev=38.60 00:41:36.175 lat (msec): min=99, max=302, avg=189.70, stdev=38.59 00:41:36.175 clat percentiles (msec): 00:41:36.175 | 1.00th=[ 101], 
5.00th=[ 127], 10.00th=[ 136], 20.00th=[ 163], 00:41:36.175 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 194], 00:41:36.175 | 70.00th=[ 199], 80.00th=[ 213], 90.00th=[ 239], 95.00th=[ 268], 00:41:36.175 | 99.00th=[ 288], 99.50th=[ 288], 99.90th=[ 305], 99.95th=[ 305], 00:41:36.175 | 99.99th=[ 305] 00:41:36.175 bw ( KiB/s): min= 256, max= 384, per=5.38%, avg=332.80, stdev=42.36, samples=20 00:41:36.175 iops : min= 64, max= 96, avg=83.20, stdev=10.59, samples=20 00:41:36.175 lat (msec) : 100=1.65%, 250=90.57%, 500=7.78% 00:41:36.175 cpu : usr=98.36%, sys=1.25%, ctx=43, majf=0, minf=39 00:41:36.175 IO depths : 1=0.4%, 2=1.2%, 4=8.0%, 8=77.8%, 16=12.6%, 32=0.0%, >=64=0.0% 00:41:36.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.175 complete : 0=0.0%, 4=89.1%, 8=6.0%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.175 issued rwts: total=848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:36.175 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:36.175 filename0: (groupid=0, jobs=1): err= 0: pid=1828102: Sun Oct 13 01:52:20 2024 00:41:36.175 read: IOPS=60, BW=240KiB/s (246kB/s)(2432KiB/10123msec) 00:41:36.175 slat (usec): min=6, max=128, avg=73.22, stdev=14.68 00:41:36.175 clat (msec): min=127, max=389, avg=265.77, stdev=38.85 00:41:36.175 lat (msec): min=127, max=389, avg=265.84, stdev=38.86 00:41:36.175 clat percentiles (msec): 00:41:36.175 | 1.00th=[ 178], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 243], 00:41:36.175 | 30.00th=[ 264], 40.00th=[ 266], 50.00th=[ 275], 60.00th=[ 279], 00:41:36.175 | 70.00th=[ 284], 80.00th=[ 296], 90.00th=[ 300], 95.00th=[ 305], 00:41:36.175 | 99.00th=[ 359], 99.50th=[ 376], 99.90th=[ 388], 99.95th=[ 388], 00:41:36.175 | 99.99th=[ 388] 00:41:36.175 bw ( KiB/s): min= 128, max= 384, per=3.82%, avg=236.80, stdev=59.78, samples=20 00:41:36.175 iops : min= 32, max= 96, avg=59.20, stdev=14.94, samples=20 00:41:36.175 lat (msec) : 250=20.07%, 500=79.93% 00:41:36.175 cpu : usr=97.47%, sys=1.65%, ctx=156, majf=0, minf=28 00:41:36.175 IO depths : 1=4.9%, 2=11.2%, 4=25.0%, 8=51.3%, 16=7.6%, 32=0.0%, >=64=0.0% 00:41:36.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.175 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.175 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:36.175 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:36.175 filename0: (groupid=0, jobs=1): err= 0: pid=1828103: Sun Oct 13 01:52:20 2024 00:41:36.175 read: IOPS=60, BW=242KiB/s (248kB/s)(2432KiB/10036msec) 00:41:36.175 slat (usec): min=9, max=119, avg=71.70, stdev=19.51 00:41:36.175 clat (msec): min=137, max=423, avg=263.52, stdev=47.72 00:41:36.175 lat (msec): min=137, max=423, avg=263.59, stdev=47.73 00:41:36.175 clat percentiles (msec): 00:41:36.175 | 1.00th=[ 138], 5.00th=[ 167], 10.00th=[ 186], 20.00th=[ 236], 00:41:36.175 | 30.00th=[ 264], 40.00th=[ 268], 50.00th=[ 275], 60.00th=[ 279], 00:41:36.175 | 70.00th=[ 284], 80.00th=[ 292], 90.00th=[ 296], 95.00th=[ 313], 00:41:36.175 | 99.00th=[ 401], 99.50th=[ 418], 99.90th=[ 422], 99.95th=[ 422], 00:41:36.175 | 99.99th=[ 422] 00:41:36.175 bw ( KiB/s): min= 128, max= 384, per=3.82%, avg=236.80, stdev=59.55, samples=20 00:41:36.175 iops : min= 32, max= 96, avg=59.20, stdev=14.89, samples=20 00:41:36.175 lat (msec) : 250=21.71%, 500=78.29% 00:41:36.175 cpu : usr=97.83%, sys=1.49%, ctx=51, majf=0, minf=26 00:41:36.175 IO depths : 1=4.6%, 2=10.9%, 4=25.0%, 8=51.6%, 16=7.9%, 32=0.0%, >=64=0.0% 
00:41:36.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.175 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.175 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:36.175 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:36.175 filename0: (groupid=0, jobs=1): err= 0: pid=1828104: Sun Oct 13 01:52:20 2024 00:41:36.175 read: IOPS=60, BW=241KiB/s (246kB/s)(2432KiB/10110msec) 00:41:36.175 slat (nsec): min=8883, max=87949, avg=33864.80, stdev=13032.43 00:41:36.175 clat (msec): min=134, max=387, avg=265.66, stdev=41.46 00:41:36.176 lat (msec): min=134, max=387, avg=265.69, stdev=41.46 00:41:36.176 clat percentiles (msec): 00:41:36.176 | 1.00th=[ 165], 5.00th=[ 184], 10.00th=[ 201], 20.00th=[ 232], 00:41:36.176 | 30.00th=[ 262], 40.00th=[ 271], 50.00th=[ 275], 60.00th=[ 275], 00:41:36.176 | 70.00th=[ 279], 80.00th=[ 296], 90.00th=[ 305], 95.00th=[ 317], 00:41:36.176 | 99.00th=[ 363], 99.50th=[ 376], 99.90th=[ 388], 99.95th=[ 388], 00:41:36.176 | 99.99th=[ 388] 00:41:36.176 bw ( KiB/s): min= 128, max= 256, per=3.82%, avg=236.80, stdev=42.68, samples=20 00:41:36.176 iops : min= 32, max= 64, avg=59.20, stdev=10.67, samples=20 00:41:36.176 lat (msec) : 250=21.38%, 500=78.62% 00:41:36.176 cpu : usr=98.45%, sys=1.15%, ctx=52, majf=0, minf=31 00:41:36.176 IO depths : 1=3.5%, 2=9.5%, 4=24.5%, 8=53.5%, 16=9.0%, 32=0.0%, >=64=0.0% 00:41:36.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.176 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.176 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:36.176 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:36.176 filename1: (groupid=0, jobs=1): err= 0: pid=1828105: Sun Oct 13 01:52:20 2024 00:41:36.176 read: IOPS=58, BW=234KiB/s (240kB/s)(2368KiB/10100msec) 00:41:36.176 slat (usec): min=9, max=106, avg=31.81, stdev=17.83 00:41:36.176 clat (msec): min=135, max=426, avg=270.65, stdev=46.10 00:41:36.176 lat (msec): min=135, max=426, avg=270.68, stdev=46.09 00:41:36.176 clat percentiles (msec): 00:41:36.176 | 1.00th=[ 157], 5.00th=[ 178], 10.00th=[ 188], 20.00th=[ 259], 00:41:36.176 | 30.00th=[ 266], 40.00th=[ 275], 50.00th=[ 275], 60.00th=[ 284], 00:41:36.176 | 70.00th=[ 288], 80.00th=[ 292], 90.00th=[ 305], 95.00th=[ 359], 00:41:36.176 | 99.00th=[ 405], 99.50th=[ 409], 99.90th=[ 426], 99.95th=[ 426], 00:41:36.176 | 99.99th=[ 426] 00:41:36.176 bw ( KiB/s): min= 128, max= 256, per=3.73%, avg=230.40, stdev=48.81, samples=20 00:41:36.176 iops : min= 32, max= 64, avg=57.60, stdev=12.20, samples=20 00:41:36.176 lat (msec) : 250=17.91%, 500=82.09% 00:41:36.176 cpu : usr=98.60%, sys=0.98%, ctx=15, majf=0, minf=34 00:41:36.176 IO depths : 1=3.4%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.1%, 32=0.0%, >=64=0.0% 00:41:36.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.176 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.176 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:36.176 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:36.176 filename1: (groupid=0, jobs=1): err= 0: pid=1828106: Sun Oct 13 01:52:20 2024 00:41:36.176 read: IOPS=83, BW=335KiB/s (343kB/s)(3392KiB/10133msec) 00:41:36.176 slat (nsec): min=5486, max=92154, avg=18123.88, stdev=18497.91 00:41:36.176 clat (msec): min=98, max=270, avg=191.02, stdev=29.26 00:41:36.176 lat (msec): min=98, max=270, avg=191.04, stdev=29.27 
00:41:36.176 clat percentiles (msec): 00:41:36.176 | 1.00th=[ 100], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 171], 00:41:36.176 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 186], 60.00th=[ 199], 00:41:36.176 | 70.00th=[ 207], 80.00th=[ 213], 90.00th=[ 228], 95.00th=[ 257], 00:41:36.176 | 99.00th=[ 271], 99.50th=[ 271], 99.90th=[ 271], 99.95th=[ 271], 00:41:36.176 | 99.99th=[ 271] 00:41:36.176 bw ( KiB/s): min= 256, max= 384, per=5.38%, avg=332.80, stdev=64.34, samples=20 00:41:36.176 iops : min= 64, max= 96, avg=83.20, stdev=16.08, samples=20 00:41:36.176 lat (msec) : 100=1.89%, 250=92.45%, 500=5.66% 00:41:36.176 cpu : usr=98.11%, sys=1.40%, ctx=39, majf=0, minf=37 00:41:36.176 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:36.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.176 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.176 issued rwts: total=848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:36.176 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:36.176 filename1: (groupid=0, jobs=1): err= 0: pid=1828107: Sun Oct 13 01:52:20 2024 00:41:36.176 read: IOPS=61, BW=246KiB/s (252kB/s)(2496KiB/10144msec) 00:41:36.176 slat (nsec): min=7025, max=64777, avg=27479.74, stdev=8365.53 00:41:36.176 clat (msec): min=96, max=387, avg=259.77, stdev=49.13 00:41:36.176 lat (msec): min=96, max=387, avg=259.80, stdev=49.13 00:41:36.176 clat percentiles (msec): 00:41:36.176 | 1.00th=[ 97], 5.00th=[ 163], 10.00th=[ 184], 20.00th=[ 218], 00:41:36.176 | 30.00th=[ 259], 40.00th=[ 271], 50.00th=[ 275], 60.00th=[ 275], 00:41:36.176 | 70.00th=[ 279], 80.00th=[ 296], 90.00th=[ 305], 95.00th=[ 317], 00:41:36.176 | 99.00th=[ 363], 99.50th=[ 376], 99.90th=[ 388], 99.95th=[ 388], 00:41:36.176 | 99.99th=[ 388] 00:41:36.176 bw ( KiB/s): min= 128, max= 368, per=3.94%, avg=243.20, stdev=53.85, samples=20 00:41:36.176 iops : min= 32, max= 92, avg=60.80, stdev=13.46, samples=20 00:41:36.176 lat (msec) : 100=2.24%, 250=21.47%, 500=76.28% 00:41:36.176 cpu : usr=98.05%, sys=1.32%, ctx=29, majf=0, minf=26 00:41:36.176 IO depths : 1=3.8%, 2=10.1%, 4=25.0%, 8=52.4%, 16=8.7%, 32=0.0%, >=64=0.0% 00:41:36.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.176 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.176 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:36.176 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:36.176 filename1: (groupid=0, jobs=1): err= 0: pid=1828108: Sun Oct 13 01:52:20 2024 00:41:36.176 read: IOPS=58, BW=234KiB/s (240kB/s)(2368KiB/10105msec) 00:41:36.176 slat (nsec): min=8951, max=98365, avg=51621.19, stdev=25014.71 00:41:36.176 clat (msec): min=164, max=412, avg=272.61, stdev=38.57 00:41:36.176 lat (msec): min=164, max=412, avg=272.66, stdev=38.57 00:41:36.176 clat percentiles (msec): 00:41:36.176 | 1.00th=[ 165], 5.00th=[ 184], 10.00th=[ 218], 20.00th=[ 259], 00:41:36.176 | 30.00th=[ 271], 40.00th=[ 271], 50.00th=[ 275], 60.00th=[ 275], 00:41:36.176 | 70.00th=[ 279], 80.00th=[ 296], 90.00th=[ 305], 95.00th=[ 317], 00:41:36.176 | 99.00th=[ 414], 99.50th=[ 414], 99.90th=[ 414], 99.95th=[ 414], 00:41:36.176 | 99.99th=[ 414] 00:41:36.176 bw ( KiB/s): min= 128, max= 256, per=3.73%, avg=230.40, stdev=52.53, samples=20 00:41:36.176 iops : min= 32, max= 64, avg=57.60, stdev=13.13, samples=20 00:41:36.176 lat (msec) : 250=13.51%, 500=86.49% 00:41:36.176 cpu : usr=98.04%, sys=1.44%, ctx=45, majf=0, minf=30 
00:41:36.176 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:41:36.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.176 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.176 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:36.176 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:36.176 filename1: (groupid=0, jobs=1): err= 0: pid=1828109: Sun Oct 13 01:52:20 2024 00:41:36.176 read: IOPS=60, BW=240KiB/s (246kB/s)(2432KiB/10123msec) 00:41:36.176 slat (usec): min=13, max=113, avg=73.28, stdev=16.57 00:41:36.176 clat (msec): min=117, max=405, avg=265.79, stdev=47.02 00:41:36.176 lat (msec): min=117, max=405, avg=265.86, stdev=47.03 00:41:36.176 clat percentiles (msec): 00:41:36.176 | 1.00th=[ 159], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 234], 00:41:36.176 | 30.00th=[ 262], 40.00th=[ 266], 50.00th=[ 275], 60.00th=[ 279], 00:41:36.176 | 70.00th=[ 284], 80.00th=[ 296], 90.00th=[ 305], 95.00th=[ 351], 00:41:36.176 | 99.00th=[ 393], 99.50th=[ 405], 99.90th=[ 405], 99.95th=[ 405], 00:41:36.176 | 99.99th=[ 405] 00:41:36.176 bw ( KiB/s): min= 128, max= 368, per=3.82%, avg=236.80, stdev=57.71, samples=20 00:41:36.176 iops : min= 32, max= 92, avg=59.20, stdev=14.43, samples=20 00:41:36.176 lat (msec) : 250=22.70%, 500=77.30% 00:41:36.176 cpu : usr=98.05%, sys=1.26%, ctx=67, majf=0, minf=28 00:41:36.176 IO depths : 1=3.1%, 2=9.4%, 4=25.0%, 8=53.1%, 16=9.4%, 32=0.0%, >=64=0.0% 00:41:36.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.176 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.176 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:36.176 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:36.176 filename1: (groupid=0, jobs=1): err= 0: pid=1828110: Sun Oct 13 01:52:20 2024 00:41:36.176 read: IOPS=59, BW=240KiB/s (246kB/s)(2424KiB/10105msec) 00:41:36.176 slat (nsec): min=9350, max=78803, avg=31519.97, stdev=12041.69 00:41:36.176 clat (msec): min=157, max=387, avg=266.20, stdev=41.84 00:41:36.176 lat (msec): min=157, max=387, avg=266.24, stdev=41.84 00:41:36.176 clat percentiles (msec): 00:41:36.176 | 1.00th=[ 159], 5.00th=[ 171], 10.00th=[ 188], 20.00th=[ 257], 00:41:36.176 | 30.00th=[ 262], 40.00th=[ 271], 50.00th=[ 275], 60.00th=[ 275], 00:41:36.176 | 70.00th=[ 279], 80.00th=[ 296], 90.00th=[ 305], 95.00th=[ 317], 00:41:36.176 | 99.00th=[ 363], 99.50th=[ 376], 99.90th=[ 388], 99.95th=[ 388], 00:41:36.176 | 99.99th=[ 388] 00:41:36.176 bw ( KiB/s): min= 128, max= 256, per=3.81%, avg=236.00, stdev=42.45, samples=20 00:41:36.176 iops : min= 32, max= 64, avg=59.00, stdev=10.61, samples=20 00:41:36.176 lat (msec) : 250=19.80%, 500=80.20% 00:41:36.176 cpu : usr=98.30%, sys=1.24%, ctx=27, majf=0, minf=30 00:41:36.176 IO depths : 1=3.3%, 2=9.6%, 4=25.1%, 8=53.0%, 16=9.1%, 32=0.0%, >=64=0.0% 00:41:36.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.176 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.176 issued rwts: total=606,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:36.176 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:36.176 filename1: (groupid=0, jobs=1): err= 0: pid=1828111: Sun Oct 13 01:52:20 2024 00:41:36.176 read: IOPS=58, BW=234KiB/s (240kB/s)(2368KiB/10099msec) 00:41:36.176 slat (usec): min=18, max=118, avg=68.13, stdev=16.10 00:41:36.176 clat (msec): min=126, max=424, avg=270.34, 
stdev=46.48 00:41:36.176 lat (msec): min=126, max=424, avg=270.41, stdev=46.48 00:41:36.176 clat percentiles (msec): 00:41:36.176 | 1.00th=[ 153], 5.00th=[ 178], 10.00th=[ 188], 20.00th=[ 259], 00:41:36.176 | 30.00th=[ 266], 40.00th=[ 275], 50.00th=[ 275], 60.00th=[ 284], 00:41:36.176 | 70.00th=[ 288], 80.00th=[ 292], 90.00th=[ 305], 95.00th=[ 359], 00:41:36.176 | 99.00th=[ 405], 99.50th=[ 409], 99.90th=[ 426], 99.95th=[ 426], 00:41:36.176 | 99.99th=[ 426] 00:41:36.176 bw ( KiB/s): min= 128, max= 256, per=3.73%, avg=230.40, stdev=44.25, samples=20 00:41:36.176 iops : min= 32, max= 64, avg=57.60, stdev=11.06, samples=20 00:41:36.176 lat (msec) : 250=17.91%, 500=82.09% 00:41:36.176 cpu : usr=97.88%, sys=1.49%, ctx=35, majf=0, minf=27 00:41:36.176 IO depths : 1=3.2%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.3%, 32=0.0%, >=64=0.0% 00:41:36.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.176 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.176 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:36.176 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:36.176 filename1: (groupid=0, jobs=1): err= 0: pid=1828112: Sun Oct 13 01:52:20 2024 00:41:36.176 read: IOPS=58, BW=234KiB/s (239kB/s)(2360KiB/10105msec) 00:41:36.176 slat (usec): min=16, max=102, avg=69.08, stdev=15.67 00:41:36.176 clat (msec): min=129, max=467, avg=273.09, stdev=41.13 00:41:36.176 lat (msec): min=129, max=467, avg=273.16, stdev=41.13 00:41:36.176 clat percentiles (msec): 00:41:36.177 | 1.00th=[ 165], 5.00th=[ 184], 10.00th=[ 218], 20.00th=[ 259], 00:41:36.177 | 30.00th=[ 271], 40.00th=[ 271], 50.00th=[ 275], 60.00th=[ 275], 00:41:36.177 | 70.00th=[ 279], 80.00th=[ 296], 90.00th=[ 305], 95.00th=[ 317], 00:41:36.177 | 99.00th=[ 414], 99.50th=[ 414], 99.90th=[ 468], 99.95th=[ 468], 00:41:36.177 | 99.99th=[ 468] 00:41:36.177 bw ( KiB/s): min= 128, max= 256, per=3.71%, avg=229.60, stdev=50.40, samples=20 00:41:36.177 iops : min= 32, max= 64, avg=57.40, stdev=12.60, samples=20 00:41:36.177 lat (msec) : 250=14.24%, 500=85.76% 00:41:36.177 cpu : usr=98.07%, sys=1.33%, ctx=19, majf=0, minf=29 00:41:36.177 IO depths : 1=5.1%, 2=11.4%, 4=25.1%, 8=51.2%, 16=7.3%, 32=0.0%, >=64=0.0% 00:41:36.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.177 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.177 issued rwts: total=590,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:36.177 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:36.177 filename2: (groupid=0, jobs=1): err= 0: pid=1828113: Sun Oct 13 01:52:20 2024 00:41:36.177 read: IOPS=86, BW=347KiB/s (355kB/s)(3512KiB/10133msec) 00:41:36.177 slat (nsec): min=6936, max=85187, avg=15797.96, stdev=14747.72 00:41:36.177 clat (msec): min=98, max=238, avg=184.25, stdev=23.50 00:41:36.177 lat (msec): min=98, max=238, avg=184.26, stdev=23.50 00:41:36.177 clat percentiles (msec): 00:41:36.177 | 1.00th=[ 99], 5.00th=[ 153], 10.00th=[ 159], 20.00th=[ 165], 00:41:36.177 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 184], 60.00th=[ 190], 00:41:36.177 | 70.00th=[ 201], 80.00th=[ 207], 90.00th=[ 213], 95.00th=[ 215], 00:41:36.177 | 99.00th=[ 230], 99.50th=[ 230], 99.90th=[ 239], 99.95th=[ 239], 00:41:36.177 | 99.99th=[ 239] 00:41:36.177 bw ( KiB/s): min= 256, max= 384, per=5.57%, avg=344.80, stdev=53.06, samples=20 00:41:36.177 iops : min= 64, max= 96, avg=86.20, stdev=13.26, samples=20 00:41:36.177 lat (msec) : 100=1.82%, 250=98.18% 00:41:36.177 cpu : 
usr=98.14%, sys=1.37%, ctx=28, majf=0, minf=39 00:41:36.177 IO depths : 1=1.0%, 2=7.3%, 4=25.1%, 8=55.2%, 16=11.4%, 32=0.0%, >=64=0.0% 00:41:36.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.177 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.177 issued rwts: total=878,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:36.177 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:36.177 filename2: (groupid=0, jobs=1): err= 0: pid=1828114: Sun Oct 13 01:52:20 2024 00:41:36.177 read: IOPS=59, BW=236KiB/s (242kB/s)(2368KiB/10013msec) 00:41:36.177 slat (usec): min=9, max=115, avg=35.04, stdev=22.27 00:41:36.177 clat (msec): min=155, max=424, avg=270.37, stdev=38.44 00:41:36.177 lat (msec): min=156, max=425, avg=270.40, stdev=38.45 00:41:36.177 clat percentiles (msec): 00:41:36.177 | 1.00th=[ 167], 5.00th=[ 186], 10.00th=[ 209], 20.00th=[ 262], 00:41:36.177 | 30.00th=[ 266], 40.00th=[ 275], 50.00th=[ 275], 60.00th=[ 284], 00:41:36.177 | 70.00th=[ 288], 80.00th=[ 292], 90.00th=[ 300], 95.00th=[ 313], 00:41:36.177 | 99.00th=[ 401], 99.50th=[ 409], 99.90th=[ 426], 99.95th=[ 426], 00:41:36.177 | 99.99th=[ 426] 00:41:36.177 bw ( KiB/s): min= 128, max= 272, per=3.73%, avg=230.40, stdev=50.97, samples=20 00:41:36.177 iops : min= 32, max= 68, avg=57.60, stdev=12.74, samples=20 00:41:36.177 lat (msec) : 250=16.22%, 500=83.78% 00:41:36.177 cpu : usr=98.25%, sys=1.27%, ctx=14, majf=0, minf=24 00:41:36.177 IO depths : 1=4.9%, 2=11.1%, 4=25.0%, 8=51.4%, 16=7.6%, 32=0.0%, >=64=0.0% 00:41:36.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.177 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.177 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:36.177 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:36.177 filename2: (groupid=0, jobs=1): err= 0: pid=1828115: Sun Oct 13 01:52:20 2024 00:41:36.177 read: IOPS=58, BW=234KiB/s (240kB/s)(2368KiB/10099msec) 00:41:36.177 slat (usec): min=16, max=110, avg=73.51, stdev=16.45 00:41:36.177 clat (msec): min=125, max=424, avg=270.32, stdev=45.41 00:41:36.177 lat (msec): min=125, max=424, avg=270.39, stdev=45.41 00:41:36.177 clat percentiles (msec): 00:41:36.177 | 1.00th=[ 159], 5.00th=[ 184], 10.00th=[ 190], 20.00th=[ 259], 00:41:36.177 | 30.00th=[ 266], 40.00th=[ 275], 50.00th=[ 275], 60.00th=[ 284], 00:41:36.177 | 70.00th=[ 288], 80.00th=[ 292], 90.00th=[ 300], 95.00th=[ 359], 00:41:36.177 | 99.00th=[ 405], 99.50th=[ 409], 99.90th=[ 426], 99.95th=[ 426], 00:41:36.177 | 99.99th=[ 426] 00:41:36.177 bw ( KiB/s): min= 128, max= 256, per=3.73%, avg=230.40, stdev=48.81, samples=20 00:41:36.177 iops : min= 32, max= 64, avg=57.60, stdev=12.20, samples=20 00:41:36.177 lat (msec) : 250=17.91%, 500=82.09% 00:41:36.177 cpu : usr=97.99%, sys=1.34%, ctx=56, majf=0, minf=38 00:41:36.177 IO depths : 1=3.5%, 2=9.8%, 4=25.0%, 8=52.7%, 16=9.0%, 32=0.0%, >=64=0.0% 00:41:36.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.177 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.177 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:36.177 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:36.177 filename2: (groupid=0, jobs=1): err= 0: pid=1828116: Sun Oct 13 01:52:20 2024 00:41:36.177 read: IOPS=90, BW=363KiB/s (371kB/s)(3680KiB/10148msec) 00:41:36.177 slat (usec): min=4, max=103, avg=16.45, stdev=15.89 00:41:36.177 clat (msec): 
min=2, max=325, avg=175.56, stdev=56.40 00:41:36.177 lat (msec): min=2, max=325, avg=175.58, stdev=56.40 00:41:36.177 clat percentiles (msec): 00:41:36.177 | 1.00th=[ 4], 5.00th=[ 13], 10.00th=[ 107], 20.00th=[ 167], 00:41:36.177 | 30.00th=[ 176], 40.00th=[ 182], 50.00th=[ 184], 60.00th=[ 192], 00:41:36.177 | 70.00th=[ 199], 80.00th=[ 213], 90.00th=[ 224], 95.00th=[ 234], 00:41:36.177 | 99.00th=[ 271], 99.50th=[ 271], 99.90th=[ 326], 99.95th=[ 326], 00:41:36.177 | 99.99th=[ 326] 00:41:36.177 bw ( KiB/s): min= 256, max= 896, per=5.85%, avg=361.60, stdev=135.70, samples=20 00:41:36.177 iops : min= 64, max= 224, avg=90.40, stdev=33.93, samples=20 00:41:36.177 lat (msec) : 4=3.48%, 10=0.76%, 20=0.98%, 50=1.74%, 100=1.96% 00:41:36.177 lat (msec) : 250=87.17%, 500=3.91% 00:41:36.177 cpu : usr=98.30%, sys=1.30%, ctx=21, majf=0, minf=45 00:41:36.177 IO depths : 1=0.5%, 2=5.9%, 4=21.8%, 8=59.6%, 16=12.2%, 32=0.0%, >=64=0.0% 00:41:36.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.177 complete : 0=0.0%, 4=93.4%, 8=1.3%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.177 issued rwts: total=920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:36.177 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:36.177 filename2: (groupid=0, jobs=1): err= 0: pid=1828117: Sun Oct 13 01:52:20 2024 00:41:36.177 read: IOPS=61, BW=246KiB/s (252kB/s)(2488KiB/10130msec) 00:41:36.177 slat (nsec): min=8226, max=76063, avg=27713.36, stdev=10759.21 00:41:36.177 clat (msec): min=109, max=379, avg=260.05, stdev=48.74 00:41:36.177 lat (msec): min=109, max=379, avg=260.07, stdev=48.74 00:41:36.177 clat percentiles (msec): 00:41:36.177 | 1.00th=[ 134], 5.00th=[ 165], 10.00th=[ 182], 20.00th=[ 215], 00:41:36.177 | 30.00th=[ 259], 40.00th=[ 271], 50.00th=[ 275], 60.00th=[ 275], 00:41:36.177 | 70.00th=[ 279], 80.00th=[ 296], 90.00th=[ 305], 95.00th=[ 317], 00:41:36.177 | 99.00th=[ 359], 99.50th=[ 368], 99.90th=[ 380], 99.95th=[ 380], 00:41:36.177 | 99.99th=[ 380] 00:41:36.177 bw ( KiB/s): min= 128, max= 384, per=3.92%, avg=242.40, stdev=55.49, samples=20 00:41:36.177 iops : min= 32, max= 96, avg=60.60, stdev=13.87, samples=20 00:41:36.177 lat (msec) : 250=24.12%, 500=75.88% 00:41:36.177 cpu : usr=98.45%, sys=1.17%, ctx=13, majf=0, minf=28 00:41:36.177 IO depths : 1=3.2%, 2=9.5%, 4=25.1%, 8=53.1%, 16=9.2%, 32=0.0%, >=64=0.0% 00:41:36.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.177 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.177 issued rwts: total=622,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:36.177 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:36.177 filename2: (groupid=0, jobs=1): err= 0: pid=1828118: Sun Oct 13 01:52:20 2024 00:41:36.177 read: IOPS=58, BW=234KiB/s (240kB/s)(2368KiB/10099msec) 00:41:36.177 slat (nsec): min=6586, max=90687, avg=28426.23, stdev=12188.20 00:41:36.177 clat (msec): min=134, max=426, avg=270.67, stdev=46.14 00:41:36.177 lat (msec): min=134, max=426, avg=270.70, stdev=46.13 00:41:36.177 clat percentiles (msec): 00:41:36.177 | 1.00th=[ 157], 5.00th=[ 178], 10.00th=[ 188], 20.00th=[ 259], 00:41:36.177 | 30.00th=[ 266], 40.00th=[ 275], 50.00th=[ 275], 60.00th=[ 284], 00:41:36.177 | 70.00th=[ 288], 80.00th=[ 292], 90.00th=[ 305], 95.00th=[ 363], 00:41:36.177 | 99.00th=[ 405], 99.50th=[ 409], 99.90th=[ 426], 99.95th=[ 426], 00:41:36.177 | 99.99th=[ 426] 00:41:36.177 bw ( KiB/s): min= 128, max= 256, per=3.73%, avg=230.40, stdev=48.81, samples=20 00:41:36.177 iops : min= 32, 
max= 64, avg=57.60, stdev=12.20, samples=20 00:41:36.177 lat (msec) : 250=17.91%, 500=82.09% 00:41:36.177 cpu : usr=98.23%, sys=1.32%, ctx=100, majf=0, minf=24 00:41:36.177 IO depths : 1=3.4%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.1%, 32=0.0%, >=64=0.0% 00:41:36.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.177 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.177 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:36.177 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:36.177 filename2: (groupid=0, jobs=1): err= 0: pid=1828119: Sun Oct 13 01:52:20 2024 00:41:36.177 read: IOPS=60, BW=242KiB/s (248kB/s)(2432KiB/10037msec) 00:41:36.177 slat (usec): min=10, max=115, avg=65.04, stdev=23.19 00:41:36.177 clat (msec): min=117, max=423, avg=263.58, stdev=52.62 00:41:36.177 lat (msec): min=117, max=423, avg=263.65, stdev=52.64 00:41:36.177 clat percentiles (msec): 00:41:36.177 | 1.00th=[ 136], 5.00th=[ 165], 10.00th=[ 182], 20.00th=[ 224], 00:41:36.177 | 30.00th=[ 262], 40.00th=[ 268], 50.00th=[ 275], 60.00th=[ 279], 00:41:36.177 | 70.00th=[ 288], 80.00th=[ 292], 90.00th=[ 300], 95.00th=[ 338], 00:41:36.177 | 99.00th=[ 401], 99.50th=[ 418], 99.90th=[ 422], 99.95th=[ 422], 00:41:36.177 | 99.99th=[ 422] 00:41:36.177 bw ( KiB/s): min= 128, max= 384, per=3.82%, avg=236.80, stdev=59.55, samples=20 00:41:36.177 iops : min= 32, max= 96, avg=59.20, stdev=14.89, samples=20 00:41:36.177 lat (msec) : 250=23.03%, 500=76.97% 00:41:36.177 cpu : usr=98.15%, sys=1.25%, ctx=19, majf=0, minf=46 00:41:36.177 IO depths : 1=3.1%, 2=9.4%, 4=25.0%, 8=53.1%, 16=9.4%, 32=0.0%, >=64=0.0% 00:41:36.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.177 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.177 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:36.177 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:36.177 filename2: (groupid=0, jobs=1): err= 0: pid=1828120: Sun Oct 13 01:52:20 2024 00:41:36.177 read: IOPS=61, BW=246KiB/s (252kB/s)(2496KiB/10139msec) 00:41:36.177 slat (usec): min=5, max=120, avg=71.58, stdev=18.71 00:41:36.177 clat (msec): min=97, max=372, avg=259.35, stdev=46.45 00:41:36.177 lat (msec): min=97, max=372, avg=259.42, stdev=46.46 00:41:36.177 clat percentiles (msec): 00:41:36.178 | 1.00th=[ 99], 5.00th=[ 163], 10.00th=[ 184], 20.00th=[ 236], 00:41:36.178 | 30.00th=[ 262], 40.00th=[ 271], 50.00th=[ 275], 60.00th=[ 275], 00:41:36.178 | 70.00th=[ 279], 80.00th=[ 296], 90.00th=[ 300], 95.00th=[ 309], 00:41:36.178 | 99.00th=[ 334], 99.50th=[ 363], 99.90th=[ 372], 99.95th=[ 372], 00:41:36.178 | 99.99th=[ 372] 00:41:36.178 bw ( KiB/s): min= 128, max= 384, per=3.94%, avg=243.20, stdev=57.24, samples=20 00:41:36.178 iops : min= 32, max= 96, avg=60.80, stdev=14.31, samples=20 00:41:36.178 lat (msec) : 100=2.56%, 250=19.23%, 500=78.21% 00:41:36.178 cpu : usr=98.34%, sys=1.09%, ctx=40, majf=0, minf=28 00:41:36.178 IO depths : 1=5.6%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.9%, 32=0.0%, >=64=0.0% 00:41:36.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.178 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.178 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:36.178 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:36.178 00:41:36.178 Run status group 0 (all jobs): 00:41:36.178 READ: bw=6173KiB/s (6322kB/s), 234KiB/s-363KiB/s 
(239kB/s-371kB/s), io=61.2MiB (64.2MB), run=10013-10148msec 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:36.178 01:52:20 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:36.178 bdev_null0 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:36.178 [2024-10-13 01:52:20.508117] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:36.178 bdev_null1 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:36.178 { 00:41:36.178 "params": { 00:41:36.178 "name": "Nvme$subsystem", 00:41:36.178 "trtype": "$TEST_TRANSPORT", 00:41:36.178 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:36.178 "adrfam": "ipv4", 00:41:36.178 "trsvcid": "$NVMF_PORT", 00:41:36.178 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:36.178 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:36.178 "hdgst": ${hdgst:-false}, 00:41:36.178 "ddgst": ${ddgst:-false} 00:41:36.178 }, 00:41:36.178 "method": "bdev_nvme_attach_controller" 00:41:36.178 } 00:41:36.178 EOF 00:41:36.178 )") 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # 
local fio_dir=/usr/src/fio 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:36.178 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:41:36.179 01:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:36.179 01:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:36.179 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:36.179 01:52:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:36.179 01:52:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:36.179 { 00:41:36.179 "params": { 00:41:36.179 "name": "Nvme$subsystem", 00:41:36.179 "trtype": "$TEST_TRANSPORT", 00:41:36.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:36.179 "adrfam": "ipv4", 00:41:36.179 "trsvcid": "$NVMF_PORT", 00:41:36.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:36.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:36.179 "hdgst": ${hdgst:-false}, 00:41:36.179 "ddgst": ${ddgst:-false} 00:41:36.179 }, 00:41:36.179 "method": "bdev_nvme_attach_controller" 00:41:36.179 } 00:41:36.179 EOF 00:41:36.179 )") 00:41:36.179 01:52:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:41:36.179 01:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:36.179 01:52:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:36.179 01:52:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 
00:41:36.179 01:52:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:41:36.179 01:52:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:41:36.179 "params": { 00:41:36.179 "name": "Nvme0", 00:41:36.179 "trtype": "tcp", 00:41:36.179 "traddr": "10.0.0.2", 00:41:36.179 "adrfam": "ipv4", 00:41:36.179 "trsvcid": "4420", 00:41:36.179 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:36.179 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:36.179 "hdgst": false, 00:41:36.179 "ddgst": false 00:41:36.179 }, 00:41:36.179 "method": "bdev_nvme_attach_controller" 00:41:36.179 },{ 00:41:36.179 "params": { 00:41:36.179 "name": "Nvme1", 00:41:36.179 "trtype": "tcp", 00:41:36.179 "traddr": "10.0.0.2", 00:41:36.179 "adrfam": "ipv4", 00:41:36.179 "trsvcid": "4420", 00:41:36.179 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:36.179 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:36.179 "hdgst": false, 00:41:36.179 "ddgst": false 00:41:36.179 }, 00:41:36.179 "method": "bdev_nvme_attach_controller" 00:41:36.179 }' 00:41:36.179 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:36.179 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:36.179 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:36.179 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:36.179 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:41:36.179 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:36.179 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:36.179 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:36.179 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:36.179 01:52:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:36.179 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:41:36.179 ... 00:41:36.179 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:41:36.179 ... 
00:41:36.179 fio-3.35 00:41:36.179 Starting 4 threads 00:41:41.442 00:41:41.442 filename0: (groupid=0, jobs=1): err= 0: pid=1829618: Sun Oct 13 01:52:26 2024 00:41:41.442 read: IOPS=1793, BW=14.0MiB/s (14.7MB/s)(70.1MiB/5002msec) 00:41:41.442 slat (nsec): min=4333, max=80143, avg=18774.82, stdev=11330.20 00:41:41.442 clat (usec): min=1003, max=8201, avg=4397.54, stdev=417.10 00:41:41.442 lat (usec): min=1027, max=8223, avg=4416.31, stdev=417.76 00:41:41.442 clat percentiles (usec): 00:41:41.442 | 1.00th=[ 3064], 5.00th=[ 3785], 10.00th=[ 4015], 20.00th=[ 4228], 00:41:41.442 | 30.00th=[ 4293], 40.00th=[ 4359], 50.00th=[ 4424], 60.00th=[ 4490], 00:41:41.442 | 70.00th=[ 4555], 80.00th=[ 4621], 90.00th=[ 4686], 95.00th=[ 4883], 00:41:41.442 | 99.00th=[ 5669], 99.50th=[ 6390], 99.90th=[ 7635], 99.95th=[ 7898], 00:41:41.442 | 99.99th=[ 8225] 00:41:41.442 bw ( KiB/s): min=14080, max=14704, per=25.13%, avg=14344.00, stdev=192.04, samples=10 00:41:41.442 iops : min= 1760, max= 1838, avg=1793.00, stdev=24.00, samples=10 00:41:41.442 lat (msec) : 2=0.20%, 4=8.83%, 10=90.97% 00:41:41.442 cpu : usr=94.06%, sys=4.70%, ctx=104, majf=0, minf=120 00:41:41.442 IO depths : 1=0.4%, 2=13.6%, 4=58.8%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:41.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:41.442 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:41.442 issued rwts: total=8973,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:41.442 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:41.442 filename0: (groupid=0, jobs=1): err= 0: pid=1829619: Sun Oct 13 01:52:26 2024 00:41:41.442 read: IOPS=1772, BW=13.8MiB/s (14.5MB/s)(69.3MiB/5004msec) 00:41:41.442 slat (nsec): min=4409, max=88758, avg=22808.69, stdev=12048.82 00:41:41.442 clat (usec): min=913, max=7921, avg=4424.34, stdev=540.71 00:41:41.442 lat (usec): min=926, max=7935, avg=4447.15, stdev=540.94 00:41:41.442 clat percentiles (usec): 00:41:41.442 | 1.00th=[ 2606], 5.00th=[ 3851], 10.00th=[ 4080], 20.00th=[ 4228], 00:41:41.442 | 30.00th=[ 4293], 40.00th=[ 4359], 50.00th=[ 4424], 60.00th=[ 4424], 00:41:41.442 | 70.00th=[ 4490], 80.00th=[ 4555], 90.00th=[ 4752], 95.00th=[ 5145], 00:41:41.442 | 99.00th=[ 6849], 99.50th=[ 7308], 99.90th=[ 7701], 99.95th=[ 7832], 00:41:41.442 | 99.99th=[ 7898] 00:41:41.442 bw ( KiB/s): min=13920, max=14640, per=24.86%, avg=14188.80, stdev=203.06, samples=10 00:41:41.442 iops : min= 1740, max= 1830, avg=1773.60, stdev=25.38, samples=10 00:41:41.442 lat (usec) : 1000=0.02% 00:41:41.442 lat (msec) : 2=0.46%, 4=7.23%, 10=92.29% 00:41:41.442 cpu : usr=94.76%, sys=4.74%, ctx=7, majf=0, minf=84 00:41:41.442 IO depths : 1=1.2%, 2=18.8%, 4=54.8%, 8=25.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:41.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:41.442 complete : 0=0.0%, 4=91.2%, 8=8.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:41.442 issued rwts: total=8871,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:41.442 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:41.442 filename1: (groupid=0, jobs=1): err= 0: pid=1829620: Sun Oct 13 01:52:26 2024 00:41:41.442 read: IOPS=1777, BW=13.9MiB/s (14.6MB/s)(69.5MiB/5003msec) 00:41:41.442 slat (nsec): min=4745, max=80667, avg=23355.16, stdev=12292.98 00:41:41.442 clat (usec): min=835, max=8199, avg=4411.12, stdev=577.46 00:41:41.442 lat (usec): min=848, max=8207, avg=4434.47, stdev=577.87 00:41:41.442 clat percentiles (usec): 00:41:41.442 | 1.00th=[ 2245], 5.00th=[ 3818], 10.00th=[ 4080], 
20.00th=[ 4228], 00:41:41.442 | 30.00th=[ 4293], 40.00th=[ 4359], 50.00th=[ 4359], 60.00th=[ 4424], 00:41:41.442 | 70.00th=[ 4490], 80.00th=[ 4555], 90.00th=[ 4752], 95.00th=[ 5145], 00:41:41.442 | 99.00th=[ 6783], 99.50th=[ 7177], 99.90th=[ 7832], 99.95th=[ 7963], 00:41:41.442 | 99.99th=[ 8225] 00:41:41.442 bw ( KiB/s): min=14000, max=14448, per=24.90%, avg=14211.50, stdev=169.40, samples=10 00:41:41.442 iops : min= 1750, max= 1806, avg=1776.40, stdev=21.12, samples=10 00:41:41.442 lat (usec) : 1000=0.11% 00:41:41.442 lat (msec) : 2=0.80%, 4=6.76%, 10=92.33% 00:41:41.442 cpu : usr=96.08%, sys=3.40%, ctx=12, majf=0, minf=77 00:41:41.442 IO depths : 1=0.1%, 2=20.5%, 4=53.2%, 8=26.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:41.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:41.442 complete : 0=0.0%, 4=91.0%, 8=9.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:41.442 issued rwts: total=8892,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:41.442 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:41.442 filename1: (groupid=0, jobs=1): err= 0: pid=1829621: Sun Oct 13 01:52:26 2024 00:41:41.442 read: IOPS=1792, BW=14.0MiB/s (14.7MB/s)(70.0MiB/5002msec) 00:41:41.442 slat (nsec): min=5046, max=81499, avg=22789.48, stdev=12150.57 00:41:41.442 clat (usec): min=836, max=8189, avg=4377.11, stdev=478.63 00:41:41.442 lat (usec): min=851, max=8197, avg=4399.90, stdev=479.31 00:41:41.442 clat percentiles (usec): 00:41:41.442 | 1.00th=[ 2966], 5.00th=[ 3752], 10.00th=[ 4015], 20.00th=[ 4228], 00:41:41.442 | 30.00th=[ 4293], 40.00th=[ 4359], 50.00th=[ 4359], 60.00th=[ 4424], 00:41:41.442 | 70.00th=[ 4490], 80.00th=[ 4555], 90.00th=[ 4686], 95.00th=[ 4817], 00:41:41.443 | 99.00th=[ 6194], 99.50th=[ 7046], 99.90th=[ 7963], 99.95th=[ 8029], 00:41:41.443 | 99.99th=[ 8160] 00:41:41.443 bw ( KiB/s): min=14080, max=14688, per=25.11%, avg=14335.60, stdev=198.13, samples=10 00:41:41.443 iops : min= 1760, max= 1836, avg=1791.90, stdev=24.73, samples=10 00:41:41.443 lat (usec) : 1000=0.03% 00:41:41.443 lat (msec) : 2=0.37%, 4=9.27%, 10=90.33% 00:41:41.443 cpu : usr=94.40%, sys=4.40%, ctx=109, majf=0, minf=104 00:41:41.443 IO depths : 1=1.4%, 2=19.5%, 4=54.2%, 8=24.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:41.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:41.443 complete : 0=0.0%, 4=90.8%, 8=9.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:41.443 issued rwts: total=8966,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:41.443 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:41.443 00:41:41.443 Run status group 0 (all jobs): 00:41:41.443 READ: bw=55.7MiB/s (58.4MB/s), 13.8MiB/s-14.0MiB/s (14.5MB/s-14.7MB/s), io=279MiB (292MB), run=5002-5004msec 00:41:41.443 01:52:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:41:41.443 01:52:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:41.443 01:52:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:41.443 01:52:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:41.443 01:52:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:41.443 01:52:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:41.443 01:52:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:41.443 01:52:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:41.443 01:52:26 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:41.443 01:52:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:41.443 01:52:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:41.443 01:52:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:41.443 01:52:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:41.443 01:52:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:41.443 01:52:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:41.443 01:52:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:41:41.443 01:52:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:41.443 01:52:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:41.443 01:52:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:41.443 01:52:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:41.443 01:52:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:41.443 01:52:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:41.443 01:52:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:41.443 01:52:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:41.443 00:41:41.443 real 0m24.349s 00:41:41.443 user 4m35.853s 00:41:41.443 sys 0m5.751s 00:41:41.443 01:52:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:41.443 01:52:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:41.443 ************************************ 00:41:41.443 END TEST fio_dif_rand_params 00:41:41.443 ************************************ 00:41:41.443 01:52:26 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:41:41.443 01:52:26 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:41.443 01:52:26 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:41.443 01:52:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:41.443 ************************************ 00:41:41.443 START TEST fio_dif_digest 00:41:41.443 ************************************ 00:41:41.443 01:52:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:41:41.443 01:52:26 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:41:41.443 01:52:26 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:41:41.443 01:52:26 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:41:41.443 01:52:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:41:41.443 01:52:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:41:41.443 01:52:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:41:41.443 01:52:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:41:41.443 01:52:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:41:41.443 01:52:26 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:41:41.443 01:52:26 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:41:41.443 01:52:26 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:41:41.443 01:52:26 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:41:41.443 01:52:26 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:41:41.443 01:52:26 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:41:41.443 01:52:26 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:41:41.443 01:52:26 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:41:41.443 01:52:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:41.443 01:52:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:41.443 bdev_null0 00:41:41.443 01:52:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:41.443 01:52:26 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:41.443 01:52:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:41.443 01:52:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:41.443 01:52:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:41.443 01:52:26 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:41.443 01:52:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:41.443 01:52:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:41.443 01:52:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:41.443 01:52:26 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:41.443 01:52:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:41.443 01:52:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:41.443 [2024-10-13 01:52:26.992960] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:41.443 01:52:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:41.443 01:52:26 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:41:41.443 01:52:26 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:41:41.443 01:52:26 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:41.443 01:52:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # config=() 00:41:41.443 01:52:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # local subsystem config 00:41:41.443 01:52:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:41:41.443 01:52:26 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:41.443 01:52:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:41:41.443 { 00:41:41.443 "params": { 00:41:41.443 "name": "Nvme$subsystem", 00:41:41.443 "trtype": "$TEST_TRANSPORT", 00:41:41.443 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:41.443 "adrfam": "ipv4", 00:41:41.443 "trsvcid": "$NVMF_PORT", 00:41:41.443 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:41.443 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:41.443 "hdgst": ${hdgst:-false}, 00:41:41.443 "ddgst": 
${ddgst:-false} 00:41:41.443 }, 00:41:41.443 "method": "bdev_nvme_attach_controller" 00:41:41.443 } 00:41:41.443 EOF 00:41:41.443 )") 00:41:41.443 01:52:26 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:41:41.443 01:52:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:41.443 01:52:26 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:41:41.443 01:52:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:41.443 01:52:26 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:41:41.443 01:52:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:41.443 01:52:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:41.443 01:52:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:41.443 01:52:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:41:41.443 01:52:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:41.443 01:52:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # cat 00:41:41.443 01:52:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:41.443 01:52:27 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:41:41.443 01:52:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:41.443 01:52:27 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:41:41.443 01:52:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:41:41.443 01:52:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:41.443 01:52:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # jq . 
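[Editor's note] For the digest test, the rpc_cmd calls traced above map onto plain SPDK RPCs. A hedged sketch of the equivalent target-side setup follows, assuming a running nvmf_tgt with the TCP transport already created and scripts/rpc.py talking to the default RPC socket (rpc_cmd is the autotest wrapper around it); the arguments are copied from the trace.

# 64 MB null bdev, 512-byte blocks, 16-byte metadata, DIF type 3
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
# expose it over NVMe/TCP on 10.0.0.2:4420
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The digest half of the test sits on the initiator side: in the generated config printed just below, hdgst and ddgst are set to true so the TCP connection is established with header and data digests enabled.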
00:41:41.443 01:52:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@583 -- # IFS=, 00:41:41.443 01:52:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:41:41.443 "params": { 00:41:41.443 "name": "Nvme0", 00:41:41.443 "trtype": "tcp", 00:41:41.443 "traddr": "10.0.0.2", 00:41:41.443 "adrfam": "ipv4", 00:41:41.443 "trsvcid": "4420", 00:41:41.443 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:41.443 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:41.443 "hdgst": true, 00:41:41.443 "ddgst": true 00:41:41.443 }, 00:41:41.443 "method": "bdev_nvme_attach_controller" 00:41:41.443 }' 00:41:41.701 01:52:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:41.701 01:52:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:41.701 01:52:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:41.701 01:52:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:41.701 01:52:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:41:41.701 01:52:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:41.701 01:52:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:41:41.701 01:52:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:41:41.702 01:52:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:41.702 01:52:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:41.702 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:41:41.702 ... 
00:41:41.702 fio-3.35 00:41:41.702 Starting 3 threads 00:41:53.948 00:41:53.948 filename0: (groupid=0, jobs=1): err= 0: pid=1830373: Sun Oct 13 01:52:37 2024 00:41:53.948 read: IOPS=222, BW=27.8MiB/s (29.1MB/s)(279MiB/10049msec) 00:41:53.948 slat (nsec): min=4898, max=40375, avg=15671.01, stdev=3370.59 00:41:53.948 clat (usec): min=9620, max=54905, avg=13453.77, stdev=3610.85 00:41:53.948 lat (usec): min=9649, max=54920, avg=13469.44, stdev=3610.82 00:41:53.948 clat percentiles (usec): 00:41:53.948 | 1.00th=[10683], 5.00th=[11469], 10.00th=[11863], 20.00th=[12256], 00:41:53.948 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13173], 60.00th=[13435], 00:41:53.948 | 70.00th=[13698], 80.00th=[14091], 90.00th=[14484], 95.00th=[14877], 00:41:53.948 | 99.00th=[16450], 99.50th=[52691], 99.90th=[54264], 99.95th=[54264], 00:41:53.948 | 99.99th=[54789] 00:41:53.948 bw ( KiB/s): min=26368, max=30464, per=37.76%, avg=28569.60, stdev=1190.36, samples=20 00:41:53.948 iops : min= 206, max= 238, avg=223.20, stdev= 9.30, samples=20 00:41:53.948 lat (msec) : 10=0.13%, 20=99.02%, 50=0.13%, 100=0.72% 00:41:53.948 cpu : usr=93.79%, sys=5.71%, ctx=14, majf=0, minf=138 00:41:53.948 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:53.948 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:53.948 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:53.948 issued rwts: total=2234,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:53.948 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:53.948 filename0: (groupid=0, jobs=1): err= 0: pid=1830374: Sun Oct 13 01:52:37 2024 00:41:53.948 read: IOPS=178, BW=22.3MiB/s (23.4MB/s)(224MiB/10045msec) 00:41:53.948 slat (nsec): min=4731, max=68158, avg=17060.21, stdev=5027.17 00:41:53.948 clat (usec): min=8187, max=53644, avg=16743.79, stdev=2134.92 00:41:53.948 lat (usec): min=8202, max=53659, avg=16760.85, stdev=2135.15 00:41:53.948 clat percentiles (usec): 00:41:53.948 | 1.00th=[ 9372], 5.00th=[14353], 10.00th=[15139], 20.00th=[15795], 00:41:53.948 | 30.00th=[16188], 40.00th=[16581], 50.00th=[16909], 60.00th=[17171], 00:41:53.948 | 70.00th=[17433], 80.00th=[17957], 90.00th=[18482], 95.00th=[19268], 00:41:53.948 | 99.00th=[20317], 99.50th=[21103], 99.90th=[49546], 99.95th=[53740], 00:41:53.948 | 99.99th=[53740] 00:41:53.948 bw ( KiB/s): min=21760, max=24576, per=30.33%, avg=22950.40, stdev=806.45, samples=20 00:41:53.948 iops : min= 170, max= 192, avg=179.30, stdev= 6.30, samples=20 00:41:53.948 lat (msec) : 10=2.12%, 20=96.38%, 50=1.45%, 100=0.06% 00:41:53.948 cpu : usr=89.55%, sys=7.43%, ctx=339, majf=0, minf=113 00:41:53.948 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:53.948 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:53.948 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:53.948 issued rwts: total=1795,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:53.948 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:53.948 filename0: (groupid=0, jobs=1): err= 0: pid=1830375: Sun Oct 13 01:52:37 2024 00:41:53.948 read: IOPS=190, BW=23.8MiB/s (24.9MB/s)(239MiB/10046msec) 00:41:53.948 slat (nsec): min=5207, max=30794, avg=15214.06, stdev=1413.83 00:41:53.948 clat (usec): min=8600, max=49037, avg=15730.88, stdev=1816.08 00:41:53.948 lat (usec): min=8614, max=49052, avg=15746.10, stdev=1816.08 00:41:53.948 clat percentiles (usec): 00:41:53.948 | 1.00th=[ 9634], 5.00th=[13829], 10.00th=[14353], 
20.00th=[14877], 00:41:53.948 | 30.00th=[15270], 40.00th=[15533], 50.00th=[15795], 60.00th=[16057], 00:41:53.948 | 70.00th=[16319], 80.00th=[16712], 90.00th=[17171], 95.00th=[17695], 00:41:53.948 | 99.00th=[18744], 99.50th=[19268], 99.90th=[45876], 99.95th=[49021], 00:41:53.948 | 99.99th=[49021] 00:41:53.948 bw ( KiB/s): min=22528, max=25600, per=32.28%, avg=24422.40, stdev=903.76, samples=20 00:41:53.948 iops : min= 176, max= 200, avg=190.80, stdev= 7.06, samples=20 00:41:53.948 lat (msec) : 10=1.94%, 20=97.75%, 50=0.31% 00:41:53.948 cpu : usr=94.17%, sys=5.35%, ctx=22, majf=0, minf=141 00:41:53.948 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:53.948 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:53.948 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:53.948 issued rwts: total=1911,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:53.948 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:53.948 00:41:53.948 Run status group 0 (all jobs): 00:41:53.948 READ: bw=73.9MiB/s (77.5MB/s), 22.3MiB/s-27.8MiB/s (23.4MB/s-29.1MB/s), io=743MiB (779MB), run=10045-10049msec 00:41:53.948 01:52:38 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:41:53.948 01:52:38 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:41:53.948 01:52:38 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:41:53.948 01:52:38 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:53.948 01:52:38 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:41:53.948 01:52:38 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:53.948 01:52:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:53.948 01:52:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:53.948 01:52:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:53.948 01:52:38 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:53.948 01:52:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:53.948 01:52:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:53.948 01:52:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:53.948 00:41:53.948 real 0m11.122s 00:41:53.948 user 0m28.925s 00:41:53.948 sys 0m2.125s 00:41:53.948 01:52:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:53.948 01:52:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:53.948 ************************************ 00:41:53.948 END TEST fio_dif_digest 00:41:53.948 ************************************ 00:41:53.948 01:52:38 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:41:53.948 01:52:38 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:41:53.948 01:52:38 nvmf_dif -- nvmf/common.sh@514 -- # nvmfcleanup 00:41:53.948 01:52:38 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:41:53.948 01:52:38 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:53.948 01:52:38 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:41:53.948 01:52:38 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:53.948 01:52:38 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:53.948 rmmod nvme_tcp 00:41:53.948 rmmod nvme_fabrics 00:41:53.948 rmmod nvme_keyring 00:41:53.948 01:52:38 nvmf_dif 
-- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:53.948 01:52:38 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:41:53.948 01:52:38 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:41:53.948 01:52:38 nvmf_dif -- nvmf/common.sh@515 -- # '[' -n 1824215 ']' 00:41:53.948 01:52:38 nvmf_dif -- nvmf/common.sh@516 -- # killprocess 1824215 00:41:53.948 01:52:38 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 1824215 ']' 00:41:53.948 01:52:38 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 1824215 00:41:53.948 01:52:38 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:41:53.948 01:52:38 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:53.948 01:52:38 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1824215 00:41:53.948 01:52:38 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:41:53.948 01:52:38 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:41:53.948 01:52:38 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1824215' 00:41:53.948 killing process with pid 1824215 00:41:53.948 01:52:38 nvmf_dif -- common/autotest_common.sh@969 -- # kill 1824215 00:41:53.948 01:52:38 nvmf_dif -- common/autotest_common.sh@974 -- # wait 1824215 00:41:53.948 01:52:38 nvmf_dif -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:41:53.948 01:52:38 nvmf_dif -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:41:53.948 Waiting for block devices as requested 00:41:53.948 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:41:54.207 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:41:54.207 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:41:54.207 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:41:54.464 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:41:54.464 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:41:54.464 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:41:54.465 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:41:54.723 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:41:54.723 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:41:54.723 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:41:54.723 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:41:54.981 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:41:54.981 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:41:54.981 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:41:55.239 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:41:55.239 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:41:55.239 01:52:40 nvmf_dif -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:41:55.239 01:52:40 nvmf_dif -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:41:55.239 01:52:40 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:41:55.239 01:52:40 nvmf_dif -- nvmf/common.sh@789 -- # iptables-save 00:41:55.239 01:52:40 nvmf_dif -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:41:55.239 01:52:40 nvmf_dif -- nvmf/common.sh@789 -- # iptables-restore 00:41:55.239 01:52:40 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:55.239 01:52:40 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:55.239 01:52:40 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:55.239 01:52:40 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:55.239 01:52:40 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:57.769 01:52:42 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:57.769 
00:41:57.769 real 1m6.884s 00:41:57.769 user 6m31.533s 00:41:57.769 sys 0m17.168s 00:41:57.769 01:52:42 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:57.769 01:52:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:57.769 ************************************ 00:41:57.769 END TEST nvmf_dif 00:41:57.769 ************************************ 00:41:57.769 01:52:42 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:41:57.769 01:52:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:41:57.769 01:52:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:57.769 01:52:42 -- common/autotest_common.sh@10 -- # set +x 00:41:57.769 ************************************ 00:41:57.769 START TEST nvmf_abort_qd_sizes 00:41:57.769 ************************************ 00:41:57.769 01:52:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:41:57.769 * Looking for test storage... 00:41:57.769 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:57.769 01:52:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:41:57.769 01:52:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:41:57.769 01:52:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:41:57.769 01:52:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:41:57.769 01:52:43 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:57.769 01:52:43 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:57.769 01:52:43 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:57.769 01:52:43 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:41:57.769 01:52:43 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:41:57.769 01:52:43 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:41:57.769 01:52:43 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:41:57.769 01:52:43 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:41:57.769 01:52:43 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:41:57.769 01:52:43 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:41:57.769 01:52:43 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:57.769 01:52:43 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:41:57.769 01:52:43 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:41:57.769 01:52:43 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:57.769 01:52:43 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:57.769 01:52:43 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:41:57.769 01:52:43 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:41:57.769 01:52:43 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:57.769 01:52:43 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:41:57.769 01:52:43 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:41:57.769 01:52:43 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:41:57.769 01:52:43 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:41:57.769 01:52:43 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:57.769 01:52:43 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:41:57.769 01:52:43 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:41:57.769 01:52:43 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:57.769 01:52:43 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:57.769 01:52:43 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:41:57.769 01:52:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:57.769 01:52:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:41:57.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:57.769 --rc genhtml_branch_coverage=1 00:41:57.769 --rc genhtml_function_coverage=1 00:41:57.769 --rc genhtml_legend=1 00:41:57.769 --rc geninfo_all_blocks=1 00:41:57.769 --rc geninfo_unexecuted_blocks=1 00:41:57.769 00:41:57.769 ' 00:41:57.769 01:52:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:41:57.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:57.769 --rc genhtml_branch_coverage=1 00:41:57.769 --rc genhtml_function_coverage=1 00:41:57.769 --rc genhtml_legend=1 00:41:57.769 --rc geninfo_all_blocks=1 00:41:57.769 --rc geninfo_unexecuted_blocks=1 00:41:57.769 00:41:57.769 ' 00:41:57.769 01:52:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:41:57.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:57.769 --rc genhtml_branch_coverage=1 00:41:57.769 --rc genhtml_function_coverage=1 00:41:57.769 --rc genhtml_legend=1 00:41:57.769 --rc geninfo_all_blocks=1 00:41:57.769 --rc geninfo_unexecuted_blocks=1 00:41:57.769 00:41:57.769 ' 00:41:57.769 01:52:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:41:57.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:57.769 --rc genhtml_branch_coverage=1 00:41:57.769 --rc genhtml_function_coverage=1 00:41:57.769 --rc genhtml_legend=1 00:41:57.769 --rc geninfo_all_blocks=1 00:41:57.769 --rc geninfo_unexecuted_blocks=1 00:41:57.769 00:41:57.769 ' 00:41:57.769 01:52:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:57.769 01:52:43 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:41:57.769 01:52:43 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:57.769 01:52:43 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:57.769 01:52:43 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:57.769 01:52:43 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:57.769 01:52:43 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:41:57.769 01:52:43 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:57.769 01:52:43 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:57.769 01:52:43 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:57.769 01:52:43 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:57.769 01:52:43 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:57.769 01:52:43 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:57.769 01:52:43 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:57.769 01:52:43 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:57.769 01:52:43 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:57.769 01:52:43 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:57.769 01:52:43 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:57.770 01:52:43 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:57.770 01:52:43 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:41:57.770 01:52:43 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:57.770 01:52:43 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:57.770 01:52:43 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:57.770 01:52:43 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:57.770 01:52:43 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:57.770 01:52:43 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:57.770 01:52:43 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:41:57.770 01:52:43 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:57.770 01:52:43 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:41:57.770 01:52:43 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:57.770 01:52:43 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:57.770 01:52:43 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:57.770 01:52:43 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:57.770 01:52:43 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:57.770 01:52:43 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:57.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:57.770 01:52:43 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:57.770 01:52:43 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:57.770 01:52:43 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:57.770 01:52:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:41:57.770 01:52:43 nvmf_abort_qd_sizes -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:41:57.770 01:52:43 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:57.770 01:52:43 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # prepare_net_devs 00:41:57.770 01:52:43 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # local -g is_hw=no 00:41:57.770 01:52:43 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # remove_spdk_ns 00:41:57.770 01:52:43 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:57.770 01:52:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:57.770 01:52:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:57.770 01:52:43 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:41:57.770 01:52:43 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:41:57.770 01:52:43 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:41:57.770 01:52:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:41:59.671 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:41:59.671 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:41:59.671 Found net devices under 0000:0a:00.0: cvl_0_0 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:41:59.671 Found net devices under 0000:0a:00.1: cvl_0_1 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # is_hw=yes 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:59.671 01:52:44 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:59.671 01:52:44 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:59.671 01:52:45 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:59.671 01:52:45 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:59.671 01:52:45 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:59.671 01:52:45 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:59.671 01:52:45 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:59.671 01:52:45 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:59.671 01:52:45 nvmf_abort_qd_sizes -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:59.671 01:52:45 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:59.671 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:59.671 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:41:59.671 00:41:59.671 --- 10.0.0.2 ping statistics --- 00:41:59.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:59.671 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:41:59.671 01:52:45 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:59.671 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:59.671 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:41:59.671 00:41:59.671 --- 10.0.0.1 ping statistics --- 00:41:59.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:59.672 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:41:59.672 01:52:45 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:59.672 01:52:45 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # return 0 00:41:59.672 01:52:45 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:41:59.672 01:52:45 nvmf_abort_qd_sizes -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:01.046 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:01.046 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:01.046 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:01.046 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:01.046 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:01.046 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:01.046 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:42:01.046 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:01.046 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:01.046 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:01.046 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:01.046 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:01.046 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:01.046 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:01.046 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:42:01.046 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:01.980 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:42:01.980 01:52:47 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:01.980 01:52:47 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:42:01.980 01:52:47 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:42:01.980 01:52:47 nvmf_abort_qd_sizes -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:01.980 01:52:47 nvmf_abort_qd_sizes -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:42:01.980 01:52:47 nvmf_abort_qd_sizes -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:42:01.980 01:52:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:42:01.980 01:52:47 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:42:01.980 01:52:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:01.980 01:52:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:01.980 01:52:47 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # nvmfpid=1835169 00:42:01.980 01:52:47 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:42:01.980 01:52:47 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # waitforlisten 1835169 00:42:01.980 01:52:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 1835169 ']' 00:42:01.980 01:52:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:01.980 01:52:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:01.980 01:52:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:42:01.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:01.980 01:52:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:01.980 01:52:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:01.980 [2024-10-13 01:52:47.443277] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:42:01.980 [2024-10-13 01:52:47.443364] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:01.980 [2024-10-13 01:52:47.514210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:02.238 [2024-10-13 01:52:47.567723] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:02.238 [2024-10-13 01:52:47.567778] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:02.238 [2024-10-13 01:52:47.567794] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:02.238 [2024-10-13 01:52:47.567808] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:02.238 [2024-10-13 01:52:47.567819] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:02.238 [2024-10-13 01:52:47.569416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:02.238 [2024-10-13 01:52:47.569496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:02.238 [2024-10-13 01:52:47.569592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:42:02.238 [2024-10-13 01:52:47.569596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:02.238 01:52:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:02.238 01:52:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:42:02.238 01:52:47 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:42:02.238 01:52:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:02.238 01:52:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:02.238 01:52:47 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:02.238 01:52:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:42:02.238 01:52:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:42:02.238 01:52:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:42:02.238 01:52:47 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:42:02.238 01:52:47 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:42:02.239 01:52:47 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:88:00.0 ]] 00:42:02.239 01:52:47 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:42:02.239 01:52:47 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:42:02.239 01:52:47 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:42:02.239 01:52:47 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:42:02.239 
01:52:47 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:42:02.239 01:52:47 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:42:02.239 01:52:47 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:42:02.239 01:52:47 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:88:00.0 00:42:02.239 01:52:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:42:02.239 01:52:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:42:02.239 01:52:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:42:02.239 01:52:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:42:02.239 01:52:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:02.239 01:52:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:02.239 ************************************ 00:42:02.239 START TEST spdk_target_abort 00:42:02.239 ************************************ 00:42:02.239 01:52:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:42:02.239 01:52:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:42:02.239 01:52:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:42:02.239 01:52:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:02.239 01:52:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:05.514 spdk_targetn1 00:42:05.514 01:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:05.514 01:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:05.515 01:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:05.515 01:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:05.515 [2024-10-13 01:52:50.578634] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:05.515 01:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:05.515 01:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:42:05.515 01:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:05.515 01:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:05.515 01:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:05.515 01:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:42:05.515 01:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:05.515 01:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:05.515 01:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:05.515 01:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:42:05.515 01:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:05.515 01:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:05.515 [2024-10-13 01:52:50.633912] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:05.515 01:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:05.515 01:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:42:05.515 01:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:42:05.515 01:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:42:05.515 01:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:42:05.515 01:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:42:05.515 01:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:42:05.515 01:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:42:05.515 01:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:42:05.515 01:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:42:05.515 01:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:05.515 01:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:42:05.515 01:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:05.515 01:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:42:05.515 01:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:05.515 01:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:42:05.515 01:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:05.515 01:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:42:05.515 01:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:05.515 01:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:05.515 01:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:05.515 01:52:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:08.795 Initializing NVMe Controllers 00:42:08.795 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:08.795 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:08.795 Initialization complete. Launching workers. 00:42:08.795 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12218, failed: 0 00:42:08.795 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1204, failed to submit 11014 00:42:08.795 success 780, unsuccessful 424, failed 0 00:42:08.795 01:52:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:08.795 01:52:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:12.073 [2024-10-13 01:52:57.037949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16310e0 is same with the state(6) to be set 00:42:12.073 Initializing NVMe Controllers 00:42:12.073 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:12.073 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:12.073 Initialization complete. Launching workers. 00:42:12.073 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8617, failed: 0 00:42:12.073 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1218, failed to submit 7399 00:42:12.073 success 348, unsuccessful 870, failed 0 00:42:12.073 01:52:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:12.073 01:52:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:15.351 Initializing NVMe Controllers 00:42:15.351 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:15.351 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:15.351 Initialization complete. Launching workers. 
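The spdk_target_abort case traced above is driven entirely over the target's RPC socket (rpc_cmd in the harness wraps scripts/rpc.py): the local NVMe device at 0000:88:00.0 is claimed as a bdev inside the target, exported over NVMe/TCP, and the abort example is then run once per queue depth. Condensed, with the values from this run:

# Export the local drive through the SPDK target, then sweep queue depths.
rpc=./scripts/rpc.py                                   # what rpc_cmd resolves to in the harness
$rpc bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
for qd in 4 24 64; do                                  # qds=(4 24 64) in abort_qd_sizes.sh
    ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
done

Each pass prints the "I/O completed / abort submitted / success, unsuccessful, failed" counters that appear in the surrounding records.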
00:42:15.351 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30979, failed: 0 00:42:15.351 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2720, failed to submit 28259 00:42:15.351 success 524, unsuccessful 2196, failed 0 00:42:15.351 01:53:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:42:15.351 01:53:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:15.351 01:53:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:15.351 01:53:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:15.351 01:53:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:42:15.351 01:53:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:15.351 01:53:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:16.283 01:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:16.283 01:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1835169 00:42:16.283 01:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 1835169 ']' 00:42:16.283 01:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 1835169 00:42:16.283 01:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:42:16.283 01:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:16.283 01:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1835169 00:42:16.283 01:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:16.283 01:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:16.283 01:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1835169' 00:42:16.283 killing process with pid 1835169 00:42:16.283 01:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 1835169 00:42:16.283 01:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 1835169 00:42:16.283 00:42:16.283 real 0m14.022s 00:42:16.283 user 0m53.174s 00:42:16.283 sys 0m2.476s 00:42:16.283 01:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:16.283 01:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:16.283 ************************************ 00:42:16.283 END TEST spdk_target_abort 00:42:16.283 ************************************ 00:42:16.283 01:53:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:42:16.283 01:53:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:42:16.283 01:53:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:16.283 01:53:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:16.283 ************************************ 00:42:16.283 START TEST kernel_target_abort 00:42:16.283 
************************************ 00:42:16.283 01:53:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:42:16.283 01:53:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:42:16.283 01:53:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@767 -- # local ip 00:42:16.283 01:53:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates=() 00:42:16.283 01:53:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # local -A ip_candidates 00:42:16.283 01:53:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:16.283 01:53:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:16.283 01:53:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:42:16.283 01:53:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:16.283 01:53:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:42:16.283 01:53:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:42:16.283 01:53:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:42:16.283 01:53:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:42:16.283 01:53:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:42:16.284 01:53:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:42:16.284 01:53:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:16.284 01:53:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:16.284 01:53:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:42:16.284 01:53:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # local block nvme 00:42:16.284 01:53:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:42:16.284 01:53:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # modprobe nvmet 00:42:16.284 01:53:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:42:16.284 01:53:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:17.658 Waiting for block devices as requested 00:42:17.658 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:42:17.658 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:17.658 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:17.915 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:17.915 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:17.915 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:17.915 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:18.173 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:18.173 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:18.173 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:18.173 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:18.431 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:18.431 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:18.431 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:18.431 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:18.689 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:18.689 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:18.689 01:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:42:18.689 01:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:42:18.689 01:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:42:18.689 01:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:42:18.689 01:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:42:18.689 01:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:42:18.689 01:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:42:18.689 01:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:42:18.689 01:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:42:18.947 No valid GPT data, bailing 00:42:18.947 01:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:42:18.947 01:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:42:18.947 01:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:42:18.947 01:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:42:18.948 01:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:42:18.948 01:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:18.948 01:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:18.948 01:53:04 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:42:18.948 01:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:42:18.948 01:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:42:18.948 01:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:42:18.948 01:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:42:18.948 01:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:42:18.948 01:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo tcp 00:42:18.948 01:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 4420 00:42:18.948 01:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo ipv4 00:42:18.948 01:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:42:18.948 01:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:42:18.948 00:42:18.948 Discovery Log Number of Records 2, Generation counter 2 00:42:18.948 =====Discovery Log Entry 0====== 00:42:18.948 trtype: tcp 00:42:18.948 adrfam: ipv4 00:42:18.948 subtype: current discovery subsystem 00:42:18.948 treq: not specified, sq flow control disable supported 00:42:18.948 portid: 1 00:42:18.948 trsvcid: 4420 00:42:18.948 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:42:18.948 traddr: 10.0.0.1 00:42:18.948 eflags: none 00:42:18.948 sectype: none 00:42:18.948 =====Discovery Log Entry 1====== 00:42:18.948 trtype: tcp 00:42:18.948 adrfam: ipv4 00:42:18.948 subtype: nvme subsystem 00:42:18.948 treq: not specified, sq flow control disable supported 00:42:18.948 portid: 1 00:42:18.948 trsvcid: 4420 00:42:18.948 subnqn: nqn.2016-06.io.spdk:testnqn 00:42:18.948 traddr: 10.0.0.1 00:42:18.948 eflags: none 00:42:18.948 sectype: none 00:42:18.948 01:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:42:18.948 01:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:42:18.948 01:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:42:18.948 01:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:42:18.948 01:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:42:18.948 01:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:42:18.948 01:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:42:18.948 01:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:42:18.948 01:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:42:18.948 01:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:18.948 01:53:04 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:42:18.948 01:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:18.948 01:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:42:18.948 01:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:18.948 01:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:42:18.948 01:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:18.948 01:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:42:18.948 01:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:18.948 01:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:18.948 01:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:18.948 01:53:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:22.227 Initializing NVMe Controllers 00:42:22.227 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:22.228 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:22.228 Initialization complete. Launching workers. 00:42:22.228 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 48790, failed: 0 00:42:22.228 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 48790, failed to submit 0 00:42:22.228 success 0, unsuccessful 48790, failed 0 00:42:22.228 01:53:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:22.228 01:53:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:25.534 Initializing NVMe Controllers 00:42:25.534 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:25.534 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:25.534 Initialization complete. Launching workers. 
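The kernel_target_abort case needs no SPDK target process: configure_kernel_target (traced a few records back) builds the same subsystem with the in-kernel nvmet driver, purely through configfs, and the abort sweep is then pointed at 10.0.0.1. The xtrace shows the echo commands but not their redirect targets, so the sketch below fills those in with the standard nvmet attribute names; treat them as an assumption, not a quote from this run:

# In-kernel NVMe/TCP target for the same test NQN, backed by /dev/nvme0n1.
nqn=nqn.2016-06.io.spdk:testnqn
sub=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1
modprobe nvmet nvmet_tcp                               # teardown later removes both modules
mkdir "$sub" "$sub/namespaces/1" "$port"
echo 1            > "$sub/attr_allow_any_host"
echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
echo 1            > "$sub/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/$nqn"                   # expose the subsystem on the port

The nvme discover output a little further up confirms the result: a discovery entry plus the nqn.2016-06.io.spdk:testnqn subsystem listening on 10.0.0.1 port 4420.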
00:42:25.534 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 80739, failed: 0 00:42:25.534 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20366, failed to submit 60373 00:42:25.534 success 0, unsuccessful 20366, failed 0 00:42:25.534 01:53:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:25.534 01:53:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:28.821 Initializing NVMe Controllers 00:42:28.821 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:28.821 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:28.821 Initialization complete. Launching workers. 00:42:28.821 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 84389, failed: 0 00:42:28.821 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21086, failed to submit 63303 00:42:28.821 success 0, unsuccessful 21086, failed 0 00:42:28.821 01:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:42:28.821 01:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:42:28.821 01:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # echo 0 00:42:28.821 01:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:28.821 01:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:28.821 01:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:42:28.821 01:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:28.821 01:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:42:28.821 01:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:42:28.821 01:53:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:29.387 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:29.387 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:29.387 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:29.387 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:29.387 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:29.387 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:29.387 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:42:29.387 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:29.387 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:29.669 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:29.669 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:29.669 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:29.669 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:29.669 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:29.669 0000:80:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:42:29.669 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:30.648 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:42:30.648 00:42:30.648 real 0m14.218s 00:42:30.648 user 0m6.649s 00:42:30.648 sys 0m3.111s 00:42:30.648 01:53:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:30.648 01:53:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:30.648 ************************************ 00:42:30.648 END TEST kernel_target_abort 00:42:30.648 ************************************ 00:42:30.648 01:53:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:42:30.648 01:53:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:42:30.648 01:53:16 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # nvmfcleanup 00:42:30.648 01:53:16 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:42:30.648 01:53:16 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:30.648 01:53:16 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:42:30.648 01:53:16 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:30.648 01:53:16 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:30.648 rmmod nvme_tcp 00:42:30.648 rmmod nvme_fabrics 00:42:30.648 rmmod nvme_keyring 00:42:30.648 01:53:16 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:30.648 01:53:16 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:42:30.648 01:53:16 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:42:30.648 01:53:16 nvmf_abort_qd_sizes -- nvmf/common.sh@515 -- # '[' -n 1835169 ']' 00:42:30.648 01:53:16 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # killprocess 1835169 00:42:30.648 01:53:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 1835169 ']' 00:42:30.648 01:53:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 1835169 00:42:30.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1835169) - No such process 00:42:30.648 01:53:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 1835169 is not found' 00:42:30.648 Process with pid 1835169 is not found 00:42:30.648 01:53:16 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:42:30.648 01:53:16 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:31.582 Waiting for block devices as requested 00:42:31.840 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:42:31.840 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:31.840 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:32.098 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:32.098 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:32.098 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:32.098 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:32.356 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:32.356 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:32.356 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:32.356 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:32.613 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:32.613 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:32.613 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:32.613 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:32.870 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:32.870 0000:80:04.0 
(8086 0e20): vfio-pci -> ioatdma 00:42:32.870 01:53:18 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:42:32.870 01:53:18 nvmf_abort_qd_sizes -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:42:32.870 01:53:18 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:42:32.870 01:53:18 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:42:32.870 01:53:18 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-save 00:42:32.870 01:53:18 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-restore 00:42:32.870 01:53:18 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:32.870 01:53:18 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:32.870 01:53:18 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:32.870 01:53:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:32.870 01:53:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:35.399 01:53:20 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:35.399 00:42:35.399 real 0m37.550s 00:42:35.399 user 1m1.962s 00:42:35.399 sys 0m9.013s 00:42:35.399 01:53:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:35.399 01:53:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:35.399 ************************************ 00:42:35.399 END TEST nvmf_abort_qd_sizes 00:42:35.399 ************************************ 00:42:35.399 01:53:20 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:42:35.399 01:53:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:42:35.399 01:53:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:35.399 01:53:20 -- common/autotest_common.sh@10 -- # set +x 00:42:35.399 ************************************ 00:42:35.399 START TEST keyring_file 00:42:35.399 ************************************ 00:42:35.399 01:53:20 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:42:35.399 * Looking for test storage... 
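nvmftestfini, traced just above, leans on the SPDK_NVMF comment tag to remove only the firewall rules this test added; the namespace removal itself runs with xtrace disabled, so the ip netns delete line below is the inferred effect of _remove_spdk_ns rather than a captured command:

# iptr: drop every rule this test tagged, leave the rest of the ruleset alone.
iptables-save | grep -v SPDK_NVMF | iptables-restore
# _remove_spdk_ns (body not traced) followed by the captured address flush:
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1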
00:42:35.399 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:42:35.399 01:53:20 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:42:35.399 01:53:20 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:42:35.399 01:53:20 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:42:35.399 01:53:20 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:42:35.399 01:53:20 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:35.399 01:53:20 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:35.399 01:53:20 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:35.399 01:53:20 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:42:35.399 01:53:20 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:42:35.399 01:53:20 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:42:35.399 01:53:20 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:42:35.399 01:53:20 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:42:35.399 01:53:20 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:42:35.399 01:53:20 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:42:35.399 01:53:20 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:35.399 01:53:20 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:42:35.399 01:53:20 keyring_file -- scripts/common.sh@345 -- # : 1 00:42:35.399 01:53:20 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:35.399 01:53:20 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:35.399 01:53:20 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:42:35.399 01:53:20 keyring_file -- scripts/common.sh@353 -- # local d=1 00:42:35.399 01:53:20 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:35.399 01:53:20 keyring_file -- scripts/common.sh@355 -- # echo 1 00:42:35.399 01:53:20 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:42:35.399 01:53:20 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:42:35.399 01:53:20 keyring_file -- scripts/common.sh@353 -- # local d=2 00:42:35.399 01:53:20 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:35.399 01:53:20 keyring_file -- scripts/common.sh@355 -- # echo 2 00:42:35.399 01:53:20 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:42:35.399 01:53:20 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:35.399 01:53:20 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:35.399 01:53:20 keyring_file -- scripts/common.sh@368 -- # return 0 00:42:35.399 01:53:20 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:35.399 01:53:20 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:42:35.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:35.399 --rc genhtml_branch_coverage=1 00:42:35.399 --rc genhtml_function_coverage=1 00:42:35.399 --rc genhtml_legend=1 00:42:35.399 --rc geninfo_all_blocks=1 00:42:35.399 --rc geninfo_unexecuted_blocks=1 00:42:35.399 00:42:35.399 ' 00:42:35.399 01:53:20 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:42:35.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:35.399 --rc genhtml_branch_coverage=1 00:42:35.399 --rc genhtml_function_coverage=1 00:42:35.399 --rc genhtml_legend=1 00:42:35.399 --rc geninfo_all_blocks=1 
00:42:35.399 --rc geninfo_unexecuted_blocks=1 00:42:35.399 00:42:35.399 ' 00:42:35.399 01:53:20 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:42:35.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:35.399 --rc genhtml_branch_coverage=1 00:42:35.399 --rc genhtml_function_coverage=1 00:42:35.399 --rc genhtml_legend=1 00:42:35.399 --rc geninfo_all_blocks=1 00:42:35.399 --rc geninfo_unexecuted_blocks=1 00:42:35.399 00:42:35.399 ' 00:42:35.399 01:53:20 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:42:35.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:35.399 --rc genhtml_branch_coverage=1 00:42:35.399 --rc genhtml_function_coverage=1 00:42:35.399 --rc genhtml_legend=1 00:42:35.399 --rc geninfo_all_blocks=1 00:42:35.399 --rc geninfo_unexecuted_blocks=1 00:42:35.399 00:42:35.399 ' 00:42:35.399 01:53:20 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:42:35.399 01:53:20 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:35.399 01:53:20 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:42:35.399 01:53:20 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:35.399 01:53:20 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:35.399 01:53:20 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:35.399 01:53:20 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:35.399 01:53:20 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:35.399 01:53:20 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:35.399 01:53:20 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:35.399 01:53:20 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:35.399 01:53:20 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:35.399 01:53:20 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:35.400 01:53:20 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:35.400 01:53:20 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:35.400 01:53:20 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:35.400 01:53:20 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:35.400 01:53:20 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:35.400 01:53:20 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:35.400 01:53:20 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:35.400 01:53:20 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:42:35.400 01:53:20 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:35.400 01:53:20 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:35.400 01:53:20 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:35.400 01:53:20 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:35.400 01:53:20 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:35.400 01:53:20 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:35.400 01:53:20 keyring_file -- paths/export.sh@5 -- # export PATH 00:42:35.400 01:53:20 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:35.400 01:53:20 keyring_file -- nvmf/common.sh@51 -- # : 0 00:42:35.400 01:53:20 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:35.400 01:53:20 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:35.400 01:53:20 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:35.400 01:53:20 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:35.400 01:53:20 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:35.400 01:53:20 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:35.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:35.400 01:53:20 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:35.400 01:53:20 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:35.400 01:53:20 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:35.400 01:53:20 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:42:35.400 01:53:20 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:42:35.400 01:53:20 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:42:35.400 01:53:20 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:42:35.400 01:53:20 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:42:35.400 01:53:20 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:42:35.400 01:53:20 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:42:35.400 01:53:20 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
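prep_key, which the next records trace for key0 and key1, is how the keyring test materialises its TLS PSKs: each hard-coded hex key is rewritten into the NVMe TLS PSK interchange format (prefix NVMeTLSkey-1) by the inline Python helper behind format_interchange_psk, written to a mktemp file, and locked down to mode 0600 before anything registers it. In outline, with the interchange string left as a placeholder since the actual encoding happens inside that helper:

# prep_key <name> <hexkey> <digest>, in outline (key0 shown).
path=$(mktemp)                                  # /tmp/tmp.QgN7GnUkts in this run
echo 'NVMeTLSkey-1:...:' > "$path"              # placeholder for the generated interchange string
chmod 0600 "$path"                              # locked down before it is registered
key0path=$path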
00:42:35.400 01:53:20 keyring_file -- keyring/common.sh@17 -- # name=key0 00:42:35.400 01:53:20 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:35.400 01:53:20 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:35.400 01:53:20 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:35.400 01:53:20 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.QgN7GnUkts 00:42:35.400 01:53:20 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:35.400 01:53:20 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:35.400 01:53:20 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:42:35.400 01:53:20 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:42:35.400 01:53:20 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:42:35.400 01:53:20 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:42:35.400 01:53:20 keyring_file -- nvmf/common.sh@731 -- # python - 00:42:35.400 01:53:20 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.QgN7GnUkts 00:42:35.400 01:53:20 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.QgN7GnUkts 00:42:35.400 01:53:20 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.QgN7GnUkts 00:42:35.400 01:53:20 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:42:35.400 01:53:20 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:42:35.400 01:53:20 keyring_file -- keyring/common.sh@17 -- # name=key1 00:42:35.400 01:53:20 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:42:35.400 01:53:20 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:35.400 01:53:20 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:35.400 01:53:20 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.5me8QmrWNR 00:42:35.400 01:53:20 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:42:35.400 01:53:20 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:42:35.400 01:53:20 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:42:35.400 01:53:20 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:42:35.400 01:53:20 keyring_file -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:42:35.400 01:53:20 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:42:35.400 01:53:20 keyring_file -- nvmf/common.sh@731 -- # python - 00:42:35.400 01:53:20 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.5me8QmrWNR 00:42:35.400 01:53:20 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.5me8QmrWNR 00:42:35.400 01:53:20 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.5me8QmrWNR 00:42:35.400 01:53:20 keyring_file -- keyring/file.sh@30 -- # tgtpid=1840919 00:42:35.400 01:53:20 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:42:35.400 01:53:20 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1840919 00:42:35.400 01:53:20 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1840919 ']' 00:42:35.400 01:53:20 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:35.400 01:53:20 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:35.400 01:53:20 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:35.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:35.400 01:53:20 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:35.400 01:53:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:35.400 [2024-10-13 01:53:20.761962] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:42:35.400 [2024-10-13 01:53:20.762065] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1840919 ] 00:42:35.400 [2024-10-13 01:53:20.824748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:35.400 [2024-10-13 01:53:20.873760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:35.658 01:53:21 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:35.658 01:53:21 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:42:35.658 01:53:21 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:42:35.658 01:53:21 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:35.658 01:53:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:35.658 [2024-10-13 01:53:21.141096] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:35.658 null0 00:42:35.658 [2024-10-13 01:53:21.173160] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:42:35.658 [2024-10-13 01:53:21.173713] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:42:35.658 01:53:21 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:35.658 01:53:21 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:35.658 01:53:21 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:42:35.658 01:53:21 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:35.658 01:53:21 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:42:35.658 01:53:21 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:35.658 01:53:21 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:42:35.658 01:53:21 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:35.658 01:53:21 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:42:35.658 01:53:21 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:35.658 01:53:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:35.658 [2024-10-13 01:53:21.201194] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:42:35.658 request: 00:42:35.658 { 00:42:35.658 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:42:35.658 "secure_channel": false, 00:42:35.658 "listen_address": { 00:42:35.658 "trtype": "tcp", 00:42:35.658 "traddr": "127.0.0.1", 00:42:35.658 "trsvcid": "4420" 00:42:35.658 }, 00:42:35.658 "method": "nvmf_subsystem_add_listener", 00:42:35.658 "req_id": 1 00:42:35.658 } 00:42:35.658 Got JSON-RPC error response 00:42:35.658 response: 00:42:35.659 { 00:42:35.659 
"code": -32602, 00:42:35.659 "message": "Invalid parameters" 00:42:35.659 } 00:42:35.659 01:53:21 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:42:35.659 01:53:21 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:42:35.659 01:53:21 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:35.659 01:53:21 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:35.659 01:53:21 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:35.659 01:53:21 keyring_file -- keyring/file.sh@47 -- # bperfpid=1840935 00:42:35.659 01:53:21 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:42:35.659 01:53:21 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1840935 /var/tmp/bperf.sock 00:42:35.659 01:53:21 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1840935 ']' 00:42:35.659 01:53:21 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:35.659 01:53:21 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:35.659 01:53:21 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:35.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:35.659 01:53:21 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:35.659 01:53:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:35.917 [2024-10-13 01:53:21.251559] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:42:35.917 [2024-10-13 01:53:21.251639] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1840935 ] 00:42:35.917 [2024-10-13 01:53:21.312429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:35.917 [2024-10-13 01:53:21.362176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:35.917 01:53:21 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:35.917 01:53:21 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:42:35.917 01:53:21 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.QgN7GnUkts 00:42:35.917 01:53:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.QgN7GnUkts 00:42:36.481 01:53:21 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.5me8QmrWNR 00:42:36.481 01:53:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.5me8QmrWNR 00:42:36.481 01:53:22 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:42:36.481 01:53:22 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:42:36.481 01:53:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:36.481 01:53:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:36.481 01:53:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:42:36.739 01:53:22 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.QgN7GnUkts == \/\t\m\p\/\t\m\p\.\Q\g\N\7\G\n\U\k\t\s ]] 00:42:36.739 01:53:22 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:42:36.739 01:53:22 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:42:36.739 01:53:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:36.739 01:53:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:36.739 01:53:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:37.304 01:53:22 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.5me8QmrWNR == \/\t\m\p\/\t\m\p\.\5\m\e\8\Q\m\r\W\N\R ]] 00:42:37.304 01:53:22 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:42:37.304 01:53:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:37.304 01:53:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:37.304 01:53:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:37.304 01:53:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:37.304 01:53:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:37.304 01:53:22 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:42:37.304 01:53:22 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:42:37.304 01:53:22 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:37.304 01:53:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:37.304 01:53:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:37.304 01:53:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:37.304 01:53:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:37.562 01:53:23 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:42:37.562 01:53:23 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:37.562 01:53:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:37.819 [2024-10-13 01:53:23.391857] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:38.076 nvme0n1 00:42:38.076 01:53:23 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:42:38.076 01:53:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:38.076 01:53:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:38.076 01:53:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:38.076 01:53:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:38.076 01:53:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:38.334 01:53:23 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:42:38.334 01:53:23 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:42:38.334 01:53:23 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:42:38.334 01:53:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:38.334 01:53:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:38.334 01:53:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:38.334 01:53:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:38.591 01:53:24 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:42:38.591 01:53:24 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:38.591 Running I/O for 1 seconds... 00:42:39.963 9562.00 IOPS, 37.35 MiB/s 00:42:39.963 Latency(us) 00:42:39.963 [2024-10-12T23:53:25.541Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:39.963 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:42:39.963 nvme0n1 : 1.01 9614.17 37.56 0.00 0.00 13272.78 4490.43 21068.61 00:42:39.963 [2024-10-12T23:53:25.541Z] =================================================================================================================== 00:42:39.963 [2024-10-12T23:53:25.541Z] Total : 9614.17 37.56 0.00 0.00 13272.78 4490.43 21068.61 00:42:39.963 { 00:42:39.963 "results": [ 00:42:39.963 { 00:42:39.963 "job": "nvme0n1", 00:42:39.963 "core_mask": "0x2", 00:42:39.963 "workload": "randrw", 00:42:39.963 "percentage": 50, 00:42:39.963 "status": "finished", 00:42:39.963 "queue_depth": 128, 00:42:39.963 "io_size": 4096, 00:42:39.963 "runtime": 1.007887, 00:42:39.963 "iops": 9614.173017411675, 00:42:39.963 "mibps": 37.555363349264354, 00:42:39.963 "io_failed": 0, 00:42:39.963 "io_timeout": 0, 00:42:39.963 "avg_latency_us": 13272.775903680773, 00:42:39.963 "min_latency_us": 4490.42962962963, 00:42:39.963 "max_latency_us": 21068.61037037037 00:42:39.963 } 00:42:39.963 ], 00:42:39.963 "core_count": 1 00:42:39.963 } 00:42:39.963 01:53:25 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:39.963 01:53:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:39.963 01:53:25 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:42:39.963 01:53:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:39.963 01:53:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:39.963 01:53:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:39.963 01:53:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:39.963 01:53:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:40.221 01:53:25 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:42:40.221 01:53:25 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:42:40.221 01:53:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:40.221 01:53:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:40.221 01:53:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:40.221 01:53:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:40.221 01:53:25 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:40.479 01:53:25 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:42:40.479 01:53:25 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:40.479 01:53:25 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:42:40.479 01:53:25 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:40.479 01:53:25 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:42:40.479 01:53:25 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:40.479 01:53:25 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:42:40.479 01:53:25 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:40.479 01:53:25 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:40.479 01:53:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:42:40.736 [2024-10-13 01:53:26.242216] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:40.736 [2024-10-13 01:53:26.242868] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2364c00 (107): Transport endpoint is not connected 00:42:40.736 [2024-10-13 01:53:26.243863] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2364c00 (9): Bad file descriptor 00:42:40.736 [2024-10-13 01:53:26.244861] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:42:40.736 [2024-10-13 01:53:26.244884] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:42:40.736 [2024-10-13 01:53:26.244900] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:42:40.736 [2024-10-13 01:53:26.244917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:42:40.736 request: 00:42:40.736 { 00:42:40.736 "name": "nvme0", 00:42:40.736 "trtype": "tcp", 00:42:40.736 "traddr": "127.0.0.1", 00:42:40.736 "adrfam": "ipv4", 00:42:40.736 "trsvcid": "4420", 00:42:40.736 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:40.736 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:40.736 "prchk_reftag": false, 00:42:40.736 "prchk_guard": false, 00:42:40.736 "hdgst": false, 00:42:40.736 "ddgst": false, 00:42:40.736 "psk": "key1", 00:42:40.736 "allow_unrecognized_csi": false, 00:42:40.736 "method": "bdev_nvme_attach_controller", 00:42:40.736 "req_id": 1 00:42:40.736 } 00:42:40.736 Got JSON-RPC error response 00:42:40.736 response: 00:42:40.736 { 00:42:40.736 "code": -5, 00:42:40.736 "message": "Input/output error" 00:42:40.736 } 00:42:40.736 01:53:26 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:42:40.736 01:53:26 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:40.736 01:53:26 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:40.736 01:53:26 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:40.736 01:53:26 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:42:40.736 01:53:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:40.736 01:53:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:40.736 01:53:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:40.736 01:53:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:40.736 01:53:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:40.994 01:53:26 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:42:40.994 01:53:26 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:42:40.994 01:53:26 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:40.994 01:53:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:40.994 01:53:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:40.994 01:53:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:40.994 01:53:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:41.251 01:53:26 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:42:41.251 01:53:26 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:42:41.251 01:53:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:41.509 01:53:27 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:42:41.509 01:53:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:42:42.075 01:53:27 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:42:42.075 01:53:27 keyring_file -- keyring/file.sh@78 -- # jq length 00:42:42.075 01:53:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:42.075 01:53:27 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:42:42.075 01:53:27 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.QgN7GnUkts 00:42:42.075 01:53:27 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.QgN7GnUkts 00:42:42.075 01:53:27 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:42:42.075 01:53:27 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.QgN7GnUkts 00:42:42.075 01:53:27 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:42:42.075 01:53:27 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:42.075 01:53:27 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:42:42.075 01:53:27 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:42.075 01:53:27 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.QgN7GnUkts 00:42:42.075 01:53:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.QgN7GnUkts 00:42:42.332 [2024-10-13 01:53:27.893777] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.QgN7GnUkts': 0100660 00:42:42.332 [2024-10-13 01:53:27.893816] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:42:42.332 request: 00:42:42.332 { 00:42:42.332 "name": "key0", 00:42:42.332 "path": "/tmp/tmp.QgN7GnUkts", 00:42:42.332 "method": "keyring_file_add_key", 00:42:42.332 "req_id": 1 00:42:42.332 } 00:42:42.332 Got JSON-RPC error response 00:42:42.332 response: 00:42:42.332 { 00:42:42.332 "code": -1, 00:42:42.332 "message": "Operation not permitted" 00:42:42.332 } 00:42:42.589 01:53:27 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:42:42.589 01:53:27 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:42.589 01:53:27 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:42.589 01:53:27 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:42.589 01:53:27 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.QgN7GnUkts 00:42:42.589 01:53:27 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.QgN7GnUkts 00:42:42.589 01:53:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.QgN7GnUkts 00:42:42.846 01:53:28 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.QgN7GnUkts 00:42:42.846 01:53:28 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:42:42.846 01:53:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:42.846 01:53:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:42.846 01:53:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:42.846 01:53:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:42.846 01:53:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:43.104 01:53:28 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:42:43.104 01:53:28 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:43.104 01:53:28 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:42:43.104 01:53:28 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:43.104 01:53:28 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:42:43.104 01:53:28 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:43.104 01:53:28 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:42:43.104 01:53:28 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:43.104 01:53:28 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:43.104 01:53:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:43.361 [2024-10-13 01:53:28.728083] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.QgN7GnUkts': No such file or directory 00:42:43.361 [2024-10-13 01:53:28.728121] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:42:43.361 [2024-10-13 01:53:28.728149] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:42:43.361 [2024-10-13 01:53:28.728164] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:42:43.361 [2024-10-13 01:53:28.728179] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:42:43.361 [2024-10-13 01:53:28.728192] bdev_nvme.c:6438:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:42:43.361 request: 00:42:43.361 { 00:42:43.361 "name": "nvme0", 00:42:43.361 "trtype": "tcp", 00:42:43.361 "traddr": "127.0.0.1", 00:42:43.361 "adrfam": "ipv4", 00:42:43.361 "trsvcid": "4420", 00:42:43.361 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:43.361 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:43.361 "prchk_reftag": false, 00:42:43.361 "prchk_guard": false, 00:42:43.361 "hdgst": false, 00:42:43.361 "ddgst": false, 00:42:43.361 "psk": "key0", 00:42:43.361 "allow_unrecognized_csi": false, 00:42:43.361 "method": "bdev_nvme_attach_controller", 00:42:43.361 "req_id": 1 00:42:43.361 } 00:42:43.361 Got JSON-RPC error response 00:42:43.361 response: 00:42:43.361 { 00:42:43.361 "code": -19, 00:42:43.361 "message": "No such device" 00:42:43.361 } 00:42:43.361 01:53:28 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:42:43.361 01:53:28 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:43.361 01:53:28 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:43.361 01:53:28 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:43.361 01:53:28 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:42:43.361 01:53:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:43.619 01:53:29 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:42:43.619 01:53:29 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:42:43.619 01:53:29 keyring_file -- keyring/common.sh@17 -- # name=key0 00:42:43.619 01:53:29 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:43.619 01:53:29 keyring_file -- keyring/common.sh@17 -- # digest=0 00:42:43.619 01:53:29 keyring_file -- keyring/common.sh@18 -- # mktemp 00:42:43.619 01:53:29 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.B7KR9wEw3R 00:42:43.619 01:53:29 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:43.619 01:53:29 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:43.619 01:53:29 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:42:43.619 01:53:29 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:42:43.619 01:53:29 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:42:43.619 01:53:29 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:42:43.619 01:53:29 keyring_file -- nvmf/common.sh@731 -- # python - 00:42:43.619 01:53:29 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.B7KR9wEw3R 00:42:43.619 01:53:29 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.B7KR9wEw3R 00:42:43.619 01:53:29 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.B7KR9wEw3R 00:42:43.619 01:53:29 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.B7KR9wEw3R 00:42:43.619 01:53:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.B7KR9wEw3R 00:42:43.877 01:53:29 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:43.877 01:53:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:44.134 nvme0n1 00:42:44.134 01:53:29 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:42:44.134 01:53:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:44.134 01:53:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:44.134 01:53:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:44.134 01:53:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:44.134 01:53:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:44.392 01:53:29 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:42:44.392 01:53:29 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:42:44.392 01:53:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:42:44.958 01:53:30 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:42:44.958 01:53:30 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:42:44.958 01:53:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:44.958 01:53:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:42:44.958 01:53:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:44.958 01:53:30 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:42:44.958 01:53:30 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:42:44.958 01:53:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:44.958 01:53:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:44.958 01:53:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:44.958 01:53:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:44.958 01:53:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:45.216 01:53:30 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:42:45.216 01:53:30 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:45.216 01:53:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:45.782 01:53:31 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:42:45.782 01:53:31 keyring_file -- keyring/file.sh@105 -- # jq length 00:42:45.782 01:53:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:45.782 01:53:31 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:42:45.782 01:53:31 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.B7KR9wEw3R 00:42:45.782 01:53:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.B7KR9wEw3R 00:42:46.040 01:53:31 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.5me8QmrWNR 00:42:46.040 01:53:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.5me8QmrWNR 00:42:46.298 01:53:31 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:46.298 01:53:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:42:46.863 nvme0n1 00:42:46.863 01:53:32 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:42:46.863 01:53:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:42:47.121 01:53:32 keyring_file -- keyring/file.sh@113 -- # config='{ 00:42:47.121 "subsystems": [ 00:42:47.121 { 00:42:47.121 "subsystem": "keyring", 00:42:47.121 "config": [ 00:42:47.121 { 00:42:47.121 "method": "keyring_file_add_key", 00:42:47.121 "params": { 00:42:47.121 "name": "key0", 00:42:47.121 "path": "/tmp/tmp.B7KR9wEw3R" 00:42:47.121 } 00:42:47.121 }, 00:42:47.121 { 00:42:47.121 "method": "keyring_file_add_key", 00:42:47.121 "params": { 00:42:47.121 "name": "key1", 00:42:47.121 "path": "/tmp/tmp.5me8QmrWNR" 00:42:47.121 } 00:42:47.121 } 00:42:47.121 ] 
00:42:47.121 }, 00:42:47.121 { 00:42:47.121 "subsystem": "iobuf", 00:42:47.121 "config": [ 00:42:47.121 { 00:42:47.121 "method": "iobuf_set_options", 00:42:47.121 "params": { 00:42:47.121 "small_pool_count": 8192, 00:42:47.121 "large_pool_count": 1024, 00:42:47.121 "small_bufsize": 8192, 00:42:47.121 "large_bufsize": 135168 00:42:47.121 } 00:42:47.121 } 00:42:47.121 ] 00:42:47.121 }, 00:42:47.121 { 00:42:47.121 "subsystem": "sock", 00:42:47.121 "config": [ 00:42:47.121 { 00:42:47.121 "method": "sock_set_default_impl", 00:42:47.121 "params": { 00:42:47.121 "impl_name": "posix" 00:42:47.121 } 00:42:47.121 }, 00:42:47.121 { 00:42:47.121 "method": "sock_impl_set_options", 00:42:47.121 "params": { 00:42:47.121 "impl_name": "ssl", 00:42:47.121 "recv_buf_size": 4096, 00:42:47.121 "send_buf_size": 4096, 00:42:47.121 "enable_recv_pipe": true, 00:42:47.121 "enable_quickack": false, 00:42:47.121 "enable_placement_id": 0, 00:42:47.121 "enable_zerocopy_send_server": true, 00:42:47.121 "enable_zerocopy_send_client": false, 00:42:47.121 "zerocopy_threshold": 0, 00:42:47.121 "tls_version": 0, 00:42:47.121 "enable_ktls": false 00:42:47.121 } 00:42:47.121 }, 00:42:47.121 { 00:42:47.121 "method": "sock_impl_set_options", 00:42:47.121 "params": { 00:42:47.121 "impl_name": "posix", 00:42:47.121 "recv_buf_size": 2097152, 00:42:47.121 "send_buf_size": 2097152, 00:42:47.121 "enable_recv_pipe": true, 00:42:47.121 "enable_quickack": false, 00:42:47.121 "enable_placement_id": 0, 00:42:47.121 "enable_zerocopy_send_server": true, 00:42:47.121 "enable_zerocopy_send_client": false, 00:42:47.121 "zerocopy_threshold": 0, 00:42:47.121 "tls_version": 0, 00:42:47.121 "enable_ktls": false 00:42:47.121 } 00:42:47.121 } 00:42:47.121 ] 00:42:47.121 }, 00:42:47.121 { 00:42:47.121 "subsystem": "vmd", 00:42:47.121 "config": [] 00:42:47.121 }, 00:42:47.121 { 00:42:47.121 "subsystem": "accel", 00:42:47.121 "config": [ 00:42:47.121 { 00:42:47.121 "method": "accel_set_options", 00:42:47.121 "params": { 00:42:47.121 "small_cache_size": 128, 00:42:47.121 "large_cache_size": 16, 00:42:47.121 "task_count": 2048, 00:42:47.121 "sequence_count": 2048, 00:42:47.121 "buf_count": 2048 00:42:47.121 } 00:42:47.121 } 00:42:47.121 ] 00:42:47.121 }, 00:42:47.121 { 00:42:47.121 "subsystem": "bdev", 00:42:47.121 "config": [ 00:42:47.121 { 00:42:47.121 "method": "bdev_set_options", 00:42:47.121 "params": { 00:42:47.121 "bdev_io_pool_size": 65535, 00:42:47.121 "bdev_io_cache_size": 256, 00:42:47.121 "bdev_auto_examine": true, 00:42:47.121 "iobuf_small_cache_size": 128, 00:42:47.121 "iobuf_large_cache_size": 16 00:42:47.121 } 00:42:47.121 }, 00:42:47.121 { 00:42:47.121 "method": "bdev_raid_set_options", 00:42:47.121 "params": { 00:42:47.121 "process_window_size_kb": 1024, 00:42:47.121 "process_max_bandwidth_mb_sec": 0 00:42:47.121 } 00:42:47.121 }, 00:42:47.121 { 00:42:47.121 "method": "bdev_iscsi_set_options", 00:42:47.121 "params": { 00:42:47.121 "timeout_sec": 30 00:42:47.121 } 00:42:47.121 }, 00:42:47.121 { 00:42:47.121 "method": "bdev_nvme_set_options", 00:42:47.121 "params": { 00:42:47.121 "action_on_timeout": "none", 00:42:47.121 "timeout_us": 0, 00:42:47.121 "timeout_admin_us": 0, 00:42:47.121 "keep_alive_timeout_ms": 10000, 00:42:47.121 "arbitration_burst": 0, 00:42:47.121 "low_priority_weight": 0, 00:42:47.121 "medium_priority_weight": 0, 00:42:47.121 "high_priority_weight": 0, 00:42:47.121 "nvme_adminq_poll_period_us": 10000, 00:42:47.121 "nvme_ioq_poll_period_us": 0, 00:42:47.121 "io_queue_requests": 512, 00:42:47.121 "delay_cmd_submit": true, 
00:42:47.121 "transport_retry_count": 4, 00:42:47.121 "bdev_retry_count": 3, 00:42:47.121 "transport_ack_timeout": 0, 00:42:47.121 "ctrlr_loss_timeout_sec": 0, 00:42:47.122 "reconnect_delay_sec": 0, 00:42:47.122 "fast_io_fail_timeout_sec": 0, 00:42:47.122 "disable_auto_failback": false, 00:42:47.122 "generate_uuids": false, 00:42:47.122 "transport_tos": 0, 00:42:47.122 "nvme_error_stat": false, 00:42:47.122 "rdma_srq_size": 0, 00:42:47.122 "io_path_stat": false, 00:42:47.122 "allow_accel_sequence": false, 00:42:47.122 "rdma_max_cq_size": 0, 00:42:47.122 "rdma_cm_event_timeout_ms": 0, 00:42:47.122 "dhchap_digests": [ 00:42:47.122 "sha256", 00:42:47.122 "sha384", 00:42:47.122 "sha512" 00:42:47.122 ], 00:42:47.122 "dhchap_dhgroups": [ 00:42:47.122 "null", 00:42:47.122 "ffdhe2048", 00:42:47.122 "ffdhe3072", 00:42:47.122 "ffdhe4096", 00:42:47.122 "ffdhe6144", 00:42:47.122 "ffdhe8192" 00:42:47.122 ] 00:42:47.122 } 00:42:47.122 }, 00:42:47.122 { 00:42:47.122 "method": "bdev_nvme_attach_controller", 00:42:47.122 "params": { 00:42:47.122 "name": "nvme0", 00:42:47.122 "trtype": "TCP", 00:42:47.122 "adrfam": "IPv4", 00:42:47.122 "traddr": "127.0.0.1", 00:42:47.122 "trsvcid": "4420", 00:42:47.122 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:47.122 "prchk_reftag": false, 00:42:47.122 "prchk_guard": false, 00:42:47.122 "ctrlr_loss_timeout_sec": 0, 00:42:47.122 "reconnect_delay_sec": 0, 00:42:47.122 "fast_io_fail_timeout_sec": 0, 00:42:47.122 "psk": "key0", 00:42:47.122 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:47.122 "hdgst": false, 00:42:47.122 "ddgst": false, 00:42:47.122 "multipath": "multipath" 00:42:47.122 } 00:42:47.122 }, 00:42:47.122 { 00:42:47.122 "method": "bdev_nvme_set_hotplug", 00:42:47.122 "params": { 00:42:47.122 "period_us": 100000, 00:42:47.122 "enable": false 00:42:47.122 } 00:42:47.122 }, 00:42:47.122 { 00:42:47.122 "method": "bdev_wait_for_examine" 00:42:47.122 } 00:42:47.122 ] 00:42:47.122 }, 00:42:47.122 { 00:42:47.122 "subsystem": "nbd", 00:42:47.122 "config": [] 00:42:47.122 } 00:42:47.122 ] 00:42:47.122 }' 00:42:47.122 01:53:32 keyring_file -- keyring/file.sh@115 -- # killprocess 1840935 00:42:47.122 01:53:32 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1840935 ']' 00:42:47.122 01:53:32 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1840935 00:42:47.122 01:53:32 keyring_file -- common/autotest_common.sh@955 -- # uname 00:42:47.122 01:53:32 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:47.122 01:53:32 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1840935 00:42:47.122 01:53:32 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:42:47.122 01:53:32 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:42:47.122 01:53:32 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1840935' 00:42:47.122 killing process with pid 1840935 00:42:47.122 01:53:32 keyring_file -- common/autotest_common.sh@969 -- # kill 1840935 00:42:47.122 Received shutdown signal, test time was about 1.000000 seconds 00:42:47.122 00:42:47.122 Latency(us) 00:42:47.122 [2024-10-12T23:53:32.700Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:47.122 [2024-10-12T23:53:32.700Z] =================================================================================================================== 00:42:47.122 [2024-10-12T23:53:32.700Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:47.122 01:53:32 keyring_file -- 
common/autotest_common.sh@974 -- # wait 1840935 00:42:47.380 01:53:32 keyring_file -- keyring/file.sh@118 -- # bperfpid=1842395 00:42:47.380 01:53:32 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1842395 /var/tmp/bperf.sock 00:42:47.380 01:53:32 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1842395 ']' 00:42:47.380 01:53:32 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:47.380 01:53:32 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:42:47.380 01:53:32 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:47.380 01:53:32 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:47.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:47.380 01:53:32 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:42:47.380 "subsystems": [ 00:42:47.380 { 00:42:47.380 "subsystem": "keyring", 00:42:47.380 "config": [ 00:42:47.380 { 00:42:47.380 "method": "keyring_file_add_key", 00:42:47.380 "params": { 00:42:47.380 "name": "key0", 00:42:47.380 "path": "/tmp/tmp.B7KR9wEw3R" 00:42:47.380 } 00:42:47.380 }, 00:42:47.380 { 00:42:47.380 "method": "keyring_file_add_key", 00:42:47.380 "params": { 00:42:47.380 "name": "key1", 00:42:47.380 "path": "/tmp/tmp.5me8QmrWNR" 00:42:47.380 } 00:42:47.380 } 00:42:47.380 ] 00:42:47.380 }, 00:42:47.380 { 00:42:47.380 "subsystem": "iobuf", 00:42:47.381 "config": [ 00:42:47.381 { 00:42:47.381 "method": "iobuf_set_options", 00:42:47.381 "params": { 00:42:47.381 "small_pool_count": 8192, 00:42:47.381 "large_pool_count": 1024, 00:42:47.381 "small_bufsize": 8192, 00:42:47.381 "large_bufsize": 135168 00:42:47.381 } 00:42:47.381 } 00:42:47.381 ] 00:42:47.381 }, 00:42:47.381 { 00:42:47.381 "subsystem": "sock", 00:42:47.381 "config": [ 00:42:47.381 { 00:42:47.381 "method": "sock_set_default_impl", 00:42:47.381 "params": { 00:42:47.381 "impl_name": "posix" 00:42:47.381 } 00:42:47.381 }, 00:42:47.381 { 00:42:47.381 "method": "sock_impl_set_options", 00:42:47.381 "params": { 00:42:47.381 "impl_name": "ssl", 00:42:47.381 "recv_buf_size": 4096, 00:42:47.381 "send_buf_size": 4096, 00:42:47.381 "enable_recv_pipe": true, 00:42:47.381 "enable_quickack": false, 00:42:47.381 "enable_placement_id": 0, 00:42:47.381 "enable_zerocopy_send_server": true, 00:42:47.381 "enable_zerocopy_send_client": false, 00:42:47.381 "zerocopy_threshold": 0, 00:42:47.381 "tls_version": 0, 00:42:47.381 "enable_ktls": false 00:42:47.381 } 00:42:47.381 }, 00:42:47.381 { 00:42:47.381 "method": "sock_impl_set_options", 00:42:47.381 "params": { 00:42:47.381 "impl_name": "posix", 00:42:47.381 "recv_buf_size": 2097152, 00:42:47.381 "send_buf_size": 2097152, 00:42:47.381 "enable_recv_pipe": true, 00:42:47.381 "enable_quickack": false, 00:42:47.381 "enable_placement_id": 0, 00:42:47.381 "enable_zerocopy_send_server": true, 00:42:47.381 "enable_zerocopy_send_client": false, 00:42:47.381 "zerocopy_threshold": 0, 00:42:47.381 "tls_version": 0, 00:42:47.381 "enable_ktls": false 00:42:47.381 } 00:42:47.381 } 00:42:47.381 ] 00:42:47.381 }, 00:42:47.381 { 00:42:47.381 "subsystem": "vmd", 00:42:47.381 "config": [] 00:42:47.381 }, 00:42:47.381 { 00:42:47.381 "subsystem": "accel", 00:42:47.381 "config": [ 00:42:47.381 { 00:42:47.381 "method": "accel_set_options", 
00:42:47.381 "params": { 00:42:47.381 "small_cache_size": 128, 00:42:47.381 "large_cache_size": 16, 00:42:47.381 "task_count": 2048, 00:42:47.381 "sequence_count": 2048, 00:42:47.381 "buf_count": 2048 00:42:47.381 } 00:42:47.381 } 00:42:47.381 ] 00:42:47.381 }, 00:42:47.381 { 00:42:47.381 "subsystem": "bdev", 00:42:47.381 "config": [ 00:42:47.381 { 00:42:47.381 "method": "bdev_set_options", 00:42:47.381 "params": { 00:42:47.381 "bdev_io_pool_size": 65535, 00:42:47.381 "bdev_io_cache_size": 256, 00:42:47.381 "bdev_auto_examine": true, 00:42:47.381 "iobuf_small_cache_size": 128, 00:42:47.381 "iobuf_large_cache_size": 16 00:42:47.381 } 00:42:47.381 }, 00:42:47.381 { 00:42:47.381 "method": "bdev_raid_set_options", 00:42:47.381 "params": { 00:42:47.381 "process_window_size_kb": 1024, 00:42:47.381 "process_max_bandwidth_mb_sec": 0 00:42:47.381 } 00:42:47.381 }, 00:42:47.381 { 00:42:47.381 "method": "bdev_iscsi_set_options", 00:42:47.381 "params": { 00:42:47.381 "timeout_sec": 30 00:42:47.381 } 00:42:47.381 }, 00:42:47.381 { 00:42:47.381 "method": "bdev_nvme_set_options", 00:42:47.381 "params": { 00:42:47.381 "action_on_timeout": "none", 00:42:47.381 "timeout_us": 0, 00:42:47.381 "timeout_admin_us": 0, 00:42:47.381 "keep_alive_timeout_ms": 10000, 00:42:47.381 "arbitration_burst": 0, 00:42:47.381 "low_priority_weight": 0, 00:42:47.381 "medium_priority_weight": 0, 00:42:47.381 "high_priority_weight": 0, 00:42:47.381 "nvme_adminq_poll_period_us": 10000, 00:42:47.381 "nvme_ioq_poll_period_us": 0, 00:42:47.381 "io_queue_requests": 512, 00:42:47.381 "delay_cmd_submit": true, 00:42:47.381 "transport_retry_count": 4, 00:42:47.381 "bdev_retry_count": 3, 00:42:47.381 "transport_ack_timeout": 0, 00:42:47.381 "ctrlr_loss_timeout_sec": 0, 00:42:47.381 "reconnect_delay_sec": 0, 00:42:47.381 "fast_io_fail_timeout_sec": 0, 00:42:47.381 "disable_auto_failback": false, 00:42:47.381 "generate_uuids": false, 00:42:47.381 "transport_tos": 0, 00:42:47.381 "nvme_error_stat": false, 00:42:47.381 "rdma_srq_size": 0, 00:42:47.381 "io_path_stat": false, 00:42:47.381 "allow_accel_sequence": false, 00:42:47.381 "rdma_max_cq_size": 0, 00:42:47.381 "rdma_cm_event_timeout_ms": 0, 00:42:47.381 "dhchap_digests": [ 00:42:47.381 "sha256", 00:42:47.381 "sha384", 00:42:47.381 "sha512" 00:42:47.381 ], 00:42:47.381 "dhchap_dhgroups": [ 00:42:47.381 "null", 00:42:47.381 "ffdhe2048", 00:42:47.381 "ffdhe3072", 00:42:47.381 "ffdhe4096", 00:42:47.381 "ffdhe6144", 00:42:47.381 "ffdhe8192" 00:42:47.381 ] 00:42:47.381 } 00:42:47.381 }, 00:42:47.381 { 00:42:47.381 "method": "bdev_nvme_attach_controller", 00:42:47.381 "params": { 00:42:47.381 "name": "nvme0", 00:42:47.381 "trtype": "TCP", 00:42:47.381 "adrfam": "IPv4", 00:42:47.381 "traddr": "127.0.0.1", 00:42:47.381 "trsvcid": "4420", 00:42:47.381 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:47.381 "prchk_reftag": false, 00:42:47.381 "prchk_guard": false, 00:42:47.381 "ctrlr_loss_timeout_sec": 0, 00:42:47.381 "reconnect_delay_sec": 0, 00:42:47.381 "fast_io_fail_timeout_sec": 0, 00:42:47.381 "psk": "key0", 00:42:47.381 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:47.381 "hdgst": false, 00:42:47.381 "ddgst": false, 00:42:47.381 "multipath": "multipath" 00:42:47.381 } 00:42:47.381 }, 00:42:47.381 { 00:42:47.381 "method": "bdev_nvme_set_hotplug", 00:42:47.381 "params": { 00:42:47.381 "period_us": 100000, 00:42:47.381 "enable": false 00:42:47.381 } 00:42:47.381 }, 00:42:47.381 { 00:42:47.381 "method": "bdev_wait_for_examine" 00:42:47.381 } 00:42:47.381 ] 00:42:47.381 }, 00:42:47.381 { 00:42:47.381 
"subsystem": "nbd", 00:42:47.381 "config": [] 00:42:47.381 } 00:42:47.381 ] 00:42:47.381 }' 00:42:47.381 01:53:32 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:47.381 01:53:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:47.381 [2024-10-13 01:53:32.781676] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 00:42:47.381 [2024-10-13 01:53:32.781775] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1842395 ] 00:42:47.381 [2024-10-13 01:53:32.841453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:47.381 [2024-10-13 01:53:32.888863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:47.640 [2024-10-13 01:53:33.076353] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:47.640 01:53:33 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:47.640 01:53:33 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:42:47.640 01:53:33 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:42:47.640 01:53:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:47.640 01:53:33 keyring_file -- keyring/file.sh@121 -- # jq length 00:42:47.897 01:53:33 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:42:47.897 01:53:33 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:42:47.897 01:53:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:42:47.897 01:53:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:47.897 01:53:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:47.897 01:53:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:47.897 01:53:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:42:48.155 01:53:33 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:42:48.155 01:53:33 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:42:48.155 01:53:33 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:42:48.155 01:53:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:42:48.155 01:53:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:48.155 01:53:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:48.155 01:53:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:42:48.720 01:53:34 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:42:48.720 01:53:34 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:42:48.720 01:53:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:42:48.720 01:53:34 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:42:48.720 01:53:34 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:42:48.720 01:53:34 keyring_file -- keyring/file.sh@1 -- # cleanup 00:42:48.720 01:53:34 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.B7KR9wEw3R 
/tmp/tmp.5me8QmrWNR 00:42:48.720 01:53:34 keyring_file -- keyring/file.sh@20 -- # killprocess 1842395 00:42:48.720 01:53:34 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1842395 ']' 00:42:48.720 01:53:34 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1842395 00:42:48.720 01:53:34 keyring_file -- common/autotest_common.sh@955 -- # uname 00:42:48.720 01:53:34 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:48.720 01:53:34 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1842395 00:42:48.978 01:53:34 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:42:48.979 01:53:34 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:42:48.979 01:53:34 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1842395' 00:42:48.979 killing process with pid 1842395 00:42:48.979 01:53:34 keyring_file -- common/autotest_common.sh@969 -- # kill 1842395 00:42:48.979 Received shutdown signal, test time was about 1.000000 seconds 00:42:48.979 00:42:48.979 Latency(us) 00:42:48.979 [2024-10-12T23:53:34.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:48.979 [2024-10-12T23:53:34.557Z] =================================================================================================================== 00:42:48.979 [2024-10-12T23:53:34.557Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:42:48.979 01:53:34 keyring_file -- common/autotest_common.sh@974 -- # wait 1842395 00:42:48.979 01:53:34 keyring_file -- keyring/file.sh@21 -- # killprocess 1840919 00:42:48.979 01:53:34 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1840919 ']' 00:42:48.979 01:53:34 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1840919 00:42:48.979 01:53:34 keyring_file -- common/autotest_common.sh@955 -- # uname 00:42:48.979 01:53:34 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:48.979 01:53:34 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1840919 00:42:48.979 01:53:34 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:48.979 01:53:34 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:48.979 01:53:34 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1840919' 00:42:48.979 killing process with pid 1840919 00:42:48.979 01:53:34 keyring_file -- common/autotest_common.sh@969 -- # kill 1840919 00:42:48.979 01:53:34 keyring_file -- common/autotest_common.sh@974 -- # wait 1840919 00:42:49.544 00:42:49.544 real 0m14.437s 00:42:49.544 user 0m37.073s 00:42:49.544 sys 0m3.105s 00:42:49.544 01:53:34 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:49.544 01:53:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:42:49.544 ************************************ 00:42:49.544 END TEST keyring_file 00:42:49.544 ************************************ 00:42:49.544 01:53:34 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:42:49.544 01:53:34 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:42:49.544 01:53:34 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:42:49.544 01:53:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:49.544 01:53:34 -- common/autotest_common.sh@10 -- # set +x 
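Before the keyring_linux output starts, a condensed, hedged recap of the keyring_file flow that just finished (a sketch, not log output): socket paths, NQNs and key names are the ones used in this run, and rpc.py / bdevperf.py are assumed to be invoked from the SPDK tree as above. The permission gate is the behaviour exercised earlier, where a 0660 key file was rejected with "Operation not permitted" and only the 0600 variant was accepted.

    # 1) key files must be mode 0600 before they can be registered
    chmod 0600 /tmp/tmp.B7KR9wEw3R
    rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.B7KR9wEw3R
    # 2) attaching an NVMe/TCP controller with --psk takes an extra reference on the key (refcnt 1 -> 2)
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
    # 3) I/O is then driven through the attached bdev
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
    # 4) detaching the controller releases the reference (refcnt drops back to 1)
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0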
00:42:49.544 ************************************ 00:42:49.544 START TEST keyring_linux 00:42:49.544 ************************************ 00:42:49.544 01:53:34 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:42:49.544 Joined session keyring: 688515238 00:42:49.544 * Looking for test storage... 00:42:49.544 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:42:49.544 01:53:35 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:42:49.544 01:53:35 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:42:49.544 01:53:35 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:42:49.544 01:53:35 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:42:49.544 01:53:35 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:49.544 01:53:35 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:49.544 01:53:35 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:49.544 01:53:35 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:42:49.544 01:53:35 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:42:49.544 01:53:35 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:42:49.544 01:53:35 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:42:49.545 01:53:35 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:42:49.545 01:53:35 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:42:49.545 01:53:35 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:42:49.545 01:53:35 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:49.545 01:53:35 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:42:49.545 01:53:35 keyring_linux -- scripts/common.sh@345 -- # : 1 00:42:49.545 01:53:35 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:49.545 01:53:35 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:49.545 01:53:35 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:42:49.545 01:53:35 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:42:49.545 01:53:35 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:49.545 01:53:35 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:42:49.545 01:53:35 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:42:49.545 01:53:35 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:42:49.545 01:53:35 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:42:49.545 01:53:35 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:49.545 01:53:35 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:42:49.545 01:53:35 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:42:49.545 01:53:35 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:49.545 01:53:35 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:49.545 01:53:35 keyring_linux -- scripts/common.sh@368 -- # return 0 00:42:49.545 01:53:35 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:49.545 01:53:35 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:42:49.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:49.545 --rc genhtml_branch_coverage=1 00:42:49.545 --rc genhtml_function_coverage=1 00:42:49.545 --rc genhtml_legend=1 00:42:49.545 --rc geninfo_all_blocks=1 00:42:49.545 --rc geninfo_unexecuted_blocks=1 00:42:49.545 00:42:49.545 ' 00:42:49.545 01:53:35 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:42:49.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:49.545 --rc genhtml_branch_coverage=1 00:42:49.545 --rc genhtml_function_coverage=1 00:42:49.545 --rc genhtml_legend=1 00:42:49.545 --rc geninfo_all_blocks=1 00:42:49.545 --rc geninfo_unexecuted_blocks=1 00:42:49.545 00:42:49.545 ' 00:42:49.545 01:53:35 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:42:49.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:49.545 --rc genhtml_branch_coverage=1 00:42:49.545 --rc genhtml_function_coverage=1 00:42:49.545 --rc genhtml_legend=1 00:42:49.545 --rc geninfo_all_blocks=1 00:42:49.545 --rc geninfo_unexecuted_blocks=1 00:42:49.545 00:42:49.545 ' 00:42:49.545 01:53:35 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:42:49.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:49.545 --rc genhtml_branch_coverage=1 00:42:49.545 --rc genhtml_function_coverage=1 00:42:49.545 --rc genhtml_legend=1 00:42:49.545 --rc geninfo_all_blocks=1 00:42:49.545 --rc geninfo_unexecuted_blocks=1 00:42:49.545 00:42:49.545 ' 00:42:49.545 01:53:35 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:42:49.545 01:53:35 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:49.545 01:53:35 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:42:49.545 01:53:35 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:49.545 01:53:35 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:49.545 01:53:35 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:49.545 01:53:35 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:49.545 01:53:35 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:42:49.545 01:53:35 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:49.545 01:53:35 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:49.545 01:53:35 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:49.545 01:53:35 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:49.545 01:53:35 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:49.545 01:53:35 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:49.545 01:53:35 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:49.545 01:53:35 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:49.545 01:53:35 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:49.545 01:53:35 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:49.545 01:53:35 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:49.545 01:53:35 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:49.545 01:53:35 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:42:49.545 01:53:35 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:49.545 01:53:35 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:49.545 01:53:35 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:49.545 01:53:35 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:49.545 01:53:35 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:49.545 01:53:35 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:49.545 01:53:35 keyring_linux -- paths/export.sh@5 -- # export PATH 00:42:49.545 01:53:35 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:42:49.545 01:53:35 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:42:49.545 01:53:35 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:49.545 01:53:35 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:49.545 01:53:35 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:49.545 01:53:35 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:49.803 01:53:35 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:49.803 01:53:35 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:49.803 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:49.803 01:53:35 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:49.804 01:53:35 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:49.804 01:53:35 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:49.804 01:53:35 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:42:49.804 01:53:35 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:42:49.804 01:53:35 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:42:49.804 01:53:35 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:42:49.804 01:53:35 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:42:49.804 01:53:35 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:42:49.804 01:53:35 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:42:49.804 01:53:35 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:42:49.804 01:53:35 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:42:49.804 01:53:35 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:42:49.804 01:53:35 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:42:49.804 01:53:35 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:42:49.804 01:53:35 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:42:49.804 01:53:35 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:42:49.804 01:53:35 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:42:49.804 01:53:35 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:42:49.804 01:53:35 keyring_linux -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:42:49.804 01:53:35 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:42:49.804 01:53:35 keyring_linux -- nvmf/common.sh@731 -- # python - 00:42:49.804 01:53:35 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:42:49.804 01:53:35 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:42:49.804 /tmp/:spdk-test:key0 00:42:49.804 01:53:35 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:42:49.804 01:53:35 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:42:49.804 01:53:35 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:42:49.804 01:53:35 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:42:49.804 01:53:35 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:42:49.804 01:53:35 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:42:49.804 
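Each prep_key call above turns a raw hex key into the NVMe TLS PSK interchange string (the NVMeTLSkey-1:... value echoed later by keyctl) and parks it in a file that only the owner can read. A rough sketch of that storage step, assuming the interchange string has already been produced; the helper name is made up here, while the path and payload are copied from this run:

    # Store an already-formatted interchange PSK so only the owner can read it.
    store_psk() {
        local path=$1 psk=$2
        (umask 077 && printf '%s\n' "$psk" > "$path")   # create the file as 0600
        chmod 0600 "$path"                              # mirrors the chmod in keyring/common.sh
        echo "$path"
    }
    store_psk /tmp/:spdk-test:key0 'NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'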
01:53:35 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:42:49.804 01:53:35 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:42:49.804 01:53:35 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:42:49.804 01:53:35 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:42:49.804 01:53:35 keyring_linux -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:42:49.804 01:53:35 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:42:49.804 01:53:35 keyring_linux -- nvmf/common.sh@731 -- # python - 00:42:49.804 01:53:35 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:42:49.804 01:53:35 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:42:49.804 /tmp/:spdk-test:key1 00:42:49.804 01:53:35 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1842754 00:42:49.804 01:53:35 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:42:49.804 01:53:35 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1842754 00:42:49.804 01:53:35 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1842754 ']' 00:42:49.804 01:53:35 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:49.804 01:53:35 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:49.804 01:53:35 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:49.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:49.804 01:53:35 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:49.804 01:53:35 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:49.804 [2024-10-13 01:53:35.267737] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
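The spdk_tgt launched above is not usable until it answers on /var/tmp/spdk.sock, which is what waitforlisten polls for. A minimal way to reproduce that wait with the stock rpc.py client — a sketch, not the harness's exact retry logic:

    # Poll the target's JSON-RPC socket until it accepts requests.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for _ in $(seq 1 100); do
        if "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
            echo "spdk_tgt is listening"
            break
        fi
        sleep 0.1
    done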
00:42:49.804 [2024-10-13 01:53:35.267844] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1842754 ] 00:42:49.804 [2024-10-13 01:53:35.331374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:50.062 [2024-10-13 01:53:35.384351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:50.320 01:53:35 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:50.320 01:53:35 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:42:50.320 01:53:35 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:42:50.320 01:53:35 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:50.320 01:53:35 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:50.320 [2024-10-13 01:53:35.657972] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:50.320 null0 00:42:50.320 [2024-10-13 01:53:35.690024] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:42:50.320 [2024-10-13 01:53:35.690569] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:42:50.320 01:53:35 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:50.320 01:53:35 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:42:50.320 457135699 00:42:50.320 01:53:35 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:42:50.320 71284944 00:42:50.320 01:53:35 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1842884 00:42:50.320 01:53:35 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:42:50.320 01:53:35 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1842884 /var/tmp/bperf.sock 00:42:50.320 01:53:35 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1842884 ']' 00:42:50.320 01:53:35 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:42:50.320 01:53:35 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:50.320 01:53:35 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:42:50.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:42:50.320 01:53:35 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:50.320 01:53:35 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:50.320 [2024-10-13 01:53:35.758992] Starting SPDK v25.01-pre git sha1 bbce7a874 / DPDK 22.11.4 initialization... 
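The two keyctl add calls above place the formatted PSKs into the session keyring under the descriptions :spdk-test:key0 and :spdk-test:key1; the numbers echoed back (457135699 and 71284944) are the kernel key serials. Later checks, and the cleanup at the end of the test, resolve the description back to a serial before reading or unlinking the key, roughly as follows (description and serial taken from this run):

    # Look a key up by description in the session keyring, inspect it, then drop it.
    sn=$(keyctl search @s user :spdk-test:key0)   # -> 457135699 in this run
    keyctl print "$sn"                            # prints the NVMeTLSkey-1:... payload
    keyctl unlink "$sn" @s                        # reports "1 links removed" on success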
00:42:50.320 [2024-10-13 01:53:35.759080] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1842884 ] 00:42:50.320 [2024-10-13 01:53:35.819513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:50.320 [2024-10-13 01:53:35.869014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:50.577 01:53:35 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:50.577 01:53:35 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:42:50.577 01:53:35 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:42:50.577 01:53:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:42:50.835 01:53:36 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:42:50.835 01:53:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:42:51.094 01:53:36 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:42:51.094 01:53:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:42:51.351 [2024-10-13 01:53:36.874635] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:42:51.609 nvme0n1 00:42:51.609 01:53:36 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:42:51.609 01:53:36 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:42:51.610 01:53:36 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:42:51.610 01:53:36 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:42:51.610 01:53:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:51.610 01:53:36 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:42:51.867 01:53:37 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:42:51.868 01:53:37 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:42:51.868 01:53:37 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:42:51.868 01:53:37 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:42:51.868 01:53:37 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:42:51.868 01:53:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:51.868 01:53:37 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:42:52.126 01:53:37 keyring_linux -- keyring/linux.sh@25 -- # sn=457135699 00:42:52.126 01:53:37 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:42:52.126 01:53:37 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:42:52.126 01:53:37 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 457135699 == \4\5\7\1\3\5\6\9\9 ]] 00:42:52.126 01:53:37 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 457135699 00:42:52.126 01:53:37 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:42:52.126 01:53:37 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:42:52.126 Running I/O for 1 seconds... 00:42:53.061 9387.00 IOPS, 36.67 MiB/s 00:42:53.061 Latency(us) 00:42:53.061 [2024-10-12T23:53:38.639Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:53.061 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:42:53.061 nvme0n1 : 1.01 9393.06 36.69 0.00 0.00 13531.62 10243.03 24078.41 00:42:53.061 [2024-10-12T23:53:38.639Z] =================================================================================================================== 00:42:53.061 [2024-10-12T23:53:38.639Z] Total : 9393.06 36.69 0.00 0.00 13531.62 10243.03 24078.41 00:42:53.061 { 00:42:53.061 "results": [ 00:42:53.061 { 00:42:53.061 "job": "nvme0n1", 00:42:53.061 "core_mask": "0x2", 00:42:53.061 "workload": "randread", 00:42:53.061 "status": "finished", 00:42:53.061 "queue_depth": 128, 00:42:53.061 "io_size": 4096, 00:42:53.061 "runtime": 1.013088, 00:42:53.061 "iops": 9393.063583815028, 00:42:53.061 "mibps": 36.691654624277454, 00:42:53.061 "io_failed": 0, 00:42:53.061 "io_timeout": 0, 00:42:53.061 "avg_latency_us": 13531.61837186493, 00:42:53.061 "min_latency_us": 10243.034074074074, 00:42:53.061 "max_latency_us": 24078.41185185185 00:42:53.061 } 00:42:53.061 ], 00:42:53.061 "core_count": 1 00:42:53.061 } 00:42:53.318 01:53:38 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:42:53.318 01:53:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:42:53.575 01:53:38 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:42:53.575 01:53:38 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:42:53.575 01:53:38 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:42:53.575 01:53:38 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:42:53.575 01:53:38 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:42:53.575 01:53:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:42:53.833 01:53:39 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:42:53.833 01:53:39 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:42:53.833 01:53:39 keyring_linux -- keyring/linux.sh@23 -- # return 00:42:53.833 01:53:39 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:53.833 01:53:39 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:42:53.833 01:53:39 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:42:53.833 01:53:39 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:42:53.833 01:53:39 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:53.833 01:53:39 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:42:53.833 01:53:39 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:42:53.833 01:53:39 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:53.833 01:53:39 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:42:54.091 [2024-10-13 01:53:39.474820] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:42:54.091 [2024-10-13 01:53:39.475586] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1627990 (107): Transport endpoint is not connected 00:42:54.091 [2024-10-13 01:53:39.476578] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1627990 (9): Bad file descriptor 00:42:54.091 [2024-10-13 01:53:39.477577] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:42:54.091 [2024-10-13 01:53:39.477598] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:42:54.091 [2024-10-13 01:53:39.477613] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:42:54.091 [2024-10-13 01:53:39.477641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
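This is the negative path: :spdk-test:key1 exists in the keyring but was never registered with the target, so the TCP connection is torn down and the attach fails; the JSON-RPC request and its -5 error response are dumped next. The test wraps the call in NOT so that failure is the expected outcome. The same expectation in plain bash looks roughly like this (socket path and arguments as used above):

    # Expect the attach to fail when the PSK is not provisioned on the target side.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    if "$RPC" -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
          -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
          -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1; then
        echo "unexpected success" >&2
        exit 1
    fi
    echo "attach failed as expected"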
00:42:54.091 request: 00:42:54.091 { 00:42:54.091 "name": "nvme0", 00:42:54.091 "trtype": "tcp", 00:42:54.091 "traddr": "127.0.0.1", 00:42:54.091 "adrfam": "ipv4", 00:42:54.091 "trsvcid": "4420", 00:42:54.091 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:54.091 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:54.091 "prchk_reftag": false, 00:42:54.091 "prchk_guard": false, 00:42:54.091 "hdgst": false, 00:42:54.091 "ddgst": false, 00:42:54.091 "psk": ":spdk-test:key1", 00:42:54.091 "allow_unrecognized_csi": false, 00:42:54.091 "method": "bdev_nvme_attach_controller", 00:42:54.091 "req_id": 1 00:42:54.091 } 00:42:54.091 Got JSON-RPC error response 00:42:54.091 response: 00:42:54.091 { 00:42:54.091 "code": -5, 00:42:54.091 "message": "Input/output error" 00:42:54.091 } 00:42:54.091 01:53:39 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:42:54.091 01:53:39 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:42:54.091 01:53:39 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:42:54.091 01:53:39 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:42:54.091 01:53:39 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:42:54.091 01:53:39 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:42:54.091 01:53:39 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:42:54.091 01:53:39 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:42:54.091 01:53:39 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:42:54.091 01:53:39 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:42:54.091 01:53:39 keyring_linux -- keyring/linux.sh@33 -- # sn=457135699 00:42:54.091 01:53:39 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 457135699 00:42:54.091 1 links removed 00:42:54.091 01:53:39 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:42:54.091 01:53:39 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:42:54.091 01:53:39 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:42:54.091 01:53:39 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:42:54.091 01:53:39 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:42:54.091 01:53:39 keyring_linux -- keyring/linux.sh@33 -- # sn=71284944 00:42:54.091 01:53:39 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 71284944 00:42:54.091 1 links removed 00:42:54.091 01:53:39 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1842884 00:42:54.091 01:53:39 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1842884 ']' 00:42:54.091 01:53:39 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1842884 00:42:54.091 01:53:39 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:42:54.091 01:53:39 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:54.091 01:53:39 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1842884 00:42:54.091 01:53:39 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:42:54.091 01:53:39 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:42:54.091 01:53:39 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1842884' 00:42:54.091 killing process with pid 1842884 00:42:54.091 01:53:39 keyring_linux -- common/autotest_common.sh@969 -- # kill 1842884 00:42:54.091 Received shutdown signal, test time was about 1.000000 seconds 00:42:54.091 00:42:54.091 
Latency(us) 00:42:54.091 [2024-10-12T23:53:39.669Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:54.091 [2024-10-12T23:53:39.669Z] =================================================================================================================== 00:42:54.091 [2024-10-12T23:53:39.669Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:54.091 01:53:39 keyring_linux -- common/autotest_common.sh@974 -- # wait 1842884 00:42:54.387 01:53:39 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1842754 00:42:54.387 01:53:39 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1842754 ']' 00:42:54.387 01:53:39 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1842754 00:42:54.387 01:53:39 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:42:54.387 01:53:39 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:54.387 01:53:39 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1842754 00:42:54.387 01:53:39 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:54.387 01:53:39 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:54.387 01:53:39 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1842754' 00:42:54.387 killing process with pid 1842754 00:42:54.387 01:53:39 keyring_linux -- common/autotest_common.sh@969 -- # kill 1842754 00:42:54.387 01:53:39 keyring_linux -- common/autotest_common.sh@974 -- # wait 1842754 00:42:54.673 00:42:54.673 real 0m5.189s 00:42:54.673 user 0m10.275s 00:42:54.673 sys 0m1.643s 00:42:54.673 01:53:40 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:54.673 01:53:40 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:42:54.673 ************************************ 00:42:54.673 END TEST keyring_linux 00:42:54.673 ************************************ 00:42:54.673 01:53:40 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:42:54.673 01:53:40 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:42:54.673 01:53:40 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:42:54.673 01:53:40 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:42:54.673 01:53:40 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:42:54.673 01:53:40 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:42:54.673 01:53:40 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:42:54.673 01:53:40 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:42:54.673 01:53:40 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:42:54.673 01:53:40 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:42:54.673 01:53:40 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:42:54.673 01:53:40 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:42:54.673 01:53:40 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:42:54.673 01:53:40 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:42:54.673 01:53:40 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:42:54.673 01:53:40 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:42:54.673 01:53:40 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:42:54.673 01:53:40 -- common/autotest_common.sh@724 -- # xtrace_disable 00:42:54.673 01:53:40 -- common/autotest_common.sh@10 -- # set +x 00:42:54.673 01:53:40 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:42:54.673 01:53:40 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:42:54.673 01:53:40 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:42:54.673 01:53:40 -- common/autotest_common.sh@10 -- # set +x 00:42:56.573 INFO: APP EXITING 
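The teardown above (both keys unlinked, bdevperf pid 1842884 and spdk_tgt pid 1842754 killed) is driven by the trap cleanup EXIT installed near the top of keyring/linux.sh, so it runs on every exit path, including failures. The general shape of that pattern — function body abbreviated and illustrative, not the script's exact code:

    # Register cleanup once; bash runs it on any exit, normal or not.
    cleanup() {
        for name in :spdk-test:key0 :spdk-test:key1; do
            sn=$(keyctl search @s user "$name" 2>/dev/null) && keyctl unlink "$sn" @s
        done
        [ -n "$bperfpid" ] && kill "$bperfpid" 2>/dev/null
        [ -n "$tgtpid" ]   && kill "$tgtpid"   2>/dev/null
    }
    trap cleanup EXIT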
00:42:56.573 INFO: killing all VMs 00:42:56.573 INFO: killing vhost app 00:42:56.573 INFO: EXIT DONE 00:42:57.947 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:42:57.947 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:42:57.947 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:42:57.947 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:42:57.947 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:42:57.947 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:42:57.947 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:42:57.947 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:42:57.947 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:42:57.947 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:42:57.947 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:42:57.947 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:42:57.947 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:42:57.947 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:42:57.947 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:42:57.947 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:42:57.947 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:42:59.320 Cleaning 00:42:59.320 Removing: /var/run/dpdk/spdk0/config 00:42:59.320 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:42:59.320 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:42:59.320 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:42:59.320 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:42:59.320 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:42:59.320 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:42:59.320 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:42:59.320 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:42:59.320 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:42:59.320 Removing: /var/run/dpdk/spdk0/hugepage_info 00:42:59.320 Removing: /var/run/dpdk/spdk1/config 00:42:59.320 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:42:59.320 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:42:59.320 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:42:59.320 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:42:59.320 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:42:59.320 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:42:59.320 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:42:59.320 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:42:59.320 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:42:59.320 Removing: /var/run/dpdk/spdk1/hugepage_info 00:42:59.320 Removing: /var/run/dpdk/spdk2/config 00:42:59.320 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:42:59.320 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:42:59.320 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:42:59.320 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:42:59.320 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:42:59.320 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:42:59.320 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:42:59.320 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:42:59.320 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:42:59.320 Removing: /var/run/dpdk/spdk2/hugepage_info 00:42:59.320 Removing: /var/run/dpdk/spdk3/config 00:42:59.320 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:42:59.320 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:42:59.320 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:42:59.320 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:42:59.320 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:42:59.320 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:42:59.320 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:42:59.320 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:42:59.320 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:42:59.321 Removing: /var/run/dpdk/spdk3/hugepage_info 00:42:59.321 Removing: /var/run/dpdk/spdk4/config 00:42:59.321 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:42:59.321 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:42:59.321 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:42:59.321 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:42:59.321 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:42:59.321 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:42:59.321 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:42:59.321 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:42:59.321 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:42:59.321 Removing: /var/run/dpdk/spdk4/hugepage_info 00:42:59.321 Removing: /dev/shm/bdev_svc_trace.1 00:42:59.321 Removing: /dev/shm/nvmf_trace.0 00:42:59.321 Removing: /dev/shm/spdk_tgt_trace.pid1462228 00:42:59.321 Removing: /var/run/dpdk/spdk0 00:42:59.321 Removing: /var/run/dpdk/spdk1 00:42:59.321 Removing: /var/run/dpdk/spdk2 00:42:59.321 Removing: /var/run/dpdk/spdk3 00:42:59.321 Removing: /var/run/dpdk/spdk4 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1460541 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1461288 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1462228 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1462558 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1463245 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1463385 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1464102 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1464149 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1464391 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1465689 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1466612 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1466928 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1467128 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1467338 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1467535 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1467696 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1467854 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1468157 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1468355 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1470842 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1471012 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1471172 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1471175 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1471576 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1471604 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1471913 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1472036 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1472210 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1472216 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1472507 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1472583 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1472991 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1473145 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1473354 00:42:59.321 Removing: 
/var/run/dpdk/spdk_pid1476088 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1478602 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1485724 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1486137 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1488672 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1488876 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1491480 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1495208 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1497398 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1503700 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1509051 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1510434 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1511536 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1521927 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1524208 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1579869 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1583041 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1586862 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1590719 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1590721 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1591373 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1592033 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1592571 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1593096 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1593099 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1593356 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1593377 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1593495 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1594046 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1594701 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1595362 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1595757 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1595764 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1595910 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1596915 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1597645 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1603581 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1632326 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1635258 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1636433 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1637649 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1637789 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1637924 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1638064 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1638521 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1639822 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1640673 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1640989 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1642594 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1643018 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1643453 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1645718 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1649115 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1649116 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1649117 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1651220 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1653419 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1657558 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1680012 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1683411 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1687305 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1688132 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1689215 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1690181 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1693047 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1695361 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1699589 00:42:59.321 Removing: 
/var/run/dpdk/spdk_pid1699596 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1702488 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1702628 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1702768 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1703031 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1703036 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1704232 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1705408 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1706584 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1707759 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1708939 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1710119 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1714052 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1714891 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1716291 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1717030 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1720755 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1722603 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1726023 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1729483 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1735970 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1740324 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1740326 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1753576 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1754103 00:42:59.321 Removing: /var/run/dpdk/spdk_pid1754511 00:42:59.580 Removing: /var/run/dpdk/spdk_pid1754921 00:42:59.580 Removing: /var/run/dpdk/spdk_pid1755502 00:42:59.580 Removing: /var/run/dpdk/spdk_pid1755904 00:42:59.580 Removing: /var/run/dpdk/spdk_pid1756404 00:42:59.580 Removing: /var/run/dpdk/spdk_pid1756834 00:42:59.580 Removing: /var/run/dpdk/spdk_pid1759219 00:42:59.580 Removing: /var/run/dpdk/spdk_pid1759475 00:42:59.580 Removing: /var/run/dpdk/spdk_pid1763205 00:42:59.580 Removing: /var/run/dpdk/spdk_pid1763330 00:42:59.580 Removing: /var/run/dpdk/spdk_pid1766692 00:42:59.580 Removing: /var/run/dpdk/spdk_pid1769177 00:42:59.580 Removing: /var/run/dpdk/spdk_pid1776087 00:42:59.580 Removing: /var/run/dpdk/spdk_pid1776492 00:42:59.580 Removing: /var/run/dpdk/spdk_pid1778991 00:42:59.580 Removing: /var/run/dpdk/spdk_pid1779191 00:42:59.580 Removing: /var/run/dpdk/spdk_pid1782382 00:42:59.580 Removing: /var/run/dpdk/spdk_pid1786066 00:42:59.580 Removing: /var/run/dpdk/spdk_pid1788108 00:42:59.580 Removing: /var/run/dpdk/spdk_pid1794485 00:42:59.580 Removing: /var/run/dpdk/spdk_pid1799681 00:42:59.580 Removing: /var/run/dpdk/spdk_pid1800860 00:42:59.580 Removing: /var/run/dpdk/spdk_pid1801527 00:42:59.580 Removing: /var/run/dpdk/spdk_pid1811668 00:42:59.580 Removing: /var/run/dpdk/spdk_pid1813822 00:42:59.580 Removing: /var/run/dpdk/spdk_pid1815886 00:42:59.580 Removing: /var/run/dpdk/spdk_pid1821475 00:42:59.580 Removing: /var/run/dpdk/spdk_pid1821484 00:42:59.580 Removing: /var/run/dpdk/spdk_pid1824385 00:42:59.580 Removing: /var/run/dpdk/spdk_pid1825780 00:42:59.580 Removing: /var/run/dpdk/spdk_pid1827177 00:42:59.580 Removing: /var/run/dpdk/spdk_pid1827913 00:42:59.580 Removing: /var/run/dpdk/spdk_pid1829441 00:42:59.580 Removing: /var/run/dpdk/spdk_pid1830216 00:42:59.580 Removing: /var/run/dpdk/spdk_pid1835490 00:42:59.580 Removing: /var/run/dpdk/spdk_pid1835857 00:42:59.580 Removing: /var/run/dpdk/spdk_pid1836251 00:42:59.580 Removing: /var/run/dpdk/spdk_pid1837802 00:42:59.580 Removing: /var/run/dpdk/spdk_pid1838081 00:42:59.580 Removing: /var/run/dpdk/spdk_pid1838476 00:42:59.580 Removing: /var/run/dpdk/spdk_pid1840919 00:42:59.580 Removing: /var/run/dpdk/spdk_pid1840935 00:42:59.580 Removing: 
/var/run/dpdk/spdk_pid1842395 00:42:59.580 Removing: /var/run/dpdk/spdk_pid1842754 00:42:59.580 Removing: /var/run/dpdk/spdk_pid1842884 00:42:59.580 Clean 00:42:59.580 01:53:45 -- common/autotest_common.sh@1451 -- # return 0 00:42:59.581 01:53:45 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:42:59.581 01:53:45 -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:59.581 01:53:45 -- common/autotest_common.sh@10 -- # set +x 00:42:59.581 01:53:45 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:42:59.581 01:53:45 -- common/autotest_common.sh@730 -- # xtrace_disable 00:42:59.581 01:53:45 -- common/autotest_common.sh@10 -- # set +x 00:42:59.581 01:53:45 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:42:59.581 01:53:45 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:42:59.581 01:53:45 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:42:59.581 01:53:45 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:42:59.581 01:53:45 -- spdk/autotest.sh@394 -- # hostname 00:42:59.581 01:53:45 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:42:59.839 geninfo: WARNING: invalid characters removed from testname! 00:43:31.902 01:54:15 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:34.428 01:54:20 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:37.709 01:54:23 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:41.893 01:54:26 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:45.172 01:54:30 -- spdk/autotest.sh@402 -- # lcov --rc 
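The capture above writes this run's counters to cov_test.info; the lcov invocations that follow fold it into the baseline and then strip trees that should not count towards SPDK coverage. Condensed, with the long --rc options from the log omitted and the output paths shortened for readability, the flow is:

    # Merge baseline and post-test captures, then drop external and helper code.
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
    lcov -q -r cov_total.info '*/dpdk/*'           -o cov_total.info
    lcov -q -r cov_total.info '/usr/*'             -o cov_total.info --ignore-errors unused,unused
    lcov -q -r cov_total.info '*/examples/vmd/*'   -o cov_total.info
    lcov -q -r cov_total.info '*/app/spdk_lspci/*' -o cov_total.info
    lcov -q -r cov_total.info '*/app/spdk_top/*'   -o cov_total.info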
lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:48.453 01:54:33 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:43:52.634 01:54:37 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:43:52.634 01:54:37 -- common/autotest_common.sh@1690 -- $ [[ y == y ]] 00:43:52.634 01:54:37 -- common/autotest_common.sh@1691 -- $ lcov --version 00:43:52.634 01:54:37 -- common/autotest_common.sh@1691 -- $ awk '{print $NF}' 00:43:52.634 01:54:37 -- common/autotest_common.sh@1691 -- $ lt 1.15 2 00:43:52.634 01:54:37 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:43:52.634 01:54:37 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:43:52.634 01:54:37 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:43:52.634 01:54:37 -- scripts/common.sh@336 -- $ IFS=.-: 00:43:52.634 01:54:37 -- scripts/common.sh@336 -- $ read -ra ver1 00:43:52.634 01:54:37 -- scripts/common.sh@337 -- $ IFS=.-: 00:43:52.634 01:54:37 -- scripts/common.sh@337 -- $ read -ra ver2 00:43:52.634 01:54:37 -- scripts/common.sh@338 -- $ local 'op=<' 00:43:52.634 01:54:37 -- scripts/common.sh@340 -- $ ver1_l=2 00:43:52.634 01:54:37 -- scripts/common.sh@341 -- $ ver2_l=1 00:43:52.634 01:54:37 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:43:52.634 01:54:37 -- scripts/common.sh@344 -- $ case "$op" in 00:43:52.634 01:54:37 -- scripts/common.sh@345 -- $ : 1 00:43:52.634 01:54:37 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:43:52.634 01:54:37 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:52.634 01:54:37 -- scripts/common.sh@365 -- $ decimal 1 00:43:52.634 01:54:37 -- scripts/common.sh@353 -- $ local d=1 00:43:52.634 01:54:37 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:43:52.634 01:54:37 -- scripts/common.sh@355 -- $ echo 1 00:43:52.634 01:54:37 -- scripts/common.sh@365 -- $ ver1[v]=1 00:43:52.634 01:54:37 -- scripts/common.sh@366 -- $ decimal 2 00:43:52.634 01:54:37 -- scripts/common.sh@353 -- $ local d=2 00:43:52.634 01:54:37 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:43:52.634 01:54:37 -- scripts/common.sh@355 -- $ echo 2 00:43:52.634 01:54:37 -- scripts/common.sh@366 -- $ ver2[v]=2 00:43:52.634 01:54:37 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:43:52.634 01:54:37 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:43:52.634 01:54:37 -- scripts/common.sh@368 -- $ return 0 00:43:52.634 01:54:37 -- common/autotest_common.sh@1692 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:52.634 01:54:37 -- common/autotest_common.sh@1704 -- $ export 'LCOV_OPTS= 00:43:52.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:52.634 --rc genhtml_branch_coverage=1 00:43:52.634 --rc genhtml_function_coverage=1 00:43:52.634 --rc genhtml_legend=1 00:43:52.634 --rc geninfo_all_blocks=1 00:43:52.634 --rc geninfo_unexecuted_blocks=1 00:43:52.634 00:43:52.634 ' 00:43:52.634 01:54:37 -- common/autotest_common.sh@1704 -- $ LCOV_OPTS=' 00:43:52.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:52.634 --rc genhtml_branch_coverage=1 00:43:52.634 --rc genhtml_function_coverage=1 00:43:52.634 --rc genhtml_legend=1 00:43:52.634 --rc geninfo_all_blocks=1 00:43:52.634 --rc geninfo_unexecuted_blocks=1 00:43:52.634 00:43:52.634 ' 00:43:52.634 01:54:37 -- common/autotest_common.sh@1705 -- $ export 'LCOV=lcov 00:43:52.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:52.634 --rc genhtml_branch_coverage=1 00:43:52.634 --rc genhtml_function_coverage=1 00:43:52.634 --rc genhtml_legend=1 00:43:52.634 --rc geninfo_all_blocks=1 00:43:52.634 --rc geninfo_unexecuted_blocks=1 00:43:52.634 00:43:52.634 ' 00:43:52.634 01:54:37 -- common/autotest_common.sh@1705 -- $ LCOV='lcov 00:43:52.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:52.635 --rc genhtml_branch_coverage=1 00:43:52.635 --rc genhtml_function_coverage=1 00:43:52.635 --rc genhtml_legend=1 00:43:52.635 --rc geninfo_all_blocks=1 00:43:52.635 --rc geninfo_unexecuted_blocks=1 00:43:52.635 00:43:52.635 ' 00:43:52.635 01:54:37 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:52.635 01:54:37 -- scripts/common.sh@15 -- $ shopt -s extglob 00:43:52.635 01:54:37 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:43:52.635 01:54:37 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:52.635 01:54:37 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:52.635 01:54:37 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:52.635 01:54:37 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:52.635 01:54:37 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:52.635 01:54:37 -- paths/export.sh@5 -- $ export PATH 00:43:52.635 01:54:37 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:52.635 01:54:37 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:43:52.635 01:54:37 -- common/autobuild_common.sh@486 -- $ date +%s 00:43:52.635 01:54:37 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728777277.XXXXXX 00:43:52.635 01:54:37 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728777277.ojgkz5 00:43:52.635 01:54:37 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:43:52.635 01:54:37 -- common/autobuild_common.sh@492 -- $ '[' -n v22.11.4 ']' 00:43:52.635 01:54:37 -- common/autobuild_common.sh@493 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:43:52.635 01:54:37 -- common/autobuild_common.sh@493 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:43:52.635 01:54:37 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:43:52.635 01:54:37 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:43:52.635 01:54:37 -- common/autobuild_common.sh@502 -- $ get_config_params 00:43:52.635 01:54:37 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:43:52.635 01:54:37 -- common/autotest_common.sh@10 -- $ set +x 00:43:52.635 01:54:37 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:43:52.635 01:54:37 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:43:52.635 01:54:37 -- pm/common@17 -- $ local monitor 00:43:52.635 01:54:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:52.635 01:54:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:52.635 01:54:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:52.635 
01:54:37 -- pm/common@21 -- $ date +%s 00:43:52.635 01:54:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:52.635 01:54:37 -- pm/common@21 -- $ date +%s 00:43:52.635 01:54:37 -- pm/common@25 -- $ sleep 1 00:43:52.635 01:54:37 -- pm/common@21 -- $ date +%s 00:43:52.635 01:54:37 -- pm/common@21 -- $ date +%s 00:43:52.635 01:54:37 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728777277 00:43:52.635 01:54:37 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728777277 00:43:52.635 01:54:37 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728777277 00:43:52.635 01:54:37 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728777277 00:43:52.635 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728777277_collect-vmstat.pm.log 00:43:52.635 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728777277_collect-cpu-load.pm.log 00:43:52.635 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728777277_collect-cpu-temp.pm.log 00:43:52.635 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728777277_collect-bmc-pm.bmc.pm.log 00:43:53.202 01:54:38 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:43:53.202 01:54:38 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:43:53.202 01:54:38 -- spdk/autopackage.sh@14 -- $ timing_finish 00:43:53.202 01:54:38 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:43:53.202 01:54:38 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:43:53.202 01:54:38 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:43:53.202 01:54:38 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:43:53.202 01:54:38 -- pm/common@29 -- $ signal_monitor_resources TERM 00:43:53.202 01:54:38 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:43:53.202 01:54:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:53.202 01:54:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:43:53.202 01:54:38 -- pm/common@44 -- $ pid=1855744 00:43:53.202 01:54:38 -- pm/common@50 -- $ kill -TERM 1855744 00:43:53.202 01:54:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:53.202 01:54:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:43:53.202 01:54:38 -- pm/common@44 -- $ pid=1855746 00:43:53.202 01:54:38 -- pm/common@50 -- $ kill -TERM 1855746 00:43:53.203 01:54:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:53.203 
01:54:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:43:53.203 01:54:38 -- pm/common@44 -- $ pid=1855748 00:43:53.203 01:54:38 -- pm/common@50 -- $ kill -TERM 1855748 00:43:53.203 01:54:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:43:53.203 01:54:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:43:53.203 01:54:38 -- pm/common@44 -- $ pid=1855779 00:43:53.203 01:54:38 -- pm/common@50 -- $ sudo -E kill -TERM 1855779 00:43:53.203 + [[ -n 1369595 ]] 00:43:53.203 + sudo kill 1369595 00:43:53.212 [Pipeline] } 00:43:53.227 [Pipeline] // stage 00:43:53.232 [Pipeline] } 00:43:53.246 [Pipeline] // timeout 00:43:53.251 [Pipeline] } 00:43:53.265 [Pipeline] // catchError 00:43:53.270 [Pipeline] } 00:43:53.286 [Pipeline] // wrap 00:43:53.292 [Pipeline] } 00:43:53.305 [Pipeline] // catchError 00:43:53.315 [Pipeline] stage 00:43:53.317 [Pipeline] { (Epilogue) 00:43:53.330 [Pipeline] catchError 00:43:53.332 [Pipeline] { 00:43:53.344 [Pipeline] echo 00:43:53.346 Cleanup processes 00:43:53.351 [Pipeline] sh 00:43:53.636 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:43:53.636 1855947 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:43:53.636 1856056 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:43:53.649 [Pipeline] sh 00:43:53.974 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:43:53.974 ++ grep -v 'sudo pgrep' 00:43:53.974 ++ awk '{print $1}' 00:43:53.974 + sudo kill -9 1855947 00:43:54.010 [Pipeline] sh 00:43:54.292 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:44:06.567 [Pipeline] sh 00:44:06.850 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:44:06.850 Artifacts sizes are good 00:44:06.864 [Pipeline] archiveArtifacts 00:44:06.871 Archiving artifacts 00:44:07.013 [Pipeline] sh 00:44:07.294 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:44:07.307 [Pipeline] cleanWs 00:44:07.317 [WS-CLEANUP] Deleting project workspace... 00:44:07.317 [WS-CLEANUP] Deferred wipeout is used... 00:44:07.324 [WS-CLEANUP] done 00:44:07.326 [Pipeline] } 00:44:07.343 [Pipeline] // catchError 00:44:07.354 [Pipeline] sh 00:44:07.634 + logger -p user.info -t JENKINS-CI 00:44:07.642 [Pipeline] } 00:44:07.666 [Pipeline] // stage 00:44:07.671 [Pipeline] } 00:44:07.685 [Pipeline] // node 00:44:07.690 [Pipeline] End of Pipeline 00:44:07.741 Finished: SUCCESS